Dataset schema: id (string, 10 characters), title (string, 12–156 characters), abstract (string, 279–2.02k characters), full_text (sequence), qas (sequence), figures_and_tables (sequence).
1905.08949
Recent Advances in Neural Question Generation
Emerging research in Neural Question Generation (NQG) has started to integrate a larger variety of inputs and to generate questions requiring higher levels of cognition. These trends point to NQG as a bellwether for NLP, reflecting how human intelligence embodies the skills of curiosity and integration. We present a comprehensive survey of neural question generation, examining the corpora, methodologies, and evaluation methods. From this, we elaborate on what we see as NQG's emerging trends in terms of the learning paradigms, input modalities, and cognitive levels it considers. We end by pointing out potential directions ahead.
{ "section_name": [ "Introduction", "Fundamental Aspects of NQG", "Learning Paradigm", "Input Modality", "Cognitive Levels", "Corpora", "Evaluation Metrics", "Methodology", "Encoding Answers", "Question Word Generation", "Paragraph-level Contexts", "Answer-unaware QG", "Technical Considerations", "The State of the Art", "Emerging Trends", "Multi-task Learning", "Wider Input Modalities", "Generation of Deep Questions", "Conclusion – What's the Outlook?" ], "paragraphs": [ [ "Question Generation (QG) concerns the task of “automatically generating questions from various inputs such as raw text, database, or semantic representation\" BIBREF0 . People have the ability to ask rich, creative, and revealing questions BIBREF1 ; e.g., asking Why did Gollum betray his master Frodo Baggins? after reading the fantasy novel The Lord of the Rings. How can machines be endowed with the ability to ask relevant and to-the-point questions, given various inputs? This is a challenging, complementary task to Question Answering (QA). Both QA and QG require an in-depth understanding of the input source and the ability to reason over relevant contexts. But beyond understanding, QG additionally integrates the challenges of Natural Language Generation (NLG), i.e., generating grammatically and semantically correct questions.", "QG is of practical importance: in education, forming good questions are crucial for evaluating students’ knowledge and stimulating self-learning. QG can generate assessments for course materials BIBREF2 or be used as a component in adaptive, intelligent tutoring systems BIBREF3 . In dialog systems, fluent QG is an important skill for chatbots, e.g., in initiating conversations or obtaining specific information from human users. QA and reading comprehension also benefit from QG, by reducing the needed human labor for creating large-scale datasets. We can say that traditional QG mainly focused on generating factoid questions from a single sentence or a paragraph, spurred by a series of workshops during 2008–2012 BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 .", "Recently, driven by advances in deep learning, QG research has also begun to utilize “neural” techniques, to develop end-to-end neural models to generate deeper questions BIBREF8 and to pursue broader applications BIBREF9 , BIBREF10 .", "While there have been considerable advances made in NQG, the area lacks a comprehensive survey. This paper fills this gap by presenting a systematic survey on recent development of NQG, focusing on three emergent trends that deep learning has brought in QG: (1) the change of learning paradigm, (2) the broadening of the input spectrum, and (3) the generation of deep questions." ], [ "For the sake of clean exposition, we first provide a broad overview of QG by conceptualizing the problem from the perspective of the three introduced aspects: (1) its learning paradigm, (2) its input modalities, and (3) the cognitive level it involves. This combines past research with recent trends, providing insights on how NQG connects to traditional QG research." ], [ "QG research traditionally considers two fundamental aspects in question asking: “What to ask” and “How to ask”. A typical QG task considers the identification of the important aspects to ask about (“what to ask”), and learning to realize such identified aspects as natural language (“how to ask”). 
Deciding what to ask is a form of machine understanding: a machine needs to capture important information dependent on the target application, akin to automatic summarization. Learning how to ask, however, focuses on aspects of the language quality such as grammatical correctness, semantically preciseness and language flexibility.", "Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction. Given a sentence or a paragraph as input, content selection selects a particular salient topic worthwhile to ask about and determines the question type (What, When, Who, etc.). Approaches either take a syntactic BIBREF11 , BIBREF12 , BIBREF13 or semantic BIBREF14 , BIBREF3 , BIBREF15 , BIBREF16 tack, both starting by applying syntactic or semantic parsing, respectively, to obtain intermediate symbolic representations. Question construction then converts intermediate representations to a natural language question, taking either a tranformation- or template-based approach. The former BIBREF17 , BIBREF18 , BIBREF13 rearranges the surface form of the input sentence to produce the question; the latter BIBREF19 , BIBREF20 , BIBREF21 generates questions from pre-defined question templates. Unfortunately, such QG architectures are limiting, as their representation is confined to the variety of intermediate representations, transformation rules or templates.", "In contrast, neural models motivate an end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in an unified framework. The majority of current NQG models follow the sequence-to-sequence (Seq2Seq) framework that use a unified representation and joint learning of content selection (via the encoder) and question construction (via the decoder). In this framework, traditional parsing-based content selection has been replaced by more flexible approaches such as attention BIBREF22 and copying mechanism BIBREF23 . Question construction has become completely data-driven, requiring far less labor compared to transformation rules, enabling better language flexibility compared to question templates.", "However, unlike other Seq2Seq learning NLG tasks, such as Machine Translation, Image Captioning, and Abstractive Summarization, which can be loosely regarded as learning a one-to-one mapping, generated questions can differ significantly when the intent of asking differs (e.g., the target answer, the target aspect to ask about, and the question's depth). In Section \"Methodology\" , we summarize different NQG methodologies based on Seq2Seq framework, investigating how some of these QG-specific factors are integrated with neural models, and discussing what could be further explored. The change of learning paradigm in NQG era is also represented by multi-task learning with other NLP tasks, for which we discuss in Section \"Multi-task Learning\" ." ], [ "Question generation is an NLG task for which the input has a wealth of possibilities depending on applications. 
While a host of input modalities have been considered in other NLG tasks, such as text summarization BIBREF24 , image captioning BIBREF25 and table-to-text generation BIBREF26 , traditional QG mainly focused on textual inputs, especially declarative sentences, explained by the original application domains of question answering and education, which also typically featured textual inputs.", "Recently, with the growth of various QA applications such as Knowledge Base Question Answering (KBQA) BIBREF27 and Visual Question Answering (VQA) BIBREF28 , NQG research has also widened the spectrum of sources to include knowledge bases BIBREF29 and images BIBREF10 . This trend is also spurred by the remarkable success of neural models in feature representation, especially on image features BIBREF30 and knowledge representations BIBREF31 . We discuss adapting NQG models to other input modalities in Section \"Wider Input Modalities\" ." ], [ "Finally, we consider the required cognitive process behind question asking, a distinguishing factor for questions BIBREF32 . A typical framework that attempts to categorize the cognitive levels involved in question asking comes from Bloom's taxonomy BIBREF33 , which has undergone several revisions and currently has six cognitive levels: Remembering, Understanding, Applying, Analyzing, Evaluating and Creating BIBREF32 .", "Traditional QG focuses on shallow levels of Bloom's taxonomy: typical QG research is on generating sentence-based factoid questions (e.g., Who, What, Where questions), whose answers are simple constituents in the input sentence BIBREF2 , BIBREF13 . However, a QG system achieving human cognitive level should be able to generate meaningful questions that cater to higher levels of Bloom's taxonomy BIBREF34 , such as Why, What-if, and How questions. Traditionally, those “deep” questions are generated through shallow methods such as handcrafted templates BIBREF20 , BIBREF21 ; however, these methods lack a real understanding and reasoning over the input.", "Although asking deep questions is complex, NQG's ability to generalize over voluminous data has enabled recent research to explore the comprehension and reasoning aspects of QG BIBREF35 , BIBREF1 , BIBREF8 , BIBREF34 . We investigate this trend in Section \"Generation of Deep Questions\" , examining the limitations of current Seq2Seq model in generating deep questions, and the efforts made by existing works, indicating further directions ahead.", "The rest of this paper provides a systematic survey of NQG, covering corpus and evaluation metrics before examining specific neural models." ], [ "As QG can be regarded as a dual task of QA, in principle any QA dataset can be used for QG as well. However, there are at least two corpus-related factors that affect the difficulty of question generation. The first is the required cognitive level to answer the question, as we discussed in the previous section. Current NQG has achieved promising results on datasets consisting mainly of shallow factoid questions, such as SQuAD BIBREF36 and MS MARCO BIBREF38 . However, the performance drops significantly on deep question datasets, such as LearningQ BIBREF8 , shown in Section \"Generation of Deep Questions\" . 
The second factor is the answer type, i.e., the expected form of the answer, typically having four settings: (1) the answer is a text span in the passage, which is usually the case for factoid questions, (2) human-generated, abstractive answer that may not appear in the passage, usually the case for deep questions, (3) multiple choice question where question and its distractors should be jointly generated, and (4) no given answer, which requires the model to automatically learn what is worthy to ask. The design of NQG system differs accordingly.", "Table 1 presents a listing of the NQG corpora grouped by their cognitive level and answer type, along with their statistics. Among them, SQuAD was used by most groups as the benchmark to evaluate their NQG models. This provides a fair comparison between different techniques. However, it raises the issue that most NQG models work on factoid questions with answer as text span, leaving other types of QG problems less investigated, such as generating deep multi-choice questions. To overcome this, a wider variety of corpora should be benchmarked against in future NQG research." ], [ "Although the datasets are commonly shared between QG and QA, it is not the case for evaluation: it is challenging to define a gold standard of proper questions to ask. Meaningful, syntactically correct, semantically sound and natural are all useful criteria, yet they are hard to quantify. Most QG systems involve human evaluation, commonly by randomly sampling a few hundred generated questions, and asking human annotators to rate them on a 5-point Likert scale. The average rank or the percentage of best-ranked questions are reported and used for quality marks.", "As human evaluation is time-consuming, common automatic evaluation metrics for NLG, such as BLEU BIBREF41 , METEOR BIBREF42 , and ROUGE BIBREF43 , are also widely used. However, some studies BIBREF44 , BIBREF45 have shown that these metrics do not correlate well with fluency, adequacy, coherence, as they essentially compute the $n$ -gram similarity between the source sentence and the generated question. To overcome this, BIBREF46 proposed a new metric to evaluate the “answerability” of a question by calculating the scores for several question-specific factors, including question type, content words, function words, and named entities. However, as it is newly proposed, it has not been applied to evaluate any NQG system yet.", "To accurately measure what makes a good question, especially deep questions, improved evaluation schemes are required to specifically investigate the mechanism of question asking." ], [ "Many current NQG models follow the Seq2Seq architecture. Under this framework, given a passage (usually a sentence) $X = (x_1, \\cdots , x_n)$ and (possibly) a target answer $A$ (a text span in the passage) as input, an NQG model aims to generate a question $Y = (y_1, \\cdots , y_m)$ asking about the target answer $A$ in the passage $X$ , which is defined as finding the best question $\\bar{Y}$ that maximizes the conditional likelihood given the passage $X$ and the answer $A$ :", "$$\\bar{Y} & = \\arg \\max _Y P(Y \\vert X, A) \\\\\n\\vspace{-14.22636pt}\n& = \\arg \\max _Y \\sum _{t=1}^m P(y_t \\vert X, A, y_{< t})$$ (Eq. 5) ", " BIBREF47 pioneered the first NQG model using an attention Seq2Seq model BIBREF22 , which feeds a sentence into an RNN-based encoder, and generate a question about the sentence through a decoder. 
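As a concrete illustration of the factorized objective in Eq. (5), below is a minimal sketch (not any particular published system) of an attention-based Seq2Seq QG model trained with teacher forcing. For simplicity it conditions only on the passage and ignores the target answer; the layer sizes and the toy batch are illustrative assumptions.

```python
# Minimal sketch of the Seq2Seq training objective in Eq. (5): maximize the
# log-likelihood of the question tokens given the passage, factorized per step.
# All sizes, names and the toy batch below are illustrative assumptions.
import torch
import torch.nn as nn

class TinySeq2SeqQG(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim, hid_dim)          # dot-style attention projection
        self.out = nn.Linear(2 * hid_dim, vocab_size)

    def forward(self, passage_ids, question_ids):
        enc_out, enc_h = self.encoder(self.embed(passage_ids))
        dec_out, _ = self.decoder(self.embed(question_ids[:, :-1]), enc_h)
        # Attention: each decoder state attends over the encoder states.
        scores = torch.bmm(self.attn(dec_out), enc_out.transpose(1, 2))
        ctx = torch.bmm(torch.softmax(scores, dim=-1), enc_out)
        logits = self.out(torch.cat([dec_out, ctx], dim=-1))
        # Negative log-likelihood of the gold question under teacher forcing,
        # i.e. -sum_t log P(y_t | X, y_<t).
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            question_ids[:, 1:].reshape(-1))

model = TinySeq2SeqQG(vocab_size=1000)
passage = torch.randint(0, 1000, (2, 20))   # toy batch: 2 passages of 20 tokens
question = torch.randint(0, 1000, (2, 8))   # toy gold questions of 8 tokens
loss = model(passage, question)
loss.backward()
print(float(loss))
```

At inference time, the gold question is replaced by the model's own previous predictions, decoded greedily or with beam search over the same logits.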
The attention mechanism is applied to help decoder pay attention to the most relevant parts of the input sentence while generating a question. Note that this base model does not take the target answer as input. Subsequently, neural models have adopted attention mechanism as a default BIBREF48 , BIBREF49 , BIBREF50 .", "Although these NQG models all share the Seq2Seq framework, they differ in the consideration of — (1) QG-specific factors (e.g., answer encoding, question word generation, and paragraph-level contexts), and (2) common NLG techniques (e.g., copying mechanism, linguistic features, and reinforcement learning) — discussed next." ], [ "The most commonly considered factor by current NQG systems is the target answer, which is typically taken as an additional input to guide the model in deciding which information to focus on when generating; otherwise, the NQG model tend to generate questions without specific target (e.g., “What is mentioned?\"). Models have solved this by either treating the answer's position as an extra input feature BIBREF48 , BIBREF51 , or by encoding the answer with a separate RNN BIBREF49 , BIBREF52 .", "The first type of method augments each input word vector with an extra answer indicator feature, indicating whether this word is within the answer span. BIBREF48 implement this feature using the BIO tagging scheme, while BIBREF50 directly use a binary indicator. In addition to the target answer, BIBREF53 argued that the context words closer to the answer also deserve more attention from the model, since they are usually more relevant. To this end, they incorporate trainable position embeddings $(d_{p_1}, d_{p_2}, \\cdots , d_{p_n})$ into the computation of attention distribution, where $p_i$ is the relative distance between the $i$ -th word and the answer, and $d_{p_i}$ is the embedding of $p_i$ . This achieved an extra BLEU-4 gain of $0.89$ on SQuAD.", "To generate answer-related questions, extra answer indicators explicitly emphasize the importance of answer; however, it also increases the tendency that generated questions include words from the answer, resulting in useless questions, as observed by BIBREF52 . For example, given the input “John Francis O’Hara was elected president of Notre Dame in 1934.\", an improperly generated question would be “Who was elected John Francis?\", which exposes some words in the answer. To address this, they propose to replace the answer into a special token for passage encoding, and a separate RNN is used to encode the answer. The outputs from two encoders are concatenated as inputs to the decoder. BIBREF54 adopted a similar idea that separately encodes passage and answer, but they instead use the multi-perspective matching between two encodings as an extra input to the decoder.", "We forecast treating the passage and the target answer separately as a future trend, as it results in a more flexible model, which generalizes to the abstractive case when the answer is not a text span in the input passage. However, this inevitably increases the model complexity and difficulty in training." ], [ "Question words (e.g., “when”, “how”, and “why”) also play a vital role in QG; BIBREF53 observed that the mismatch between generated question words and answer type is common for current NQG systems. For example, a when-question should be triggered for answer “the end of the Mexican War\" while a why-question is generated by the model. 
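The answer-encoding features above can be made concrete with a small sketch: a BIO-style answer indicator per token plus the relative distance of each token to the answer span, i.e. the index with which the trainable position embeddings $d_{p_i}$ would be looked up. The tag names and the whitespace tokenization are illustrative assumptions.

```python
# Minimal sketch of the answer-indicator features: a BIO-style answer tag per
# token and the relative distance of each token to the answer span.
# Tokenization and the example sentence are simplifying assumptions.
def answer_features(tokens, ans_start, ans_end):
    """ans_start/ans_end: inclusive token indices of the answer span."""
    bio, rel_dist = [], []
    for i, _ in enumerate(tokens):
        if i == ans_start:
            bio.append("B-ANS")
        elif ans_start < i <= ans_end:
            bio.append("I-ANS")
        else:
            bio.append("O")
        # Relative distance to the nearest answer token (0 inside the span).
        if i < ans_start:
            rel_dist.append(ans_start - i)
        elif i > ans_end:
            rel_dist.append(i - ans_end)
        else:
            rel_dist.append(0)
    return bio, rel_dist

tokens = "John Francis O'Hara was elected president of Notre Dame in 1934 .".split()
bio, dist = answer_features(tokens, ans_start=0, ans_end=2)  # answer: "John Francis O'Hara"
for tok, b, d in zip(tokens, bio, dist):
    print(f"{tok:10s} {b:6s} {d}")
```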
A few works BIBREF49 , BIBREF53 considered question word generation separately in model design.", " BIBREF49 proposed to first generate a question template that contains question word (e.g., “how to #\", where # is the placeholder), before generating the rest of the question. To this end, they train two Seq2Seq models; the former learns to generate question templates for a given text , while the latter learns to fill the blank of template to form a complete question. Instead of a two-stage framework, BIBREF53 proposed a more flexible model by introducing an additional decoding mode that generates the question word. When entering this mode, the decoder produces a question word distribution based on a restricted set of vocabulary using the answer embedding, the decoder state, and the context vector. The switch between different modes is controlled by a discrete variable produced by a learnable module of the model in each decoding step.", "Determining the appropriate question word harks back to question type identification, which is correlated with the question intention, as different intents may yield different questions, even when presented with the same (passage, answer) input pair. This points to the direction of exploring question pragmatics, where external contextual information (such as intent) can inform and influence how questions should optimally be generated." ], [ "Leveraging rich paragraph-level contexts around the input text is another natural consideration to produce better questions. According to BIBREF47 , around 20% of questions in SQuAD require paragraph-level information to be answered. However, as input texts get longer, Seq2Seq models have a tougher time effectively utilizing relevant contexts, while avoiding irrelevant information.", "To address this challenge, BIBREF51 proposed a gated self-attention encoder to refine the encoded context by fusing important information with the context's self-representation properly, which has achieved state-of-the-art results on SQuAD. The long passage consisting of input texts and its context is first embedded via LSTM with answer position as an extra feature. The encoded representation is then fed through a gated self-matching network BIBREF55 to aggregate information from the entire passage and embed intra-passage dependencies. Finally, a feature fusion gate BIBREF56 chooses relevant information between the original and self-matching enhanced representations.", "Instead of leveraging the whole context, BIBREF57 performed a pre-filtering by running a coreference resolution system on the context passage to obtain coreference clusters for both the input sentence and the answer. The co-referred sentences are then fed into a gating network, from which the outputs serve as extra features to be concatenated with the original input vectors." ], [ "The aforementioned models require the target answer as an input, in which the answer essentially serves as the focus of asking. However, in the case that only the input passage is given, a QG system should automatically identify question-worthy parts within the passage. This task is synonymous with content selection in traditional QG. To date, only two works BIBREF58 , BIBREF59 have worked in this setting. They both follow the traditional decomposition of QG into content selection and question construction but implement each task using neural networks. 
For content selection, BIBREF58 learn a sentence selection task to identify question-worthy sentences from the input paragraph using a neural sequence tagging model. BIBREF59 train a neural keyphrase extractor to predict keyphrases of the passage. For question construction, they both employed the Seq2Seq model, for which the input is either the selected sentence or the input passage with keyphrases as target answer.", "However, learning what aspect to ask about is quite challenging when the question requires reasoning over multiple pieces of information within the passage; cf the Gollum question from the introduction. Beyond retrieving question-worthy information, we believe that studying how different reasoning patterns (e.g., inductive, deductive, causal and analogical) affects the generation process will be an aspect for future study." ], [ "Common techniques of NLG have also been considered in NQG model, summarized as 3 tactics:", "1. Copying Mechanism. Most NQG models BIBREF48 , BIBREF60 , BIBREF61 , BIBREF50 , BIBREF62 employ the copying mechanism of BIBREF23 , which directly copies relevant words from the source sentence to the question during decoding. This idea is widely accepted as it is common to refer back to phrases and entities appearing in the text when formulating factoid questions, and difficult for a RNN decoder to generate such rare words on its own.", "2. Linguistic Features. Approaches also seek to leverage additional linguistic features that complements word embeddings, including word case, POS and NER tags BIBREF48 , BIBREF61 as well as coreference BIBREF50 and dependency information BIBREF62 . These categorical features are vectorized and concatenated with word embeddings. The feature vectors can be either one-hot or trainable and serve as input to the encoder.", "3. Policy Gradient. Optimizing for just ground-truth log likelihood ignores the many equivalent ways of asking a question. Relevant QG work BIBREF60 , BIBREF63 have adopted policy gradient methods to add task-specific rewards (such as BLEU or ROUGE) to the original objective. This helps to diversify the questions generated, as the model learns to distribute probability mass among equivalent expressions rather than the single ground truth question." ], [ "In Table 2 , we summarize existing NQG models with their employed techniques and their best-reported performance on SQuAD. These methods achieve comparable results; as of this writing, BIBREF51 is the state-of-the-art.", "Two points deserve mention. First, while the copying mechanism has shown marked improvements, there exist shortcomings. BIBREF52 observed many invalid answer-revealing questions attributed to the use of the copying mechanism; cf the John Francis example in Section \"Emerging Trends\" . They abandoned copying but still achieved a performance rivaling other systems. In parallel application areas such as machine translation, the copy mechanism has been to a large extent replaced with self-attention BIBREF64 or transformer BIBREF65 . The future prospect of the copying mechanism requires further investigation. Second, recent approaches that employ paragraph-level contexts have shown promising results: not only boosting performance, but also constituting a step towards deep question generation, which requires reasoning over rich contexts." ], [ "We discuss three trends that we wish to call practitioners' attention to as NQG evolves to take the center stage in QG: Multi-task Learning, Wider Input Modalities and Deep Question Generation." 
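Before moving to these trends, the copying mechanism referenced throughout the sections above can be sketched in a pointer-generator-style form, one common realization: the decoder's vocabulary distribution and its attention over source tokens are mixed by a generation probability, so rare source words can be copied directly. The tiny vocabulary, attention weights and mixing weight below are illustrative assumptions.

```python
# Minimal numpy sketch of one copying step: the final word distribution mixes
# the decoder's vocabulary distribution with the attention weights over the
# source tokens. Vocabulary, source sentence and probabilities are toy values.
import numpy as np

vocab = ["<unk>", "who", "was", "elected", "president", "of", "in", "?"]
source_tokens = ["notre", "dame", "elected", "o'hara", "in", "1934"]

def copy_step(p_vocab, attn, p_gen):
    """Mix generation and copying into one distribution over vocab + source words."""
    ext_vocab = vocab + [w for w in source_tokens if w not in vocab]
    p_final = np.zeros(len(ext_vocab))
    p_final[:len(vocab)] = p_gen * p_vocab           # generate from the fixed vocabulary
    for pos, w in enumerate(source_tokens):          # copy probability mass from attention
        p_final[ext_vocab.index(w)] += (1.0 - p_gen) * attn[pos]
    return ext_vocab, p_final

p_vocab = np.full(len(vocab), 1.0 / len(vocab))      # toy decoder output (uniform)
attn = np.array([0.05, 0.05, 0.1, 0.6, 0.1, 0.1])    # toy attention, peaked on "o'hara"
ext_vocab, p_final = copy_step(p_vocab, attn, p_gen=0.3)
print(ext_vocab[int(np.argmax(p_final))])            # most likely output: a copied rare word
```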
], [ "As QG has become more mature, work has started to investigate how QG can assist in other NLP tasks, and vice versa. Some NLP tasks benefit from enriching training samples by QG to alleviate the data shortage problem. This idea has been successfully applied to semantic parsing BIBREF66 and QA BIBREF67 . In the semantic parsing task that maps a natural language question to a SQL query, BIBREF66 achieved a 3 $\\%$ performance gain with an enlarged training set that contains pseudo-labeled $(SQL, question)$ pairs generated by a Seq2Seq QG model. In QA, BIBREF67 employed the idea of self-training BIBREF68 to jointly learn QA and QG. The QA and QG models are first trained on a labeled corpus. Then, the QG model is used to create more questions from an unlabeled text corpus and the QA model is used to answer these newly-created questions. The newly-generated question–answer pairs form an enlarged dataset to iteratively retrain the two models. The process is repeated while performance of both models improve.", "Investigating the core aspect of QG, we say that a well-trained QG system should have the ability to: (1) find the most salient information in the passage to ask questions about, and (2) given this salient information as target answer, to generate an answer related question. BIBREF69 leveraged the first characteristic to improve text summarization by performing multi-task learning of summarization with QG, as both these two tasks require the ability to search for salient information in the passage. BIBREF49 applied the second characteristic to improve QA. For an input question $q$ and a candidate answer $\\hat{a}$ , they generate a question $\\hat{q}$ for $\\hat{a}$ by way of QG system. Since the generated question $\\hat{q}$ is closely related to $\\hat{a}$ , the similarity between $q$ and $\\hat{q}$ helps to evaluate whether $\\hat{a}$ is the correct answer.", "Other works focus on jointly training to combine QG and QA. BIBREF70 simultaneously train the QG and QA models in the same Seq2Seq model by alternating input data between QA and QG examples. BIBREF71 proposed a training algorithm that generalizes Generative Adversarial Network (GANs) BIBREF72 under the question answering scenario. The model improves QG by incorporating an additional QA-specific loss, and improving QA performance by adding artificially generated training instances from QG. However, while joint training has shown some effectiveness, due to the mixed objectives, its performance on QG are lower than the state-of-the-art results, which leaves room for future exploration." ], [ "QG work now has incorporated input from knowledge bases (KBQG) and images (VQG).", "Inspired by the use of SQuAD as a question benchmark, BIBREF9 created a 30M large-scale dataset of (KB triple, question) pairs to spur KBQG work. They baselined an attention seq2seq model to generate the target factoid question. Due to KB sparsity, many entities and predicates are unseen or rarely seen at training time. BIBREF73 address these few-/zero-shot issues by applying the copying mechanism and incorporating textual contexts to enrich the information for rare entities and relations. Since a single KB triple provides only limited information, KB-generated questions also overgeneralize — a model asks “Who was born in New York?\" when given the triple (Donald_Trump, Place_of_birth, New_York). 
To solve this, BIBREF29 enrich the input with a sequence of keywords collected from its related triples.", "Visual Question Generation (VQG) is another emerging topic which aims to ask questions given an image. We categorize VQG into grounded- and open-ended VQG by the level of cognition. Grounded VQG generates visually grounded questions, i.e., all relevant information for the answer can be found in the input image BIBREF74 . A key purpose of grounded VQG is to support the dataset construction for VQA. To ensure the questions are grounded, existing systems rely on image captions to varying degrees. BIBREF75 and BIBREF76 simply convert image captions into questions using rule-based methods with textual patterns. BIBREF74 proposed a neural model that can generate questions with diverse types for a single image, using separate networks to construct dense image captions and to select question types.", "In contrast to grounded QG, humans ask higher cognitive level questions about what can be inferred rather than what can be seen from an image. Motivated by this, BIBREF10 proposed open-ended VQG that aims to generate natural and engaging questions about an image. These are deep questions that require high cognition such as analyzing and creation. With significant progress in deep generative models, marked by variational auto-encoders (VAEs) and GANs, such models are also used in open-ended VQG to bring “creativity” into generated questions BIBREF77 , BIBREF78 , showing promising results. This also brings hope to address deep QG from text, as applied in NLG: e.g., SeqGAN BIBREF79 and LeakGAN BIBREF80 ." ], [ "Endowing a QG system with the ability to ask deep questions will help us build curious machines that can interact with humans in a better manner. However, BIBREF81 pointed out that asking high-quality deep questions is difficult, even for humans. Citing the study from BIBREF82 to show that students in college asked only about 6 deep-reasoning questions per hour in a question–encouraging tutoring session. These deep questions are often about events, evaluation, opinions, syntheses or reasons, corresponding to higher-order cognitive levels.", "To verify the effectiveness of existing NQG models in generating deep questions, BIBREF8 conducted an empirical study that applies the attention Seq2Seq model on LearningQ, a deep-question centric dataset containing over 60 $\\%$ questions that require reasoning over multiple sentences or external knowledge to answer. However, the results were poor; the model achieved miniscule BLEU-4 scores of $< 4$ and METEOR scores of $< 9$ , compared with $> 12$ (BLEU-4) and $> 16$ (METEOR) on SQuAD. Despite further in-depth analysis are needed to explore the reasons behind, we believe there are two plausible explanations: (1) Seq2Seq models handle long inputs ineffectively, and (2) Seq2Seq models lack the ability to reason over multiple pieces of information.", "Despite still having a long way to go, some works have set out a path forward. A few early QG works attempted to solve this through building deep semantic representations of the entire text, using concept maps over keywords BIBREF83 or minimal recursion semantics BIBREF84 to reason over concepts in the text. BIBREF35 proposed a crowdsourcing-based workflow that involves building an intermediate ontology for the input text, soliciting question templates through crowdsourcing, and generating deep questions based on template retrieval and ranking. 
Although this process is semi-automatic, it provides a practical and efficient way towards deep QG. In a separate line of work, BIBREF1 proposed a framework that simulates how people ask deep questions by treating questions as formal programs that execute on the state of the world, outputting an answer.", "Based on our survey, we believe the roadmap towards deep NGQ points towards research that will (1) enhance the NGQ model with the ability to consider relationships among multiple source sentences, (2) explicitly model typical reasoning patterns, and (3) understand and simulate the mechanism behind human question asking." ], [ "We have presented a comprehensive survey of NQG, categorizing current NQG models based on different QG-specific and common technical variations, and summarizing three emerging trends in NQG: multi-task learning, wider input modalities, and deep question generation.", "What's next for NGQ? We end with future potential directions by applying past insights to current NQG models; the “unknown unknown\", promising directions yet explored.", "When to Ask: Besides learning what and how to ask, in many real-world applications that question plays an important role, such as automated tutoring and conversational systems, learning when to ask become an important issue. In contrast to general dialog management BIBREF85 , no research has explored when machine should ask an engaging question in dialog. Modeling question asking as an interactive and dynamic process may become an interesting topic ahead.", "Personalized QG: Question asking is quite personalized: people with different characters and knowledge background ask different questions. However, integrating QG with user modeling in dialog management or recommendation system has not yet been explored. Explicitly modeling user state and awareness leads us towards personalized QG, which dovetails deep, end-to-end QG with deep user modeling and pairs the dual of generation–comprehension much in the same vein as in the vision–image generation area." ] ] }
{ "question": [ "Do they cover data augmentation papers?", "What is the latest paper covered by this survey?", "Do they survey visual question generation work?", "Do they survey multilingual aspects?", "What learning paradigms do they cover in this survey?", "What are all the input modalities considered in prior work in question generation?", "Do they survey non-neural methods for question generation?" ], "question_id": [ "a12a08099e8193ff2833f79ecf70acf132eda646", "999b20dc14cb3d389d9e3ba5466bc3869d2d6190", "ca4b66ffa4581f9491442dcec78ca556253c8146", "b3ff166bd480048e099d09ba4a96e2e32b42422b", "3703433d434f1913307ceb6a8cfb9a07842667dd", "f7c34b128f8919e658ba4d5f1f3fc604fb7ff793", "d42031893fd4ba5721c7d37e1acb1c8d229ffc21" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "question generation", "question generation", "question generation", "question generation", "question generation", "question generation", "question generation" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "f0dca97a210535659f8db4ad400dd5871135086f" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Kim et al. (2019)", "evidence": [ "FLOAT SELECTED: Table 2: Existing NQG models with their best-reported performance on SQuAD. Legend: QW: question word generation, PC: paragraph-level context, CP: copying mechanism, LF: linguistic features, PG: policy gradient." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Existing NQG models with their best-reported performance on SQuAD. Legend: QW: question word generation, PC: paragraph-level context, CP: copying mechanism, LF: linguistic features, PG: policy gradient." ] } ], "annotation_id": [ "033cfb982d9533ed483a2d149ef6b901908303c1" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Visual Question Generation (VQG) is another emerging topic which aims to ask questions given an image. We categorize VQG into grounded- and open-ended VQG by the level of cognition. Grounded VQG generates visually grounded questions, i.e., all relevant information for the answer can be found in the input image BIBREF74 . A key purpose of grounded VQG is to support the dataset construction for VQA. To ensure the questions are grounded, existing systems rely on image captions to varying degrees. BIBREF75 and BIBREF76 simply convert image captions into questions using rule-based methods with textual patterns. 
BIBREF74 proposed a neural model that can generate questions with diverse types for a single image, using separate networks to construct dense image captions and to select question types.", "In contrast to grounded QG, humans ask higher cognitive level questions about what can be inferred rather than what can be seen from an image. Motivated by this, BIBREF10 proposed open-ended VQG that aims to generate natural and engaging questions about an image. These are deep questions that require high cognition such as analyzing and creation. With significant progress in deep generative models, marked by variational auto-encoders (VAEs) and GANs, such models are also used in open-ended VQG to bring “creativity” into generated questions BIBREF77 , BIBREF78 , showing promising results. This also brings hope to address deep QG from text, as applied in NLG: e.g., SeqGAN BIBREF79 and LeakGAN BIBREF80 ." ], "highlighted_evidence": [ "Visual Question Generation (VQG) is another emerging topic which aims to ask questions given an image.", "Motivated by this, BIBREF10 proposed open-ended VQG that aims to generate natural and engaging questions about an image." ] } ], "annotation_id": [ "39d19fc7612e27072ed9e84eda6fa43ba201a0bb" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "2cfe5b5774f9893b33adef8c99a236f8bfa1183c" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Considering \"What\" and \"How\" separately versus jointly optimizing for both.", "evidence": [ "Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction. Given a sentence or a paragraph as input, content selection selects a particular salient topic worthwhile to ask about and determines the question type (What, When, Who, etc.). Approaches either take a syntactic BIBREF11 , BIBREF12 , BIBREF13 or semantic BIBREF14 , BIBREF3 , BIBREF15 , BIBREF16 tack, both starting by applying syntactic or semantic parsing, respectively, to obtain intermediate symbolic representations. Question construction then converts intermediate representations to a natural language question, taking either a tranformation- or template-based approach. The former BIBREF17 , BIBREF18 , BIBREF13 rearranges the surface form of the input sentence to produce the question; the latter BIBREF19 , BIBREF20 , BIBREF21 generates questions from pre-defined question templates. Unfortunately, such QG architectures are limiting, as their representation is confined to the variety of intermediate representations, transformation rules or templates.", "In contrast, neural models motivate an end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in an unified framework. The majority of current NQG models follow the sequence-to-sequence (Seq2Seq) framework that use a unified representation and joint learning of content selection (via the encoder) and question construction (via the decoder). In this framework, traditional parsing-based content selection has been replaced by more flexible approaches such as attention BIBREF22 and copying mechanism BIBREF23 . 
Question construction has become completely data-driven, requiring far less labor compared to transformation rules, enabling better language flexibility compared to question templates." ], "highlighted_evidence": [ "Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction. ", "In contrast, neural models motivate an end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in an unified framework. " ] } ], "annotation_id": [ "33fe23afb062027041fcc9b9dc9eaac9d38258e1" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Textual inputs, knowledge bases, and images.", "evidence": [ "Question generation is an NLG task for which the input has a wealth of possibilities depending on applications. While a host of input modalities have been considered in other NLG tasks, such as text summarization BIBREF24 , image captioning BIBREF25 and table-to-text generation BIBREF26 , traditional QG mainly focused on textual inputs, especially declarative sentences, explained by the original application domains of question answering and education, which also typically featured textual inputs.", "Recently, with the growth of various QA applications such as Knowledge Base Question Answering (KBQA) BIBREF27 and Visual Question Answering (VQA) BIBREF28 , NQG research has also widened the spectrum of sources to include knowledge bases BIBREF29 and images BIBREF10 . This trend is also spurred by the remarkable success of neural models in feature representation, especially on image features BIBREF30 and knowledge representations BIBREF31 . We discuss adapting NQG models to other input modalities in Section \"Wider Input Modalities\" ." ], "highlighted_evidence": [ "While a host of input modalities have been considered in other NLG tasks, such as text summarization BIBREF24 , image captioning BIBREF25 and table-to-text generation BIBREF26 , traditional QG mainly focused on textual inputs, especially declarative sentences, explained by the original application domains of question answering and education, which also typically featured textual inputs.\n\nRecently, with the growth of various QA applications such as Knowledge Base Question Answering (KBQA) BIBREF27 and Visual Question Answering (VQA) BIBREF28 , NQG research has also widened the spectrum of sources to include knowledge bases BIBREF29 and images BIBREF10 ." ] } ], "annotation_id": [ "8bffad2892b897cc62faaa4e8b63c452cb530ccf" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "7f2e8aadea59b20f3df567dc0140fedb23f4a347" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] } ] }
{ "caption": [ "Table 1: NQG datasets grouped by their cognitive level and answer type, where the number of documents, the number of questions, and the average number of questions per document (Q./Doc) for each corpus are listed.", "Table 2: Existing NQG models with their best-reported performance on SQuAD. Legend: QW: question word generation, PC: paragraph-level context, CP: copying mechanism, LF: linguistic features, PG: policy gradient." ], "file": [ "4-Table1-1.png", "7-Table2-1.png" ] }
1909.00170
Open Named Entity Modeling from Embedding Distribution
In this paper, we report our discovery about named entity distribution in general word embedding space, which supports an open definition of multilingual named entities rather than the previous closed and constrained definition through a named entity dictionary, which is usually derived from human labor and relies on scheduled updates. Our initial visualization of monolingual word embeddings indicates that named entities tend to gather together regardless of entity type and language, which enables us to model all named entities using a specific geometric structure inside the embedding space, namely the named entity hypersphere. For the monolingual case, the proposed named entity model gives an open description of diverse named entity types across languages. For the cross-lingual case, mapping the proposed named entity model provides a novel way to build named entity datasets for resource-poor languages. Finally, the proposed named entity model serves as a very useful clue to significantly enhance state-of-the-art named entity recognition systems.
{ "section_name": [ "Introduction", "Word Embeddings", "Model", "Open Monolingual NE Modeling", " Embedding Distribution Mapping", "Hypersphere features for NE Recognition ", "Experiment", "Setup", "Monolingual Embedding Distribution", " Hypersphere Mapping", "Off-the-shelf NE Recognition Systems", "Related Work", "Conclusion" ], "paragraphs": [ [ "Named Entity Recognition is a major natural language processing task that recognizes the proper labels such as LOC (Location), PER (Person), ORG (Organization), etc. Like words or phrase, being a sort of language constituent, named entities also benefit from better representation for better processing. Continuous word representations, known as word embeddings, well capture semantic and syntactic regularities of words BIBREF0 and perform well in monolingual NE recognition BIBREF1 , BIBREF2 . Word embeddings also exhibit isomorphism structure across languages BIBREF3 . On account of these characteristics above, we attempt to utilize word embeddings to improve NE recognition for resource-poor languages with the help of richer ones. The state-of-the-art cross-lingual NE recognition methods are mainly based on annotation projection methods according to parallel corpora, translations BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 and Wikipedia methods BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 .", "Most annotated corpus based NE recognition tasks can benefit a great deal from a known NE dictionary, as NEs are those words which carry common sense knowledge quite differ from the rest ones in any language vocabulary. This work will focus on the NE recognition from plain text instead of corpus based NE recognition. For a purpose of learning from limited annotated linguistic resources, our preliminary discovery shows that it is possible to build a geometric space projection between embedding spaces to help cross-lingual NE recognition. Our study contains two main steps: First, we explore the NE distribution in monolingual case. Next, we learn a hypersphere mapping between embedding spaces of languages with minimal supervision.", "Despite the simplicity of our model, we make the following contributions. First, for word embeddings generated by different dimensions and objective functions, all common NE types (PER, LOC, ORG) tend to be densely distributed in a hypersphere, which gives a better solution to characterize the general NE distribution rather than existing closed dictionary definition for NE. Second, with the help of the hypersphere mapping, it is possible to capture the NE distribution of resource-poor languages with only a small amount of annotated data. Third, our method is highly friendly to unregistered NEs, as the distance to each hypersphere center is the only factor needed to determine their NE categories. Finally, by adding hypersphere features we can significantly improve the performance of off-the-shelf named entity recognition (NER) systems." ], [ "Seok BIBREF2 proposed that similar words are more likely to occupy close spatial positions, since their word embeddings carries syntactical and semantical informative clues. For an intuitive understanding, they listed the nearest neighbors of words included in the PER and ORG tags under cosine similarity metric. 
To empirically verify this observation and explore the performance of this property in Euclidean space , we list Top-5 nearest neighbors under Euclidean distance metric in Table 1 and illustrate a standard t-SNE BIBREF12 2- $D$ projection of the embeddings of three entity types with a sample of 500 words for each type.", "Nearest neighbors are calculated by comparing the Euclidean distance between the embedding of each word (such as Fohnsdorf, Belgian, and Ltd.) and the embeddings of all other words in the vocabulary. We pre-train word embeddings using the continuous skip-gram model BIBREF13 with the tool, and obtain multi-word and single-word phrases with a maximum length of 8, and a minimum word frequency cutoff of 3. The examples in Table 1 and visualization in Figure 1 demonstrate that the above observation suits well under Euclidean distance metric for NE recognition either for monolingual or multilingual situations." ], [ "Encouraged by the verification of nearest neighbors of NEs still being NEs, we attempt to build a model which can represent this property with least parameters. Namely, given an NE dictionary on a monolingual, we build a model to describe the distribution of the word embeddings of these entities, then we can easily use these parameters as a decoder for any word to directly determine whether it belongs to a certain type of entity. In this section, we first introduce the open modeling from embedding distribution in monolingual cases, and then put forward the mapping of the distribution model between languages, and then use the mapping to build named entity dataset for resource-poor languages. Finally, we use the proposed named entity model to improve the performance of state-of-the-art NE recognition systems." ], [ "As illustrated is Figure 1, the embedding distribution of NEs is aggregated, and there exists a certain boundary between different types of NEs. We construct an open representation for each type of NEs – hypersphere, the NE type of any entity can be easily judged by checking whether it is inside a hypersphere, which makes a difference from the defining way of any limited and insufficient NE dictionary. The hypersphere can be expressed as follows: ", "$$E( X, O) \\le r$$ (Eq. 9) ", "where E represents the adopted Euclidean distance, X is referred to any point in the hypersphere, $ O $ and $ r $ are the center vector and radius. For each entity type, we attempt to construct a hypersphere which encompass as many congeneric NEs as possible, and as few as possible inhomogeneous NEs, we use $F_1$ score as a trade-off between these two concerns. We carefully tune the center and radius of the hypersphere to maximize its $F_1$ score: we first fix the center as the average of all NE embeddings from known NE dictionaries, and search the best radius in $[minDist, maxDist]$ , where $minDist/maxDist$ refers to the distance between the center and its nearest/farthest neighbors; Then, we kick NEs which are far from the center with the distance threshold $q$ (much larger than the radius) to generate a new center; Finally, we tune the threshold $q$ and repeat the above steps to find the most suitable center and radius.", "The mathematical intuition for using a hypersphere can be interpreted in a manner similar to support vector machine (SVM) BIBREF14 , which uses the kernel to obtain the optimal margin in very high dimensional spaces through linear hyperplane separation in Descartes coordination. We transfer the idea to the separation of NE distributions. 
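A minimal numpy sketch of the fitting procedure just described follows, with toy random embeddings standing in for real pre-trained vectors: the center is fixed at the mean of the known NE embeddings and the radius is searched in $[minDist, maxDist]$ to maximize $F_1$; the iterative re-centering with the distance threshold $q$ is omitted for brevity.

```python
# Minimal sketch of fitting one NE hypersphere: center = mean of dictionary NE
# embeddings, radius = the value maximizing F1 against non-NE words.
# The random toy embeddings are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
ne_vecs = rng.normal(loc=1.0, scale=0.5, size=(200, 16))     # embeddings of dictionary NEs
other_vecs = rng.normal(loc=0.0, scale=1.0, size=(800, 16))  # embeddings of other words

def fit_hypersphere(pos, neg):
    center = pos.mean(axis=0)
    d_pos = np.linalg.norm(pos - center, axis=1)
    d_neg = np.linalg.norm(neg - center, axis=1)
    best_r, best_f1 = None, -1.0
    for r in np.linspace(d_pos.min(), d_pos.max(), 200):  # candidate radii in [minDist, maxDist]
        tp = (d_pos <= r).sum()
        fp = (d_neg <= r).sum()
        fn = (d_pos > r).sum()
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_r, best_f1 = r, f1
    return center, best_r, best_f1

center, radius, f1 = fit_hypersphere(ne_vecs, other_vecs)
print(f"radius={radius:.3f}  F1={f1:.3f}")
# Membership test for a new word embedding x: np.linalg.norm(x - center) <= radius
```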
The only difference is about boundary shape, what we need is a closed surface instead of an open hyperplane, and hypersphere is such a smooth, closed boundary (with least parameters as well) in polar coordinates as counterpart of hyperplane in Descartes coordinates. Using the least principle to model the mathematical objective also follows the Occam razor principle.", "Figure 1 also reveals that the distribution of PER NEs is compact, while ORG NE distribution is relatively sparse. Syntactically, PER NEs are more stable in terms of position and length in sentences compared to ORG NEs, so that they have a more accurate embedding representation with strong strong syntax and semantics, making the corresponding word embeddings closer to central region of the hypersphere." ], [ "As the isomorphism characteristic exists between languages BIBREF3 , BIBREF15 , we can apply the distributional modeling for every languages in the same way. For a target language without an NE dictionary, its NE distribution can be obtained from a source language with known NE distributions by learning the transforming function between these two languages. We construct the transformation matrix $W$ via a set of parallel word pairs (the set will be referred to seed pairs hereafter) and their word embeddings $\\lbrace X^{(i)}, Z^{(i)}\\rbrace _{i=1}^m$ BIBREF3 , $\\lbrace X^{(i)}\\rbrace _{i=1}^m$ , $\\lbrace Z^{(i)}\\rbrace _{i=1}^m$ are the source and target word embeddings respectively. $W$ can be learned by solving the matrix equation $XW = Z$ . Then, given the source center vector ${ O_1}$ , the mapping center vector ${O_2}$ can be expressed as: ", "$${ O_2} = W^T{O_1}$$ (Eq. 11) ", "Actually, the isomorphism (mapping) between embedding spaces is the type of affine isomorphism by furthermore considering embedding in continuous space. The invariant characteristics of relative position BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 in affine transformation is applied to correct transformation matrix errors caused by limited amount of parallel word pairs (the set will be referred to seed pairs hereafter). As shown in Figure 2, the ratio of the line segments keep constant when the distance is linearly enlarged or shortened. Recall that point $Q$ is an affine combination of two other noncoincident points $Q_1$ and $Q_2$ on the line: $Q = (1-t)Q_1 + tQ_2 $ .", "We apply the affine mapping $f$ and get: $f(Q) = f((1-t)Q_1 + tQ_2) = (1-t)f(Q_1) + tf(Q_2)$ Obviously, the constant ratio $t$ is not affected by the affine transformation $f$ . That is, $Q$ has the same relative distances between it and $Q_1$ and $Q_2$ during the process of transformation. Based on the above characteristic, for any point $X^{(i)}$ in the source space and its mapping point $Z^{(i)}$ , $X^{(i)}$ and $f(Q) = f((1-t)Q_1 + tQ_2) = (1-t)f(Q_1) + tf(Q_2)$0 cut off radiuses with the same ratio, namely, the ratio of the distance of these two points to their centers and their radiuses remains unchanged. ", "$$\\frac{E( O_1, X^{(i)})}{r_1} = \\frac{E( O_2, Z^{(i)})}{r_2}$$ (Eq. 15) ", "where $E$ represents the adopted Euclidean distance, ${O_1, O_2, r_1, r_2}$ are the centers and radii of hyperspheres. We convert the equation and learn the optimized mapping center ${O_2}$ and ratio $K$ via the seed pairs: ", "$${K = \\frac{r_2}{r_1} = \\frac{E( O_2, Z^{(i)})}{E( O_1, X^{(i)})}}$$ (Eq. 16) ", "$$\\begin{aligned}\nE( O_2, Z^{(i)}) &= K * E( O_1, X^{(i)}) \\quad r_2 &= K * r_1 \\\\\n\\end{aligned}$$ (Eq. 
17) ", "Given the seed pairs $\\lbrace X^{(i)}, Z^{(i)}\\rbrace _{i=1}^m$ , the initialized center $O_2$ in Equation (3), the center $ O_1 $ and radius $ r_1 $ of the hypersphere in source language space, we may work out the optimized ratio $K$ , the mapping center $ O_2 $ and radius $ r_2 $ in target language space by solving the linear equation group (5)." ], [ "The Euclidean distance between word and hypersphere centers can be pre-computed as its NE likelihood, which may provide informative clues for NE recognition. We only consider three entity types in our experiment, and the Euclidean distance which is represented as a 3- $D$ vector and referred to HS vector hereafter) is added to four representative off-the-shelf NER systems to verify its effectiveness. We feed HS vector into different layers of the neural network: (1) input layer $[x_k; c_k; HS]$ ; (2) output layer of LSTM $[h_k; HS]$ , where $x_k$ , $w_k$ and $h_k$ represent word embeddings, char embeddings and the output of LSTM, respectively. All of these models are based on classical BiLSTM-CRF architecture BIBREF20 , except that BIBREF21 replaces CRF layer with softmax. These four baseline systems are introduced as follows.", " BIBREF22 concatenates ELMo with word embeddings as the input of LSTM to enhance word representations as it carries both syntactic and semantic information.", " BIBREF21 uses distant supervision for NER task and propose a new Tie or Break tagging scheme, where entity spans and entity types are encoded into two folds. They first build a binary classifier to distinguish Break from Tie, and then learn the entity types according to their occurrence and frequency in NE dictionary. The authors conduct their experiments on biomedical datasets rather than standard benchmark, so we extract the NEs in training data as the domain-specific dictionary. This work creates a promising prospect for using dictionary to replace the role of training data.", " BIBREF23 takes advantage of the power of the 120 entity types from annotated data in Wikipedia. Cosine similarity between the word embedding and the embedding of each entity type is concatenated as the 120- $D$ feature vector (which is called LS vector in their paper) and then fed into the input layer of LSTM. Lexical feature has been shown a key factor to NE recognition.", " BIBREF24 passes sentences as sequences of characters into a character-level language model to produce a novel type of word embedding, contextual string embeddings, where one word may have different embeddings as the embeddings are computed both on the characters of a word and its surrounding context. Such embeddings are then fed into the input layer of LSTM." ], [ "In this section, we evaluate the hypersphere model based on the three models introduced above: open monolingual NE modeling, embedding distribution mapping and refinement NE recognition." ], [ "In this experiment, we adopt pre-trained word embeddings from Wikipedia corpus. Our preliminary experiments will be conducted on English and Chinese. For the former, we use NLTK toolkit and LANGID toolkit to perform the pre-processing. For the latter, we first use OpenCC to simplify characters, and then use THULAC to perform word segmentation.", "In order to make the experimental results more accurate and credible, we manually annotate two large enough Chinese and English NE dictionaries for training and test. Table 2 lists the statistics of Wikipedia corpus and the annotated data. 
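Returning to the hypersphere mapping described earlier, a simplified numpy sketch is given below: the transformation matrix $W$ is learned from the seed pairs by least squares ($XW = Z$), the source center is mapped through it, and the scale ratio $K$ is estimated as an average over the seed pairs rather than by jointly solving the full equation group. The toy data are illustrative assumptions.

```python
# Simplified sketch of the cross-lingual hypersphere mapping: learn W from
# seed pairs by least squares, map the source center (row-vector form of
# Eq. 11), and estimate the scale ratio K as an average over seed pairs.
import numpy as np

rng = np.random.default_rng(1)
d, m = 16, 30                                      # embedding dimension, number of seed pairs
true_W = rng.normal(size=(d, d))
X = rng.normal(size=(m, d))                        # source-language seed embeddings
Z = X @ true_W + 0.01 * rng.normal(size=(m, d))    # target-language seed embeddings

O1 = rng.normal(size=d)                            # source hypersphere center
r1 = 2.5                                           # source hypersphere radius

W, *_ = np.linalg.lstsq(X, Z, rcond=None)          # solve XW = Z
O2 = O1 @ W                                        # mapped target center

# Scale ratio K: how distances to the center stretch under the mapping.
K = np.mean(np.linalg.norm(Z - O2, axis=1) / np.linalg.norm(X - O1, axis=1))
r2 = K * r1                                        # mapped target radius

print(f"K={K:.3f}  r2={r2:.3f}")
```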
Our dictionary contains many multi-word NEs of the LOC and ORG types, as shown in the second column for each language in Table 2, while we only include single-word PER NEs, since English first names and last names are separated and Chinese word segmentation keeps most PER entities as single tokens. We pre-train quality multi-word and single-word embeddings and aim to maximize the coverage of the NEs in the dictionary. The pre-trained word embeddings cover 82.3% / 82.51% of LOC NEs and 70.2% / 63.61% of ORG NEs in English and Chinese, respectively. For the remaining multi-word NEs, we simply use the average of their word embeddings as their representations." ], [ "Since the NE distribution is closely correlated with the dimension of the embedding space, we train word embeddings from 2- $D$ to 300- $D$ and search for the most suitable dimension for each NE type. For each dimension, we carefully tune the center and radius of the hypersphere using the method introduced in section 3.1 to maximize the $F_1$ score, and select the dimension with the maximum $F_1$ score. The most suitable dimensions for ORG, PER and LOC are 16- ${D}$ /16- ${D}$ /24- ${D}$ , respectively (these dimensions will be used as parameters in the following experiments). We find that the NE distributions are captured better in low-dimensional spaces; in high dimensions, the curse of dimensionality could be the main factor limiting performance.", "Table 3 lists the final maximum $F_1$ score for the three NE types. The results for all three NE types are close to 50%, and the PER type performs best. The main factor may be that PER NEs are represented as single words in our dictionary, so word embeddings better represent their meanings. The results also suggest that better representations for uncovered multi-word NEs, rather than the average of their word embeddings, may bring further gains. Besides, the incompleteness of the NE dictionaries and noise introduced during pre-processing may decrease performance. Overall, the hypersphere model has been shown to be an effective open model for NEs." ], [ "The following preparations were made for the mapping: $(i)$ a large enough NE dictionary in the source (resource-rich) language; $(ii)$ a small number of annotated seed pairs. We use $s$ to represent the number of seed pairs and $d$ to represent the number of unknown variables. With seed pair size $s < d$ , the matrix can only be solved under rather loose constraints, and the $F_1$ score increases remarkably with more seed pairs. Once $s > d$ , the linear equation group is always determined by sufficiently strong constraints, which leads to a stable solution. Based on this characteristic, we take only two dozen seed pairs for each type in the following experiments. We combine human translation and online translation to doubly verify this small set of seed pairs. In this part, we use English and Chinese in turn as the language with known NEs, and predict the NE distribution of the other language.", "Evaluation In order to quantitatively assess the mapping effect, we present a new evaluation method to judge the hypersphere mapping between English and Chinese: ", "$$\begin{aligned}
P = \frac{V_i}{V_m} \quad R = \frac{V_i}{V_t} \quad F_1 = \frac{2 * P * R}{P + R}
\end{aligned}$$ (Eq. 29) ", "where ${V_t, V_m, V_i}$ represent the volumes of the target, mapped and intersection hyperspheres. 
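The volumes in this metric are estimated by sampling, as described next; a minimal Monte Carlo sketch is given here for concreteness (the helper name and sample size are hypothetical, NumPy is assumed).

```python
import numpy as np

def overlap_f1(O_t, r_t, O_m, r_m, n_samples=200000, seed=0):
    """Monte Carlo estimate of the volume-based P/R/F1 between the target
    hypersphere (O_t, r_t) and the mapped hypersphere (O_m, r_m)."""
    rng = np.random.default_rng(seed)
    # Sample uniformly inside a box enclosing both hyperspheres; the point
    # counts inside each sphere are then proportional to the true volumes.
    lo = np.minimum(O_t - r_t, O_m - r_m)
    hi = np.maximum(O_t + r_t, O_m + r_m)
    pts = rng.uniform(lo, hi, size=(n_samples, O_t.shape[0]))
    in_t = np.linalg.norm(pts - O_t, axis=1) <= r_t
    in_m = np.linalg.norm(pts - O_m, axis=1) <= r_m
    V_t, V_m, V_i = in_t.sum(), in_m.sum(), (in_t & in_m).sum()
    if V_t == 0 or V_m == 0 or V_i == 0:
        return 0.0
    P, R = V_i / V_m, V_i / V_t
    return 2 * P * R / (P + R)
```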
Due to the difficulty of calculating the volume of hyperspheres in high dimensions, we adopt Monte Carlo methods to estimate the volumes BIBREF25 : we generate a large number of points in the embedding spaces and take the number of points falling inside each hypersphere as a proxy for its volume.", "Mapping between English and Chinese Table 4 compares cross-lingual named entity extraction performance. We use the unsupervised method proposed in BIBREF26 to generate cross-lingual embeddings. $k$ -NN and SVM are the same as in the monolingual cases in Table 3 except for the training set. $k$ -NN $_{150}$ and SVM $_{150}$ use 20% of the NEs in the source language and 150 NEs (50 LOC, PER and ORG) in the target language for training, while $k$ -NN $_{2500}$ and SVM $_{2500}$ use 20% of the NEs in the source language and 2500 NEs (1000 LOC and PER, 500 ORG) in the target language. $k$ -NN and SVM depend heavily on the annotated training set, requiring more than $1K$ training samples to match the performance our model offers. Because ORG NEs are unstable in length, averaging their word embeddings may violate the syntactic and semantic regularities of ORG NEs and thereby undermine the multilingual isomorphism characteristics, which causes our model's inferior performance on this NE type. This again suggests that building better representations for multi-word NEs may contribute to better performance of our model.", "Mapping to a truly low-resource language We build a named entity dataset for a truly resource-poor language, Indonesian, and manually examine the nearest words to the hypersphere center for a 'gold-standard' evaluation. We take English as the source language; the settings of the dimension $D$ and the number of seed pairs $s$ are the same as in the above experiments between Chinese and English. From the results listed in Table 5, we can see that the precision of the top-100 NEs is 0.350/0.440/0.310, respectively, which shows that this distribution can indeed serve as a candidate NE dictionary for Indonesian.", "[9] The authors of BIBREF24 published an updated result (92.98) on the CoNLL-2003 dataset in https://github.com/zalandoresearch/flair/issues/206 for their 0.3.2 version, and this is the best result we could obtain. [10] This is the state-of-the-art result reported in their GitHub repository. [11] We use the same parameters as the authors release in https://github.com/zalandoresearch/flair/issues/173 and obtain a result of 89.45 on the ONTONOTES 5.0 dataset." ], [ "To evaluate the influence of our hypersphere features on off-the-shelf NER systems, we perform NE recognition on two standard NER benchmark datasets, CoNLL-2003 and ONTONOTES 5.0. Our results in Table 6 and Table 7 demonstrate the power of the hypersphere features, which contribute to nearly all three entity types as shown in Table 6, except for a slight drop on the PER type of BIBREF22 , a strong baseline. HS features stably enhance all strong state-of-the-art baselines, BIBREF22 , BIBREF21 and BIBREF23 , by 0.33/0.72/0.23 $F_1$ points and 0.13/0.3/0.1 $F_1$ points on the two benchmark datasets, CoNLL-2003 and ONTONOTES 5.0, respectively. We show that our HS feature is also comparable with the previous, much more complicated LS feature: our model surpasses their baseline (without the LS feature) by 0.58/0.78 $F_1$ points with only HS features. We establish a new state-of-the-art $F_1$ score of 89.75 on ONTONOTES 5.0, while matching state-of-the-art performance with an $F_1$ score of 92.95 on the CoNLL-2003 dataset."
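To make the HS feature concrete, a minimal sketch follows: the 3-D vector of Euclidean distances to the PER/LOC/ORG hypersphere centers is concatenated with the word embeddings before they enter the tagger. Function and argument names are illustrative and not taken from any of the cited systems.

```python
import numpy as np

def add_hs_features(word_vecs, centers):
    """word_vecs: (seq_len, d) embeddings of one sentence.
    centers: list of the three hypersphere centers [PER, LOC, ORG].
    Returns (seq_len, d + 3): embeddings concatenated with the HS vector."""
    hs = np.stack([np.linalg.norm(word_vecs - c, axis=1) for c in centers], axis=1)
    return np.concatenate([word_vecs, hs], axis=1)
```

The same 3-D vector can equally be appended to the BiLSTM outputs ($[h_k; HS]$) rather than the input layer, matching the two placements compared above.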
], [ "In recent years, word embeddings have also been used as features to enhance NE recognition, since they reveal linguistic properties from morphological, syntactic and semantic perspectives. BIBREF1 clustered the word embeddings and combined multiple cluster granularities to improve NE recognition performance. Our work likewise uses word embeddings to help NE recognition: we exploit the characteristic that syntactically and semantically similar words are more likely to be neighbors in embedding spaces, and construct a hypersphere model to encompass NEs.", "Cross-lingual knowledge transfer is a highly promising direction for resource-poor languages; annotation projection and representation projection are widely used in NE recognition BIBREF27 , BIBREF5 , BIBREF4 , BIBREF28 , BIBREF29 , BIBREF30 . These works impose inconvenient requirements: parallel or comparable corpora, large amounts of annotated or translated data, or a bilingual lexicon. To the best of our knowledge, this is the first work that uses only isomorphic mappings in low-dimensional embedding spaces to recognize NEs; we introduce a mathematically simple model, motivated by visualization results, to describe NE embedding distributions, and show that it works in both monolingual and cross-lingual settings." ], [ "Named entities form an open set that keeps expanding and are therefore difficult to represent through a closed NE dictionary. This work mitigates significant defects in previous closed NE definitions and proposes a new open definition for NEs by modeling their embedding distributions with the fewest parameters. We visualize NE distributions in the monolingual case and perform an effective isomorphic space mapping in the cross-lingual case. We demonstrate that common named entity types (PER, LOC, ORG) tend to be densely distributed in a hypersphere, and that it is possible to build a mapping between the NE distributions in embedding spaces to help cross-lingual NE recognition. Experimental results show that the distribution of named entities obtained via mapping can be used as a good enough replacement for the original distribution. This finding is then used to build an NE dictionary for Indonesian, a truly low-resource language, which also gives satisfactory precision. Finally, our simple hypersphere features, which represent NE likelihood, can be used to enhance off-the-shelf NER systems by concatenating them with the word embeddings and the BiLSTM output in the input layer and the encoding layer, respectively, and we achieve a new state-of-the-art $F_1$ score of 89.75 on the ONTONOTES 5.0 benchmark. This work also gives a better solution for unregistered NEs: for any newly emerged NE with its embedding, once the hypersphere of each named entity type is obtained, the corresponding named entity category can be determined by calculating the distance between its word embedding and the center of each hypersphere." ] ] }
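As a small illustration of the last point in the conclusion, the sketch below assigns a type to an unregistered NE by checking which fitted hypersphere its embedding falls into; the radius-normalised score and the fallback label are assumptions of this sketch, not details specified in the paper.

```python
import numpy as np

def ne_type(vec, spheres, fallback=None):
    """spheres: {"PER": (center, radius), "LOC": (...), "ORG": (...)}.
    Returns the type whose hypersphere contains `vec` most tightly,
    or `fallback` if the embedding lies outside all three."""
    best, best_score = fallback, 1.0
    for label, (center, radius) in spheres.items():
        score = np.linalg.norm(vec - center) / radius  # <= 1 means inside
        if score <= best_score:
            best, best_score = label, score
    return best
```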
{ "question": [ "What is their model?", "Do they evaluate on NER data sets?" ], "question_id": [ "a999761aa976458bbc7b4f330764796446d030ff", "f229069bcb05c2e811e4786c89b0208af90d9a25" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "definition modeling", "definition modeling" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "cross-lingual NE recognition" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Most annotated corpus based NE recognition tasks can benefit a great deal from a known NE dictionary, as NEs are those words which carry common sense knowledge quite differ from the rest ones in any language vocabulary. This work will focus on the NE recognition from plain text instead of corpus based NE recognition. For a purpose of learning from limited annotated linguistic resources, our preliminary discovery shows that it is possible to build a geometric space projection between embedding spaces to help cross-lingual NE recognition. Our study contains two main steps: First, we explore the NE distribution in monolingual case. Next, we learn a hypersphere mapping between embedding spaces of languages with minimal supervision." ], "highlighted_evidence": [ "For a purpose of learning from limited annotated linguistic resources, our preliminary discovery shows that it is possible to build a geometric space projection between embedding spaces to help cross-lingual NE recognition." ] } ], "annotation_id": [ "5209901b84927a51c04e476925639db65d53d0a7" ], "worker_id": [ "594e0b1297abe0ad3e2555ad27eedfb59c442bb9" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "To evaluate the influence of our hypersphere feature for off-the-shelf NER systems, we perform the NE recognition on two standard NER benchmark datasets, CoNLL2003 and ONTONOTES 5.0. Our results in Table 6 and Table 7 demonstrate the power of hypersphere features, which contribute to nearly all of the three types of entities as shown in Table 6, except for a slight drop in the PER type of BIBREF22 on a strong baseline. HS features stably enhance all strong state-of-the-art baselines, BIBREF22 , BIBREF21 and BIBREF23 by 0.33/0.72/0.23 $F_1$ point and 0.13/0.3/0.1 $F_1$ point on both benchmark datasets, CoNLL-2003 and ONTONOTES 5.0. We show that our HS feature is also comparable with previous much more complicated LS feature, and our model surpasses their baseline (without LS feature) by 0.58/0.78 $F_1$ point with only HS features. We establish a new state-of-the-art $F_1$ score of 89.75 on ONTONOTES 5.0, while matching state-of-the-art performance with a $F_1$ score of 92.95 on CoNLL-2003 dataset." ], "highlighted_evidence": [ "To evaluate the influence of our hypersphere feature for off-the-shelf NER systems, we perform the NE recognition on two standard NER benchmark datasets, CoNLL2003 and ONTONOTES 5.0." ] } ], "annotation_id": [ "03481476921bcf31cdd24affad747bd14f3e4e0e" ], "worker_id": [ "594e0b1297abe0ad3e2555ad27eedfb59c442bb9" ] } ] }
{ "caption": [ "Table 1: Top-5 Nearest Neighbors.", "Figure 1: Graphical representation of the distribution of the NEs in zh (left) and en (right). Big Xs indicate the center of each entity type, while circles refer to words. Language code: zh-Chinese, en-English, same for all the figures and tables hereafter.", "Figure 2: Affine mappings preserve relative ratios.", "Table 2: Statistics of Wikipedia corpus and annotated data (the digit in parentheses indicates the proportion of the single-word NEs).", "Table 3: Maximum F1 scores for NE types.", "Table 4: Comparisons of NE extraction performance with cross-lingual embeddings.", "Table 5: Manually examine the precision on Top-100 nearest words to the hypersphere center.", "Table 6: F1 scores on CoNLL-2003 and ONTONOTES 5.0 datasets. HS represents hypersphere features. The title reported indicates the results reported from the original corresponding paper, while our run indicates the results from our re-implementation or re-run the code provided by the authors. ERR in the brackets is the relative error rate reduction of our models compared to the respective baselines.", "Table 7: Comparisons with state-of-the-art systems on CoNLL-2003 dataset (Peters et al., 2018; Ghaddar and Langlais, 2018) for each entity type." ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "4-Figure2-1.png", "5-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "7-Table7-1.png" ] }
1701.03051
Efficient Twitter Sentiment Classification using Subjective Distant Supervision
As microblogging services like Twitter become more and more influential in today's globalised world, sentiment analysis of their content is being extensively studied. We are no longer constrained by our own opinions; others' opinions and sentiments play a huge role in shaping our perspective. In this paper, we build on previous work on Twitter sentiment analysis using Distant Supervision. The existing approach requires huge computational resources for analysing a large number of tweets. In this paper, we propose techniques to speed up the computation process for sentiment analysis. We use tweet subjectivity to select the right training samples. We also introduce the concept of the EFWS (Effective Word Score) of a tweet, derived from the polarity scores of frequently used words, which is an additional heuristic that can be used to speed up sentiment classification with standard machine learning algorithms. We performed our experiments using 1.6 million tweets. Experimental evaluations show that our proposed technique is more efficient and has higher accuracy compared to previously proposed methods. We achieve overall accuracies of around 80% (the EFWS heuristic gives an accuracy of around 85%) on a training dataset of 100K tweets, which is half the size of the dataset used for the baseline model. The accuracy of our proposed model is 2-3% higher than that of the baseline model, and the model effectively trains at twice the speed of the baseline model.
{ "section_name": [ "Introduction", "Related Work", "Subjectivity", "Implementation", "Corpus", "Subjectivity Filtering", "Preprocessing", "Baseline model", "Effective Word Score (EFWS) Heuristic", "Training Model", "Evaluation", "Conclusion" ], "paragraphs": [ [ "A lot of work has been done in the field of Twitter sentiment analysis to date. Sentiment analysis has been handled as a Natural Language Processing task at many levels of granularity. Most of these techniques use Machine Learning algorithms with features such as unigrams, n-grams, and Part-Of-Speech (POS) tags. However, the training datasets are often very large, and hence with such a large number of features, this process requires a lot of computation power and time. The following question arises: what can we do if we do not have resources that provide such a great amount of computation power? The existing solution to this problem is to use a smaller sample of the dataset. For sentiment analysis, if we train the model using a smaller randomly chosen sample, then we get low accuracy [16, 17]. In this paper, we propose a novel technique to sample tweets for building a sentiment classification model so that we get higher accuracy than the state-of-the-art baseline method, namely Distant Supervision, using a smaller set of tweets. Our model has lower computation time and higher accuracy compared to the baseline model.", "Users often express sentiment using subjective expressions. Although objective expressions can also have sentiment, it is much rarer. Determining subjectivity is quite efficient compared to determining sentiment. Subjectivity can be determined for individual tweets. But to do sentiment classification, we need to build a classification model with positive and negative sentiment tweets. The time to train a sentiment classification model increases with the increase in the number of training tweets. In this paper, we use tweet subjectivity to select the best training tweets. This not only lowers the computation time but also increases the accuracy because we have training data with less noise. Even the created features will be more relevant to the classification task. The computation cost reduces due to the smaller training data size and the better set of features. Thus if users do not have enough computational resources, they can filter the training dataset using a high value of the subjectivity threshold. This ensures reliable prediction on a smaller training dataset, and eventually requires less computational time. The above approach, and some of the intricacies that invariably seep in, need to be considered, and are described in the later sections of the paper. In this paper we also integrate a number of meticulous preprocessing steps. This makes our model more robust, and hence leads to higher accuracy.", "Along with the machine learning algorithms being used, we use a heuristic-based classification of tweets. This is based on the EFWS of a tweet, which is described in later sections. This heuristic basically takes into account the polarity scores of frequently used words in tweets, and is able to achieve around 85% accuracy on our dataset, hence boosting the overall accuracy by a considerable amount.", "Our training data consists of generic (not topic-specific) Twitter messages with emoticons, which are used as noisy labels. 
We show that the accuracy obtained on a training dataset comprising 100K tweets, and a test dataset of 5000 tweets gives an accuracy of around 80% on the following classifiers: Naive Bayes, RBF-kernel Support Vector Machine, and Logistic Regression. Our model takes roughly half the time to train and achieves higher accuracy (than the baseline model) on all the classifiers. Because the amount of training time is expected to increase exponentially as the training data increases, we expect our model to outperform (in terms of higher accuracy) the baseline model at a speed which is at least twofold the speed of the baseline model on larger datasets." ], [ "There has been a large amount of prior research in sentiment analysis of tweets. Read [10] shows that using emoticons as labels for positive and sentiment is effective for reducing dependencies in machine learning techniques. Alec Go [1] used Naive Bayes, SVM, and MaxEnt classifiers to train their model. This, as mentioned earlier, is our baseline model. Our model builds on this and achieves higher accuracy on a much smaller training dataset.", "Ayushi Dalmia [6] proposed a model with a more involved preprocessing stage, and used features like scores from Bing Liu’s Opinion Lexicon, and number of positive, negative POS tags. This model achieved considerably high accuracies considering the fact that their features were the not the conventional bag-of-words, or any n-grams. The thought of using the polarity scores of frequently used tweet words (as described in our EFWS heuristic) was inspired from this work. [14] created prior probabilities using the datasets for the average sentiment of tweets in different spatial, temporal and authorial contexts. They then used a Bayesian approach to combine these priors with standard bigram language models.", "Another significant effort in sentiment analysis on Twitter data is by Barbosa [16]. They use polarity predictions from three websites as noisy labels to train a model and use 1000 manually labelled tweets for tuning and another 1000 for testing. They propose the use of syntax features of tweets like punctuation, retweet, hashtags, link, and exclamation marks in addition with features like prior polarity of words and POS of words.", "Some works leveraged the use of existing hashtags in the Twitter data for building the training data. (Davidov, Tsur, and Rappoport 2010) also use hashtags for creating training data, but they limit their experiments to sentiment/non-sentiment classification, rather than 3-way polarity classification, as [15] does. Our model integrates some of the preprocessing techniques this work used. Hassan Saif [9] introduced a novel approach of adding semantics as additional features into the training set for sentiment analysis. This approach works well for topic specific data. Hence, we thought of taking a different approach for a generic tweet dataset like ours." ], [ "Subjectivity refers to how someone's judgment is shaped by personal opinions and feelings instead of outside influences. An objective perspective is one that is not influenced by emotions, opinions, or personal feelings - it is a perspective based in fact, in things quantifiable and measurable. A subjective perspective is one open to greater interpretation based on personal feeling, emotion, aesthetics, etc.", "Subjectivity classification is another topic in the domain of text classification which is garnering more and more interest in the field of sentiment analysis. 
Since a single sentence may contain multiple opinions and subjective and factual clauses, this problem is not as straightforward as it seems. Below are some examples of subjective and objective sentences.", "Objective sentence with no sentiment: So, the Earth revolves around the Sun.", "Objective sentence with sentiment: The drug relieved my pain.", "Subjective sentence with no sentiment: I believe he went home yesterday.", "Subjective sentence with sentiment: I am so happy you got the scholarship.", "Classifying a sentence as subjective or objective provides certain conclusions. Purely objective sentences do not usually convey any sentiment, while most of the purely subjective sentences have a clear inclination towards either the positive or negative sentiment. Sentences which are not completely subjective or objective may or may not convey a sentiment. Libraries like TextBlob, and tools like Opinion Finder can be used to find the extent to which a sentence can be considered subjective.", "Since tweets are usually person-specific, or subjective, we use this intuition to reduce the size of the training set by filtering the sentences with a subjectivity level below a certain threshold (fairly objective tweets)." ], [ "In this section, we explain the various preprocessing techniques used for feature reduction, and also the additional step of filtering the training dataset using the subjectivity score of tweets. We further describe our approach of using different machine learning classifiers and feature extractors. We also propose an additional heuristic for sentiment classification which can be used as a tag-along with the learning heuristics." ], [ "Our training dataset has 1.6 million tweets, and 5000 tweets in the test dataset. Since the test dataset provided comprised only 500 tweets, we have taken part of the training data (exactly 5000 tweets, distinct from the training dataset) as the test dataset. We remove emoticons from our training and test data. The table below shows some sample tweets.", "" ], [ "This is a new step we propose to achieve higher accuracy on a smaller training dataset. We use TextBlob to classify each tweet as subjective or objective. We then remove all tweets which have a subjectivity level/score (score lies between 0 and 1) below a specified threshold. The remaining tweets are used for training purposes. We observe that a considerable number of tweets are removed as the subjectivity threshold increases. We show the effect of doing this procedure on the overall accuracy in the evaluation section of the paper." ], [ "The Twitter language model has many unique properties. We take advantage of the following properties to reduce the feature space. Most of the preprocessing steps are common to most of the previous works in the field. However, we have added some more steps to this stage of our model.", "We first strip off the emoticons from the data. Users often include twitter usernames in their tweets in order to direct their messages. We also strip off usernames (e.g. @Chinmay) and URLs present in tweets because they do not help us in sentiment classification. Apart from full stops, which are dealt in the next point, other punctuations and special symbols are also removed. Repeated whitespaces are replaced with a single space. We also perform stemming to reduce the size of the feature space.", "In the previous works, full stops are just usually replaced by a space. 
However, we have observed that casual language in tweets often appears in the form of repeated punctuation. For example, “this is so cool...wow\". We take this format into consideration, and replace two or more occurrences of “.\" and “-\" with a space. Full stops also vary quite a bit in usage. Sometimes, there isn't any space between sentences. For example, “It’s raining.Feeling awesome\". We replace a single occurrence of a full stop with a space to ensure correct feature incorporation.", "In the case of hashtags, most of the previous works only consider hashtags followed by a single word; they just remove the hashtag and add the word to the feature vector. However, sometimes there are multiple words after a hashtag, and more often than not, these words form an important, conclusive part of the tweet. For example, #ThisSucks, or #BestMomentEver. These hashtags need to be dealt with in a proper fashion. We split the text after a hashtag before each capital letter, and add the resulting tokens to the feature vector. For hashtags followed by a single word, we just replace the pattern #word with the word, as conventional models do. The intuition behind this step is that quite often, the sentiment of a tweet is expressed in the form of a hashtag. For example, #happy or #disappointed are frequently used hashtags, and we don’t want to lose this information during sentiment classification.", "Tweets contain very casual language, as mentioned earlier. For example, if we search “wow\" with an arbitrary number of o's in the middle (e.g. wooow, woooow) on Twitter, there will most likely be a non-empty result set. We use preprocessing so that any letter occurring more than two times in a row is replaced with two occurrences. In the samples above, these words would be converted into the token “woow\". After all the above modifications, tweets are converted into lowercase to avoid confusion between features that have the same content but differ in capitalization.", "We gather a list of 400 stopwords. These words, if present in the tweets, are not considered in the feature vector.", "We store an acronym dictionary which has over 5000 frequently used acronyms and their abbreviations. We replace such acronyms in tweets with their abbreviation, since these can be of great use for sentiment classification.", "All negative words like 'cannot', 'can't', 'won't', 'don't' are replaced by 'not', which effectively keeps the sentiment stable. It is observed that doing this makes the training faster, since the model has to deal with a smaller feature vector." ], [ "The baseline model for our experiments is explained in the paper by Alec Go [1]. The model uses the Naive Bayes, SVM, and Maximum Entropy classifiers for its experiments. The feature vector is composed of either Unigrams, Bigrams, Unigrams + Bigrams, or Unigrams + POS tags.", "This work achieved the following maximum accuracies:", "a) 82.2 for the Unigram feature vector, using the SVM classifier,", "b) 83.0 for the Unigram + Bigram feature vector, using the MaxEnt classifier, and 82.7 using the Naive Bayes classifier.", "c) 81.9 for the Unigram + POS feature vector, using the SVM classifier.", "These baseline accuracies were on a training dataset of 1.6 million tweets, and a test dataset of 500 tweets. We are using the same training dataset for our experiments. 
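Before reporting numbers, here is a minimal sketch of the preprocessing steps described above. The regular expressions and the processing order are illustrative rather than the exact ones used in the paper, and the acronym/stopword lists are assumed to be loaded elsewhere.

```python
import re

def preprocess(tweet, acronyms=None, stopwords=None):
    """Illustrative cleanup: usernames/URLs, hashtags, punctuation,
    letter elongation, casing, negations, acronyms and stopwords."""
    t = re.sub(r"@\w+|https?://\S+", " ", tweet)               # usernames and URLs
    t = re.sub(r"#(\w+)",                                       # #BestMomentEver -> Best Moment Ever
               lambda m: re.sub(r"(?<!^)(?=[A-Z])", " ", m.group(1)), t)
    t = re.sub(r"[.\-]{2,}", " ", t)                            # "cool...wow" -> "cool wow"
    t = t.replace(".", " ")                                     # "raining.Feeling" -> "raining Feeling"
    t = re.sub(r"(\w)\1{2,}", r"\1\1", t)                       # "woooow" -> "woow"
    t = t.lower()
    t = re.sub(r"\b(cannot|can't|won't|don't|\w+n't)\b", "not", t)  # negations -> "not"
    tokens = [acronyms.get(w, w) if acronyms else w for w in t.split()]
    if stopwords:
        tokens = [w for w in tokens if w not in stopwords]
    return " ".join(tokens)
```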
We later present the baseline accuracies on a training set of 200K tweets and a test dataset of 5000 tweets; we compare our model's accuracy with these baseline accuracy values on the same test data of 5000 tweets." ], [ "We have described our baseline model above. The feature vectors we collate results for are Unigram, Unigram + Bigram, and Unigram + POS. Compared to the baseline model, we have already made two major changes before training starts on our dataset. Firstly, our training dataset is filtered according to the subjectivity threshold. Secondly, our preprocessing is much more robust compared to their work.", "Now let us look at an additional heuristic we use to obtain labels for our test data. Along with dictionaries for stop words and acronyms, we also maintain a dictionary of frequently used words and their polarity scores. This dictionary has around 2500 words with polarity scores ranging from -5 to 5. At runtime, we also use all WordNet synonyms of a word that is present in both a tweet and the dictionary, and assign them the same score as the dictionary word. There is a reasonable assumption here, that the synonyms aren't very extreme in nature, that is, a word with a polarity score of 2 cannot have a synonym which has a polarity score of 5. Now, we calculate the Effective Word Scores of a tweet.", "We define the Effective Word Score of score x as", " ", "EFWS(x) = N(+x) - N(-x),", " ", "where N(x) is the number of words in the tweet with polarity score x.", "For example, if a tweet has one word with score 5, three words with score 4, two with score 2, three with score -2, one with score -3, and finally two with score -4, then the effective word scores are:", "EFWS(5) = N(5) - N(-5) = 1 - 0 = 1", "EFWS(4) = N(4) - N(-4) = 3 - 2 = 1", "EFWS(3) = N(3) - N(-3) = 0 - 1 = -1", "EFWS(2) = N(2) - N(-2) = 2 - 3 = -1", "EFWS(1) = N(1) - N(-1) = 0 - 0 = 0", "We now define the heuristic for obtaining the label of a tweet.", " (EFWS(5) $\ge $ 1 or EFWS(4) $\ge $ 1) and (EFWS(2) $\ge $ 1) $\Rightarrow $ Label = positive ", "Similarly,", " (EFWS(5) $\le $ -1 or EFWS(4) $\le $ -1) and (EFWS(2) $\le $ -1) $\Rightarrow $ Label = negative ", "The basic intuition behind this heuristic is that we found that tweets with more strongly positive and moderately positive words than strongly negative and moderately negative words, respectively, usually conveyed a positive sentiment. The same held for negative sentiment. The tweets getting a label from this heuristic are not sent into the training phase. After a considerable amount of experimentation and analysis of the nature of our dataset, which is not domain specific, we have reached the conclusion that the heuristic mentioned above is optimal for obtaining labels. We found that the heuristic accuracy was around 85% for a training dataset of 100K and a test dataset of 5K, where the total number of test tweets labelled by the heuristic was around 500. This means that around 425 out of the 500 tweets received a correct prediction of sentiment using this heuristic.", "Thus, using this heuristic improves the overall accuracy and also saves time by reducing the number of tweets to be tested by the ML algorithms." ], [ "We use the following classifiers for our model.", "Naive Bayes is a simple model which works well on text categorization. We use a Naive Bayes model. Class $c^*$ is assigned to tweet $d$ , where $c^* = \arg \max _c P(c|d)$ . 
INLINEFORM1 ", "And INLINEFORM0 is calculated using Bayes Rule. In this formula, f represents a feature and INLINEFORM1 represents the count of feature INLINEFORM2 found in tweet d. There are a total of m features. Parameters P(c) and INLINEFORM3 are obtained through maximum likelihood estimates.", "Support vector machines are based on the Structural Risk Minimization principle from computational learning theory. SVM classification algorithms for binary classification is based on finding a separation between hyperplanes defined by classes of data. One remarkable property of SVMs is that their ability to learn can be independent of the dimensionality of the feature space. SVMs can generalize even in the presence of many features as in the case of text data classification. We use a non-linear Support Vector Machine with an RBF kernel.", "Maximum Entropy Model belongs to the family of discriminative classifiers also known as the exponential or log-linear classifiers.. In the naive Bayes classifier, Bayes rule is used to estimate this best y indirectly from the likelihood INLINEFORM0 (and the prior INLINEFORM1 ) but a discriminative model takes this direct approach, computing INLINEFORM2 by discriminating among the different possible values of the class y rather than first computing a likelihood. INLINEFORM3 ", "Logistic regression estimates INLINEFORM0 by combining the feature set linearly (multiplying each feature by a weight and adding them up), and then applying a function to this combination." ], [ "In this section, we present the collated results of our experiments. To show that our model achieves higher accuracy than the baseline model and on a smaller training dataset, we first fix the test dataset. Our test dataset, as mentioned before, consists of 5000 tweets. We conducted our experiments on an Intel Core i5 machine (4 cores), with 8 GB RAM. The following are the accuracies of the baseline model on a training set of 200K tweets:", "", "We filtered the training set with a subjectivity threshold of 0.5. By doing this, we saw that the number of tweets reduced to approximately 0.6 million tweets from an earlier total of 1.6 million. We then trained our model described in earlier sections on a 100K tweets randomly picked from this filtered training dataset, and observed the following accuracies:", "", "Note that all the accuracies in the tables above have been recorded as the average of 3 iterations of our experiment. We achieve higher accuracy for all feature vectors, on all classifiers, and that too from a training dataset half the size of the baseline one.", "We now see the intricacies of the subjectivity threshold parameter. It is clear that more and more tweets get filtered as the subjectivity threshold parameter increases. This can be seen in the Figure 1 shown below. We have plotted the number of tweets that remain after filtering from two sources: TextBlob, Opinion Finder Tool. TextBlob has an inbuilt function that provides us the subjectivity level of a tweet. On the other hand, Opinion Finder only provides the information of which parts of the text are subjective, and which are objective. 
From that, we define the subjectivity level of that text as:", "Subjectivity level = INLINEFORM0 ", "Figure 1: Number of tweets with subjectivity greater than the subjectivity threshold (remaining tweets, in millions, plotted against the subjectivity threshold for TextBlob and Opinion Finder)", "Figure 2: Variation of accuracy (*Training data of 100K, Test data of 5K) with subjectivity threshold. *TextBlob is used to filter the tweets to form the training dataset.", "We now focus on the issue of choosing the optimum threshold value. As the subjectivity threshold parameter increases, our model trains on tweets with a higher subjectivity level, and the overall accuracy increases. We observed the following accuracies at a subjectivity level of 0.8 (Unigrams as features):", "Naive Bayes: 80.32%", "Non-linear SVM: 80.15%", "Logistic Regression: 81.77%", "We should consider the fact that a lot of useful tweets are also lost in the process of gradually increasing the parameter, and this could cause a problem in cases where the test data is very large, because the model will not train on a generic dataset. Researchers may use a higher subjectivity threshold for their experiments if they are confident that most of the important information would be retained. This is most likely to happen in the case of topic-specific or domain-specific data.", "Figure 3: Comparison of training times for Unigrams (training time, in minutes, for the baseline and for subjectivity thresholds 0.5 and 0.8, across Logistic Regression, Naive Bayes and SVM)", "Figure 4: Comparison of training times for Unigrams + Bigrams (same setup as Figure 3)", "We use Logistic regression for classification and unigrams as the feature vector with K-fold cross validation for determining the accuracy. 
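A minimal sketch of this setup is shown below: filter the training tweets by TextBlob subjectivity at a given threshold, then score a unigram logistic-regression model with K-fold cross-validation, which is how a curve like the one in Figure 2 could be traced out. The library calls are standard scikit-learn/TextBlob; the helper name and hyperparameters are hypothetical.

```python
from textblob import TextBlob
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def accuracy_at_threshold(tweets, labels, threshold, k=5):
    """K-fold CV accuracy of a unigram logistic-regression model trained only
    on tweets whose TextBlob subjectivity exceeds `threshold`."""
    kept = [(t, y) for t, y in zip(tweets, labels)
            if TextBlob(t).sentiment.subjectivity > threshold]
    texts, ys = map(list, zip(*kept))
    model = make_pipeline(CountVectorizer(ngram_range=(1, 1)),
                          LogisticRegression(max_iter=1000))
    return cross_val_score(model, texts, ys, cv=k, scoring="accuracy").mean()

# Sweep thresholds 0.1 ... 0.9 to trace an accuracy-vs-threshold curve:
# accs = [accuracy_at_threshold(train_tweets, train_labels, th / 10) for th in range(1, 10)]
```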
We choose an optimal threshold value of 0.5 for our experiment, considering the fact that the model should train on a more generic dataset. Figure 2 shows the variation of accuracy with the subjectivity threshold. The training size is fixed at 100K and the test dataset (5K tweets) is also same for all the experiments.", "We also measure the time taken to train our model, and compare it to the baseline model. Our observation was that our model took roughly half the amount of time in some cases and yet obtained a higher accuracy. Figures 3 and 4 show the difference in training time of the baseline model, our model on a 0.5 subjectivity-filtered dataset, and our model on a 0.8 subjectivity-filtered dataset on unigrams and unigrams + bigrams respectively. The times recorded are on a training dataset of 100K for our model and 200K for the baseline model, and a test dataset of 5K was fixed in all the recordings. The winning point, which can be seen from the plots, is that our model is considerably faster, and even has twofold speed in some cases. And alongside saving computation time, it achieves higher accuracy. This can be attributed to the fact that as the subjectivity threshold increases, only the tweets with highly polar words are retained in the training set and this makes the whole process faster." ], [ "We show that a higher accuracy can be obtained in sentiment classification of Twitter messages training on a smaller dataset and with a much faster computation time, and hence the issue of constraint on computation power is resolved to a certain extent. This can be achieved using a subjectivity threshold to selectively filter the training data, incorporating a more complex preprocessing stage, and using an additional heuristic for sentiment classification, along with the conventional machine learning techniques. As Twitter data is abundant, our subjectivity filtering process can achieve a better generalised model for sentiment classification." ] ] }
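For reference, here is a minimal sketch of the EFWS labeling heuristic from the Training Model section. It assumes a word-to-polarity dictionary with scores in [-5, 5]; the WordNet synonym expansion is omitted, and returning None is how this sketch defers a tweet to the trained classifiers.

```python
def efws(tokens, polarity):
    """Effective Word Scores: EFWS(x) = N(+x) - N(-x) for x in 1..5."""
    counts = {}
    for w in tokens:
        s = polarity.get(w, 0)
        if s != 0:
            counts[s] = counts.get(s, 0) + 1
    return {x: counts.get(x, 0) - counts.get(-x, 0) for x in range(1, 6)}

def heuristic_label(tokens, polarity):
    e = efws(tokens, polarity)
    if (e[5] >= 1 or e[4] >= 1) and e[2] >= 1:
        return "positive"
    if (e[5] <= -1 or e[4] <= -1) and e[2] <= -1:
        return "negative"
    return None  # defer to the ML classifiers
```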
{ "question": [ "What previously proposed methods is this method compared against?", "How is effective word score calculated?", "How is tweet subjectivity measured?" ], "question_id": [ "6b55b558ed581759425ede5d3a6fcdf44b8082ac", "3e3f5254b729beb657310a5561950085fa690e83", "5bb96b255dab3e47a8a68b1ffd7142d0e21ebe2a" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "twitter", "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Naive Bayes", "SVM", "Maximum Entropy classifiers" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The baseline model for our experiments is explained in the paper by Alec Go [1]. The model uses the Naive Bayes, SVM, and the Maximum Entropy classifiers for their experiment. Their feature vector is either composed of Unigrams, Bigrams, Unigrams + Bigrams, or Unigrams + POS tags." ], "highlighted_evidence": [ "The baseline model for our experiments is explained in the paper by Alec Go [1]. The model uses the Naive Bayes, SVM, and the Maximum Entropy classifiers for their experiment." ] } ], "annotation_id": [ "285a3bb556f5ed56113b8ea178c104624b8db80e" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "We define the Effective Word Score of score x as\n\nEFWS(x) = N(+x) - N(-x),\n\nwhere N(x) is the number of words in the tweet with polarity score x." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We define the Effective Word Score of score x as", "EFWS(x) = N(+x) - N(-x),", "where N(x) is the number of words in the tweet with polarity score x." ], "highlighted_evidence": [ "We define the Effective Word Score of score x as\n\nEFWS(x) = N(+x) - N(-x),\n\nwhere N(x) is the number of words in the tweet with polarity score x." ] } ], "annotation_id": [ "0368782a4ee744662a817c59fa35a7bd14e3fc1c" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "7a7e46b5cd58d7c87bd4b3271f9c5dea13809d08" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: Number of tweets with subjectivity greater than the subjectivity threshold", "Figure 3: Comparison of training times for Unigrams", "Figure 2: Variation of accuracy (*Training data of 100K, Test data of 5K) with subjectivity threshold. *TextBlob is used to filter the tweets to form the training dataset.", "Figure 4: Comparison of training times for Unigrams + Bigrams" ], "file": [ "5-Figure1-1.png", "5-Figure3-1.png", "5-Figure2-1.png", "5-Figure4-1.png" ] }
1603.01417
Dynamic Memory Networks for Visual and Textual Question Answering
Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the \babi-10k text question-answering dataset without supporting fact supervision.
{ "section_name": [ "Introduction", "Dynamic Memory Networks", "Improved Dynamic Memory Networks: DMN+", "Input Module for Text QA", "Input Module for VQA", "The Episodic Memory Module", "Related Work", "Datasets", "bAbI-10k", "DAQUAR-ALL visual dataset", "Visual Question Answering", "Model Analysis", "Comparison to state of the art using bAbI-10k", "Comparison to state of the art using VQA", "Conclusion" ], "paragraphs": [ [ "Neural network based methods have made tremendous progress in image and text classification BIBREF0 , BIBREF1 . However, only recently has progress been made on more complex tasks that require logical reasoning. This success is based in part on the addition of memory and attention components to complex neural networks. For instance, memory networks BIBREF2 are able to reason over several facts written in natural language or (subject, relation, object) triplets. Attention mechanisms have been successful components in both machine translation BIBREF3 , BIBREF4 and image captioning models BIBREF5 .", "The dynamic memory network BIBREF6 (DMN) is one example of a neural network model that has both a memory component and an attention mechanism. The DMN yields state of the art results on question answering with supporting facts marked during training, sentiment analysis, and part-of-speech tagging.", "We analyze the DMN components, specifically the input module and memory module, to improve question answering. We propose a new input module which uses a two level encoder with a sentence reader and input fusion layer to allow for information flow between sentences. For the memory, we propose a modification to gated recurrent units (GRU) BIBREF7 . The new GRU formulation incorporates attention gates that are computed using global knowledge over the facts. Unlike before, the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training. The model learns to select the important facts from a larger set.", "In addition, we introduce a new input module to represent images. This module is compatible with the rest of the DMN architecture and its output is fed into the memory module. We show that the changes in the memory module that improved textual question answering also improve visual question answering. Both tasks are illustrated in Fig. 1 ." ], [ "We begin by outlining the DMN for question answering and the modules as presented in BIBREF6 .", "The DMN is a general architecture for question answering (QA). It is composed of modules that allow different aspects such as input representations or memory components to be analyzed and improved independently. The modules, depicted in Fig. 1 , are as follows:", "Input Module: This module processes the input data about which a question is being asked into a set of vectors termed facts, represented as $F=[f_1,\\hdots ,f_N]$ , where $N$ is the total number of facts. These vectors are ordered, resulting in additional information that can be used by later components. For text QA in BIBREF6 , the module consists of a GRU over the input words.", "As the GRU is used in many components of the DMN, it is useful to provide the full definition. 
For each time step $i$ with input $x_i$ and previous hidden state $h_{i-1}$ , we compute the updated hidden state $h_i = GRU(x_i,h_{i-1})$ by ", "$$u_i &=& \\sigma \\left(W^{(u)}x_{i} + U^{(u)} h_{i-1} + b^{(u)} \\right)\\\\\nr_i &=& \\sigma \\left(W^{(r)}x_{i} + U^{(r)} h_{i-1} + b^{(r)} \\right)\\\\\n\\tilde{h}_i &=& \\tanh \\left(Wx_{i} + r_i \\circ U h_{i-1} + b^{(h)}\\right)\\\\\nh_i &=& u_i\\circ \\tilde{h}_i + (1-u_i) \\circ h_{i-1}$$ (Eq. 2) ", "where $\\sigma $ is the sigmoid activation function, $\\circ $ is an element-wise product, $W^{(z)}, W^{(r)}, W \\in \\mathbb {R}^{n_H \\times n_I}$ , $U^{(z)}, U^{(r)}, U \\in \\mathbb {R}^{n_H \\times n_H}$ , $n_H$ is the hidden size, and $n_I$ is the input size.", "Question Module: This module computes a vector representation $q$ of the question, where $q \\in \\mathbb {R}^{n_H}$ is the final hidden state of a GRU over the words in the question.", "Episodic Memory Module: Episode memory aims to retrieve the information required to answer the question $q$ from the input facts. To improve our understanding of both the question and input, especially if questions require transitive reasoning, the episode memory module may pass over the input multiple times, updating episode memory after each pass. We refer to the episode memory on the $t^{th}$ pass over the inputs as $m^t$ , where $m^t \\in \\mathbb {R}^{n_H}$ , the initial memory vector is set to the question vector: $m^0 = q$ .", "The episodic memory module consists of two separate components: the attention mechanism and the memory update mechanism. The attention mechanism is responsible for producing a contextual vector $c^t$ , where $c^t \\in \\mathbb {R}^{n_H}$ is a summary of relevant input for pass $t$ , with relevance inferred by the question $q$ and previous episode memory $m^{t-1}$ . The memory update mechanism is responsible for generating the episode memory $m^t$ based upon the contextual vector $c^t$ and previous episode memory $m^{t-1}$ . By the final pass $T$ , the episodic memory $m^T$ should contain all the information required to answer the question $c^t \\in \\mathbb {R}^{n_H}$0 .", "Answer Module: The answer module receives both $q$ and $m^T$ to generate the model's predicted answer. For simple answers, such as a single word, a linear layer with softmax activation may be used. For tasks requiring a sequence output, an RNN may be used to decode $a = [q ; m^T]$ , the concatenation of vectors $q$ and $m^T$ , to an ordered set of tokens. The cross entropy error on the answers is used for training and backpropagated through the entire network." ], [ "We propose and compare several modeling choices for two crucial components: input representation, attention mechanism and memory update. The final DMN+ model obtains the highest accuracy on the bAbI-10k dataset without supporting facts and the VQA dataset BIBREF8 . Several design choices are motivated by intuition and accuracy improvements on that dataset." ], [ "In the DMN specified in BIBREF6 , a single GRU is used to process all the words in the story, extracting sentence representations by storing the hidden states produced at the end of sentence markers. The GRU also provides a temporal component by allowing a sentence to know the content of the sentences that came before them. Whilst this input module worked well for bAbI-1k with supporting facts, as reported in BIBREF6 , it did not perform well on bAbI-10k without supporting facts (Sec. 
\"Model Analysis\" ).", "We speculate that there are two main reasons for this performance disparity, all exacerbated by the removal of supporting facts. First, the GRU only allows sentences to have context from sentences before them, but not after them. This prevents information propagation from future sentences. Second, the supporting sentences may be too far away from each other on a word level to allow for these distant sentences to interact through the word level GRU.", "Input Fusion Layer", "For the DMN+, we propose replacing this single GRU with two different components. The first component is a sentence reader, responsible only for encoding the words into a sentence embedding. The second component is the input fusion layer, allowing for interactions between sentences. This resembles the hierarchical neural auto-encoder architecture of BIBREF9 and allows content interaction between sentences. We adopt the bi-directional GRU for this input fusion layer because it allows information from both past and future sentences to be used. As gradients do not need to propagate through the words between sentences, the fusion layer also allows for distant supporting sentences to have a more direct interaction.", "Fig. 2 shows an illustration of an input module, where a positional encoder is used for the sentence reader and a bi-directional GRU is adopted for the input fusion layer. Each sentence encoding $f_i$ is the output of an encoding scheme taking the word tokens $[w^i_1, \\hdots , w^i_{M_i}]$ , where $M_i$ is the length of the sentence.", "The sentence reader could be based on any variety of encoding schemes. We selected positional encoding described in BIBREF10 to allow for a comparison to their work. GRUs and LSTMs were also considered but required more computational resources and were prone to overfitting if auxiliary tasks, such as reconstructing the original sentence, were not used.", "For the positional encoding scheme, the sentence representation is produced by $f_i = \\sum ^{j=1}_M l_j \\circ w^i_j$ , where $\\circ $ is element-wise multiplication and $l_j$ is a column vector with structure $l_{jd} = (1 - j / M) - (d / D) (1 - 2j / M)$ , where $d$ is the embedding index and $D$ is the dimension of the embedding.", "The input fusion layer takes these input facts and enables an information exchange between them by applying a bi-directional GRU. ", "$$\\overrightarrow{f_i} = GRU_{fwd}(f_i, \\overrightarrow{f_{i-1}}) \\\\\n\\overleftarrow{f_{i}} = GRU_{bwd}(f_{i}, \\overleftarrow{f_{i+1}}) \\\\\n\\overleftrightarrow{f_i} = \\overleftarrow{f_i} + \\overrightarrow{f_i}$$ (Eq. 5) ", "where $f_i$ is the input fact at timestep $i$ , $ \\overrightarrow{f_i}$ is the hidden state of the forward GRU at timestep $i$ , and $\\overleftarrow{f_i}$ is the hidden state of the backward GRU at timestep $i$ . This allows contextual information from both future and past facts to impact $\\overleftrightarrow{f_i}$ .", "We explored a variety of encoding schemes for the sentence reader, including GRUs, LSTMs, and the positional encoding scheme described in BIBREF10 . For simplicity and speed, we selected the positional encoding scheme. GRUs and LSTMs were also considered but required more computational resources and were prone to overfitting if auxiliary tasks, such as reconstructing the original sentence, were not used." ], [ "To apply the DMN to visual question answering, we introduce a new input module for images. 
The module splits an image into small local regions and considers each region equivalent to a sentence in the input module for text. The input module for VQA is composed of three parts, illustrated in Fig. 3 : local region feature extraction, visual feature embedding, and the input fusion layer introduced in Sec. \"Input Module for Text QA\" .", "Local region feature extraction: To extract features from the image, we use a convolutional neural network BIBREF0 based upon the VGG-19 model BIBREF11 . We first rescale the input image to $448 \\times 448$ and take the output from the last pooling layer which has dimensionality $d = 512 \\times 14 \\times 14$ . The pooling layer divides the image into a grid of $14 \\times 14$ , resulting in 196 local regional vectors of $d = 512$ .", "Visual feature embedding: As the VQA task involves both image features and text features, we add a linear layer with tanh activation to project the local regional vectors to the textual feature space used by the question vector $q$ .", "Input fusion layer: The local regional vectors extracted from above do not yet have global information available to them. Without global information, their representational power is quite limited, with simple issues like object scaling or locational variance causing accuracy problems.", "To solve this, we add an input fusion layer similar to that of the textual input module described in Sec. \"Input Module for Text QA\" . First, to produce the input facts $F$ , we traverse the image in a snake like fashion, as seen in Figure 3 . We then apply a bi-directional GRU over these input facts $F$ to produce the globally aware input facts $\\overleftrightarrow{F}$ . The bi-directional GRU allows for information propagation from neighboring image patches, capturing spatial information." ], [ "The episodic memory module, as depicted in Fig. 4 , retrieves information from the input facts $\\overleftrightarrow{F} = [\\overleftrightarrow{f_1}, \\hdots , \\overleftrightarrow{f_N}]$ provided to it by focusing attention on a subset of these facts. We implement this attention by associating a single scalar value, the attention gate $g^t_i$ , with each fact $\\overleftrightarrow{f}_i$ during pass $t$ . This is computed by allowing interactions between the fact and both the question representation and the episode memory state. ", "$$z^t_i &=& [\\overleftrightarrow{f_i} \\circ q; \\overleftrightarrow{f_i} \\circ m^{t-1}; \\vert \\overleftrightarrow{f_i} - q \\vert ; \\vert \\overleftrightarrow{f_i} - m^{t-1} \\vert ] \\\\\nZ^t_i &=& W^{(2)} \\tanh \\left(W^{(1)}z^t_i + b^{(1)} \\right)+ b^{(2)} \\\\\ng^t_i &=& \\frac{\\exp (Z^t_i)}{\\sum _{k=1}^{M_i} \\exp (Z^t_k)} $$ (Eq. 10) ", "where $\\overleftrightarrow{f_i}$ is the $i^{th}$ fact, $m^{t-1}$ is the previous episode memory, $q$ is the original question, $\\circ $ is the element-wise product, $|\\cdot |$ is the element-wise absolute value, and $;$ represents concatenation of the vectors.", "The DMN implemented in BIBREF6 involved a more complex set of interactions within $z$ , containing the additional terms $[f; m^{t-1}; q; f^T W^{(b)} q; f^T W^{(b)} m^{t-1}]$ . After an initial analysis, we found these additional terms were not required.", "Attention Mechanism", "Once we have the attention gate $g^t_i$ we use an attention mechanism to extract a contextual vector $c^t$ based upon the current focus. We focus on two types of attention: soft attention and a new attention based GRU. 
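A minimal PyTorch sketch of the gate computation just defined (Eq. 10) is given below; the layer names and hidden size are illustrative and not taken from a released implementation. Both attention variants described next consume these gates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Scalar gates g_i^t over facts from z_i^t = [f*q ; f*m ; |f-q| ; |f-m|],
    a two-layer tanh scorer, and a softmax across facts."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.w1 = nn.Linear(4 * dim, hidden)
        self.w2 = nn.Linear(hidden, 1)

    def forward(self, facts, question, memory):
        # facts: (batch, n_facts, dim); question, memory: (batch, dim)
        q = question.unsqueeze(1).expand_as(facts)
        m = memory.unsqueeze(1).expand_as(facts)
        z = torch.cat([facts * q, facts * m,
                       (facts - q).abs(), (facts - m).abs()], dim=-1)
        scores = self.w2(torch.tanh(self.w1(z))).squeeze(-1)  # (batch, n_facts)
        return F.softmax(scores, dim=-1)                       # gates g_i^t
```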
The latter improves performance and is hence the final modeling choice for the DMN+.", "Soft attention: Soft attention produces a contextual vector $c^t$ through a weighted summation of the sorted list of vectors $\\overleftrightarrow{F}$ and corresponding attention gates $g_i^t$ : $c^t = \\sum _{i=1}^N g^t_i \\overleftrightarrow{f}_i$ This method has two advantages. First, it is easy to compute. Second, if the softmax activation is spiky it can approximate a hard attention function by selecting only a single fact for the contextual vector whilst still being differentiable. However the main disadvantage to soft attention is that the summation process loses both positional and ordering information. Whilst multiple attention passes can retrieve some of this information, this is inefficient.", "Attention based GRU: For more complex queries, we would like for the attention mechanism to be sensitive to both the position and ordering of the input facts $\\overleftrightarrow{F}$ . An RNN would be advantageous in this situation except they cannot make use of the attention gate from Equation .", "We propose a modification to the GRU architecture by embedding information from the attention mechanism. The update gate $u_i$ in Equation 2 decides how much of each dimension of the hidden state to retain and how much should be updated with the transformed input $x_i$ from the current timestep. As $u_i$ is computed using only the current input and the hidden state from previous timesteps, it lacks any knowledge from the question or previous episode memory.", "By replacing the update gate $u_i$ in the GRU (Equation 2 ) with the output of the attention gate $g^t_i$ (Equation ) in Equation , the GRU can now use the attention gate for updating its internal state. This change is depicted in Fig 5 . ", "$$h_i &=& g^t_i \\circ \\tilde{h}_i + (1-g^t_i) \\circ h_{i-1}$$ (Eq. 12) ", "An important consideration is that $g^t_i$ is a scalar, generated using a softmax activation, as opposed to the vector $u_i \\in \\mathbb {R}^{n_H}$ , generated using a sigmoid activation. This allows us to easily visualize how the attention gates activate over the input, later shown for visual QA in Fig. 6 . Though not explored, replacing the softmax activation in Equation with a sigmoid activation would result in $g^t_i \\in \\mathbb {R}^{n_H}$ . To produce the contextual vector $c^t$ used for updating the episodic memory state $m^t$ , we use the final hidden state of the attention based GRU.", "Episode Memory Updates", "After each pass through the attention mechanism, we wish to update the episode memory $m^{t-1}$ with the newly constructed contextual vector $c^t$ , producing $m^t$ . In the DMN, a GRU with the initial hidden state set to the question vector $q$ is used for this purpose. The episodic memory for pass $t$ is computed by ", "$$m^t = GRU(c^t, m^{t-1})$$ (Eq. 13) ", "The work of BIBREF10 suggests that using different weights for each pass through the episodic memory may be advantageous. When the model contains only one set of weights for all episodic passes over the input, it is referred to as a tied model, as in the “Mem Weights” row in Table 1 .", "Following the memory update component used in BIBREF10 and BIBREF12 we experiment with using a ReLU layer for the memory update, calculating the new episode memory state by ", "$$m^t = ReLU\\left(W^t [m^{t-1} ; c^t ; q] + b\\right)$$ (Eq. 
14) ", "where $;$ is the concatenation operator, $W^t \\in \\mathbb {R}^{n_H \\times n_H}$ , $b \\in \\mathbb {R}^{n_H}$ , and $n_H$ is the hidden size. The untying of weights and using this ReLU formulation for the memory update improves accuracy by another 0.5% as shown in Table 1 in the last column. The final output of the memory network is passed to the answer module as in the original DMN." ], [ "The DMN is related to two major lines of recent work: memory and attention mechanisms. We work on both visual and textual question answering which have, until now, been developed in separate communities.", "Neural Memory Models The earliest recent work with a memory component that is applied to language processing is that of memory networks BIBREF2 which adds a memory component for question answering over simple facts. They are similar to DMNs in that they also have input, scoring, attention and response mechanisms. However, unlike the DMN their input module computes sentence representations independently and hence cannot easily be used for other tasks such as sequence labeling. Like the original DMN, this memory network requires that supporting facts are labeled during QA training. End-to-end memory networks BIBREF10 do not have this limitation. In contrast to previous memory models with a variety of different functions for memory attention retrieval and representations, DMNs BIBREF6 have shown that neural sequence models can be used for input representation, attention and response mechanisms. Sequence models naturally capture position and temporality of both the inputs and transitive reasoning steps.", "Neural Attention Mechanisms Attention mechanisms allow neural network models to use a question to selectively pay attention to specific inputs. They can benefit image classification BIBREF13 , generating captions for images BIBREF5 , among others mentioned below, and machine translation BIBREF14 , BIBREF3 , BIBREF4 . Other recent neural architectures with memory or attention which have proposed include neural Turing machines BIBREF15 , neural GPUs BIBREF16 and stack-augmented RNNs BIBREF17 .", "Question Answering in NLP Question answering involving natural language can be solved in a variety of ways to which we cannot all do justice. If the potential input is a large text corpus, QA becomes a combination of information retrieval and extraction BIBREF18 . Neural approaches can include reasoning over knowledge bases, BIBREF19 , BIBREF20 or directly via sentences for trivia competitions BIBREF21 .", "Visual Question Answering (VQA) In comparison to QA in NLP, VQA is still a relatively young task that is feasible only now that objects can be identified with high accuracy. The first large scale database with unconstrained questions about images was introduced by BIBREF8 . While VQA datasets existed before they did not include open-ended, free-form questions about general images BIBREF22 . Others are were too small to be viable for a deep learning approach BIBREF23 . The only VQA model which also has an attention component is the stacked attention network BIBREF24 . Their work also uses CNN based features. However, unlike our input fusion layer, they use a single layer neural network to map the features of each patch to the dimensionality of the question vector. Hence, the model cannot easily incorporate adjacency of local information in its hidden state. 
A model that also uses neural modules, albeit logically inspired ones, is that by BIBREF25 who evaluate on knowledgebase reasoning and visual question answering. We compare directly to their method on the latter task and dataset.", "Related to visual question answering is the task of describing images with sentences BIBREF26 . BIBREF27 used deep learning methods to map images and sentences into the same space in order to describe images with sentences and to find images that best visualize a sentence. This was the first work to map both modalities into a joint space with deep learning methods, but it could only select an existing sentence to describe an image. Shortly thereafter, recurrent neural networks were used to generate often novel sentences based on images BIBREF28 , BIBREF29 , BIBREF30 , BIBREF5 ." ], [ "To analyze our proposed model changes and compare our performance with other architectures, we use three datasets." ], [ "For evaluating the DMN on textual question answering, we use bAbI-10k English BIBREF31 , a synthetic dataset which features 20 different tasks. Each example is composed of a set of facts, a question, the answer, and the supporting facts that lead to the answer. The dataset comes in two sizes, referring to the number of training examples each task has: bAbI-1k and bAbI-10k. The experiments in BIBREF10 found that their lowest error rates on the smaller bAbI-1k dataset were on average three times higher than on bAbI-10k." ], [ "The DAtaset for QUestion Answering on Real-world images (DAQUAR) BIBREF23 consists of 795 training images and 654 test images. Based upon these images, 6,795 training questions and 5,673 test questions were generated. Following the previously defined experimental method, we exclude multiple word answers BIBREF32 , BIBREF33 . The resulting dataset covers 90% of the original data. The evaluation method uses classification accuracy over the single words. We use this as a development dataset for model analysis (Sec. \"Model Analysis\" )." ], [ "The Visual Question Answering (VQA) dataset was constructed using the Microsoft COCO dataset BIBREF34 which contained 123,287 training/validation images and 81,434 test images. Each image has several related questions with each question answered by multiple people. This dataset contains 248,349 training questions, 121,512 validation questions, and 244,302 for testing. The testing data was split into test-development, test-standard and test-challenge in BIBREF8 .", "Evaluation on both test-standard and test-challenge are implemented via a submission system. test-standard may only be evaluated 5 times and test-challenge is only evaluated at the end of the competition. To the best of our knowledge, VQA is the largest and most complex image dataset for the visual question answering task." ], [ "To understand the impact of the proposed module changes, we analyze the performance of a variety of DMN models on textual and visual question answering datasets.", "The original DMN (ODMN) is the architecture presented in BIBREF6 without any modifications. DMN2 only replaces the input module with the input fusion layer (Sec. \"Input Module for Text QA\" ). DMN3, based upon DMN2, replaces the soft attention mechanism with the attention based GRU proposed in Sec. \"The Episodic Memory Module\" . Finally, DMN+, based upon DMN3, is an untied model, using a unique set of weights for each pass and a linear layer with a ReLU activation to compute the memory update. 
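As a concrete illustration of the untied ReLU memory update used by DMN+ (Eq. 14, with a separate weight matrix per pass), here is a minimal PyTorch sketch; the hidden size and number of passes are placeholders rather than values prescribed by the paper.

```python
import torch
import torch.nn as nn

class EpisodeMemoryUpdate(nn.Module):
    """Untied memory update: one linear layer per pass applied to the
    concatenation [m^{t-1}; c^t; q], followed by a ReLU (Eq. 14)."""
    def __init__(self, dim, passes=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(3 * dim, dim) for _ in range(passes)])

    def forward(self, m_prev, c_t, q, t):
        # m_prev, c_t, q: (batch, dim); t: index of the current pass.
        return torch.relu(self.layers[t](torch.cat([m_prev, c_t, q], dim=-1)))
```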
We report the performance of the model variations in Table 1 .", "A large improvement to accuracy on both the bAbI-10k textual and DAQUAR visual datasets results from updating the input module, seen when comparing ODMN to DMN2. On both datasets, the input fusion layer improves interaction between distant facts. In the visual dataset, this improvement is purely from providing contextual information from neighboring image patches, allowing it to handle objects of varying scale or questions with a locality aspect. For the textual dataset, the improved interaction between sentences likely helps the path finding required for logical reasoning when multiple transitive steps are required.", "The addition of the attention GRU in DMN3 helps answer questions where complex positional or ordering information may be required. This change impacts the textual dataset the most as few questions in the visual dataset are likely to require this form of logical reasoning. Finally, the untied model in the DMN+ overfits on some tasks compared to DMN3, but on average the error rate decreases.", "From these experimental results, we find that the combination of all the proposed model changes results, culminating in DMN+, achieves the highest performance across both the visual and textual datasets." ], [ "We trained our models using the Adam optimizer BIBREF35 with a learning rate of 0.001 and batch size of 128. Training runs for up to 256 epochs with early stopping if the validation loss had not improved within the last 20 epochs. The model from the epoch with the lowest validation loss was then selected. Xavier initialization was used for all weights except for the word embeddings, which used random uniform initialization with range $[-\\sqrt{3}, \\sqrt{3}]$ . Both the embedding and hidden dimensions were of size $d = 80$ . We used $\\ell _2$ regularization on all weights except bias and used dropout on the initial sentence encodings and the answer module, keeping the input with probability $p=0.9$ . The last 10% of the training data on each task was chosen as the validation set. For all tasks, three passes were used for the episodic memory module, allowing direct comparison to other state of the art methods. Finally, we limited the input to the last 70 sentences for all tasks except QA3 for which we limited input to the last 130 sentences, similar to BIBREF10 .", "On some tasks, the accuracy was not stable across multiple runs. This was particularly problematic on QA3, QA17, and QA18. To solve this, we repeated training 10 times using random initializations and evaluated the model that achieved the lowest validation set loss.", "Text QA Results", "We compare our best performing approach, DMN+, to two state of the art question answering architectures: the end to end memory network (E2E) BIBREF10 and the neural reasoner framework (NR) BIBREF12 . Neither approach use supporting facts for training.", "The end-to-end memory network is a form of memory network BIBREF2 tested on both textual question answering and language modeling. The model features both explicit memory and a recurrent attention mechanism. We select the model from the paper that achieves the lowest mean error over the bAbI-10k dataset. 
This model utilizes positional encoding for input, RNN-style tied weights for the episode module, and a ReLU non-linearity for the memory update component.", "The neural reasoner framework is an end-to-end trainable model which features a deep architecture for logical reasoning and an interaction-pooling mechanism for allowing interaction over multiple facts. While the neural reasoner framework was only tested on QA17 and QA19, these were two of the most challenging question types at the time.", "In Table 2 we compare the accuracy of these question answering architectures, both as mean error and error on individual tasks. The DMN+ model reduces mean error by 1.4% compared to the the end-to-end memory network, achieving a new state of the art for the bAbI-10k dataset.", "One notable deficiency in our model is that of QA16: Basic Induction. In BIBREF10 , an untied model using only summation for memory updates was able to achieve a near perfect error rate of $0.4$ . When the memory update was replaced with a linear layer with ReLU activation, the end-to-end memory network's overall mean error decreased but the error for QA16 rose sharply. Our model experiences the same difficulties, suggesting that the more complex memory update component may prevent convergence on certain simpler tasks.", "The neural reasoner model outperforms both the DMN and end-to-end memory network on QA17: Positional Reasoning. This is likely as the positional reasoning task only involves minimal supervision - two sentences for input, yes/no answers for supervision, and only 5,812 unique examples after removing duplicates from the initial 10,000 training examples. BIBREF12 add an auxiliary task of reconstructing both the original sentences and question from their representations. This auxiliary task likely improves performance by preventing overfitting." ], [ "For the VQA dataset, each question is answered by multiple people and the answers may not be the same, the generated answers are evaluated using human consensus. For each predicted answer $a_i$ for the $i_{th}$ question with target answer set $T^{i}$ , the accuracy of VQA: $Acc_{VQA} = \\frac{1}{N}\\sum _{i=1}^Nmin(\\frac{\\sum _{t\\in T^i}{1}_{(a_i==t)}}{3},1)$ where ${1}_{(\\cdot )}$ is the indicator function. Simply put, the answer $a_i$ is only 100 $\\%$ accurate if at least 3 people provide that exact answer.", "Training Details We use the Adam optimizer BIBREF35 with a learning rate of 0.003 and batch size of 100. Training runs for up to 256 epochs with early stopping if the validation loss has not improved in the last 10 epochs. For weight initialization, we sampled from a random uniform distribution with range $[-0.08, 0.08]$ . Both the word embedding and hidden layers were vectors of size $d=512$ . We apply dropout on the initial image output from the VGG convolutional neural network BIBREF11 as well as the input to the answer module, keeping input with probability $p=0.5$ .", "Results and Analysis", "The VQA dataset is composed of three question domains: Yes/No, Number, and Other. This enables us to analyze the performance of the models on various tasks that require different reasoning abilities.", "The comparison models are separated into two broad classes: those that utilize a full connected image feature for classification and those that perform reasoning over multiple small image patches. 
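As an aside, the consensus accuracy metric defined at the start of this section can be computed in a few lines of Python; the list-of-strings input format below is an assumption made for illustration.

```python
def vqa_accuracy(predictions, answer_sets):
    """Consensus accuracy: a prediction gets full credit only if at least
    3 annotators gave exactly that answer, and partial credit otherwise."""
    total = 0.0
    for pred, answers in zip(predictions, answer_sets):
        matches = sum(1 for a in answers if a == pred)
        total += min(matches / 3.0, 1.0)
    return total / len(predictions)

# e.g. vqa_accuracy(["red"], [["red", "red", "maroon", "red"]]) == 1.0
```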
Only the SAN and DMN approaches use small image patches, while the rest use the fully-connected whole-image feature approach.", "Here, we show the quantitative and qualitative results in Table 3 and Fig. 6 , respectively. The images in Fig. 6 illustrate how the attention gate $g^t_i$ selectively activates over relevant portions of the image according to the query. In Table 3 , our method outperforms the baseline and other state-of-the-art methods across all question domains (All) in both test-dev and test-std, and achieves an especially wide margin on Other questions, likely because the small image patches allow for finely detailed reasoning over the image.", "However, the granularity offered by small image patches does not always offer an advantage. The Number questions may not be solvable by either the SAN or the DMN architecture, potentially because counting objects is not a simple task when an object crosses image patch boundaries." ], [ "We have proposed new modules for the DMN framework to achieve strong results without supervision of supporting facts. These improvements include the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs. Our resulting model obtains state-of-the-art results on both the VQA dataset and the bAbI-10k text question-answering dataset, showing that the framework can be generalized across input domains." ] ] }
{ "question": [ "Why is supporting fact supervision necessary for DMN?", "What does supporting fact supervision mean?", "What changes they did on input module?", "What improvements they did for DMN?", "How does the model circumvent the lack of supporting facts during training?", "Does the DMN+ model establish state-of-the-art ?" ], "question_id": [ "129c03acb0963ede3915415953317556a55f34ee", "58b3b630a31fcb9bffb510390e1ec30efe87bfbf", "141dab98d19a070f1ce7e7dc384001d49125d545", "afdad4c9bdebf88630262f1a9a86ac494f06c4c1", "bfd4fc82ffdc5b2b32c37f4222e878106421ce2a", "1ce26783f0ff38925bfc07bbbb65d206e52c2d21" ], "nlp_background": [ "two", "two", "two", "two", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "somewhat", "somewhat" ], "search_query": [ "Question Answering", "Question Answering", "Question Answering", "Question Answering", "question answering", "question answering" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "First, the GRU only allows sentences to have context from sentences before them, but not after them. This prevents information propagation from future sentences. Second, the supporting sentences may be too far away from each other on a word level to allow for these distant sentences to interact through the word level GRU." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We speculate that there are two main reasons for this performance disparity, all exacerbated by the removal of supporting facts. First, the GRU only allows sentences to have context from sentences before them, but not after them. This prevents information propagation from future sentences. Second, the supporting sentences may be too far away from each other on a word level to allow for these distant sentences to interact through the word level GRU." ], "highlighted_evidence": [ "First, the GRU only allows sentences to have context from sentences before them, but not after them. This prevents information propagation from future sentences. Second, the supporting sentences may be too far away from each other on a word level to allow for these distant sentences to interact through the word level GRU." ] } ], "annotation_id": [ "38b133b57dfd6847af3f22af637c07e901d6397b" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " the facts that are relevant for answering a particular question) are labeled during training." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We analyze the DMN components, specifically the input module and memory module, to improve question answering. We propose a new input module which uses a two level encoder with a sentence reader and input fusion layer to allow for information flow between sentences. For the memory, we propose a modification to gated recurrent units (GRU) BIBREF7 . The new GRU formulation incorporates attention gates that are computed using global knowledge over the facts. Unlike before, the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training. 
The model learns to select the important facts from a larger set." ], "highlighted_evidence": [ "the facts that are relevant for answering a particular question) are labeled during training." ] } ], "annotation_id": [ "921e8a3a62df5b0aa5475529e01788e038769db3" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "For the DMN+, we propose replacing this single GRU with two different components. The first component is a sentence reader", "The second component is the input fusion layer" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For the DMN+, we propose replacing this single GRU with two different components. The first component is a sentence reader, responsible only for encoding the words into a sentence embedding. The second component is the input fusion layer, allowing for interactions between sentences. This resembles the hierarchical neural auto-encoder architecture of BIBREF9 and allows content interaction between sentences. We adopt the bi-directional GRU for this input fusion layer because it allows information from both past and future sentences to be used. As gradients do not need to propagate through the words between sentences, the fusion layer also allows for distant supporting sentences to have a more direct interaction." ], "highlighted_evidence": [ "replacing this single GRU with two different components", "first component is a sentence reader", "second component is the input fusion layer" ] } ], "annotation_id": [ "c6ed87cf56b9655ae8a9aabab17854771698ca6f" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training.", "In addition, we introduce a new input module to represent images." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We analyze the DMN components, specifically the input module and memory module, to improve question answering. We propose a new input module which uses a two level encoder with a sentence reader and input fusion layer to allow for information flow between sentences. For the memory, we propose a modification to gated recurrent units (GRU) BIBREF7 . The new GRU formulation incorporates attention gates that are computed using global knowledge over the facts. Unlike before, the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training. The model learns to select the important facts from a larger set.", "In addition, we introduce a new input module to represent images. This module is compatible with the rest of the DMN architecture and its output is fed into the memory module. We show that the changes in the memory module that improved textual question answering also improve visual question answering. Both tasks are illustrated in Fig. 1 ." ], "highlighted_evidence": [ "the new DMN+ model does not require that supporting facts", "In addition, we introduce a new input module to represent images." ] } ], "annotation_id": [ "5937adbfa0344daf8d249b3b85b49ee5505996f7" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs. 
" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We have proposed new modules for the DMN framework to achieve strong results without supervision of supporting facts. These improvements include the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs. Our resulting model obtains state of the art results on both the VQA dataset and the bAbI-10k text question-answering dataset, proving the framework can be generalized across input domains." ], "highlighted_evidence": [ " the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs." ] } ], "annotation_id": [ "0382cef1e5dfe7c6b4fca9400027af1e0e7618f2" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We have proposed new modules for the DMN framework to achieve strong results without supervision of supporting facts. These improvements include the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs. Our resulting model obtains state of the art results on both the VQA dataset and the bAbI-10k text question-answering dataset, proving the framework can be generalized across input domains." ], "highlighted_evidence": [ "Our resulting model obtains state of the art results on both the VQA dataset and the bAbI-10k text question-answering dataset, proving the framework can be generalized across input domains." ] } ], "annotation_id": [ "43ef5b1ac333f9158146b6f130fa0311d2e358f0" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] } ] }
{ "caption": [ "Figure 1. Question Answering over text and images using a Dynamic Memory Network.", "Figure 2. The input module with a “fusion layer”, where the sentence reader encodes the sentence and the bi-directional GRU allows information to flow between sentences.", "Figure 3. VQA input module to represent images for the DMN.", "Figure 5. (a) The traditional GRU model, and (b) the proposed attention-based GRU model", "Figure 4. The episodic memory module of the DMN+ when using two passes. The ←→ F is the output of the input module.", "Table 2. Test error rates of various model architectures on tasks from the the bAbI English 10k dataset. E2E = End-To-End Memory Network results from Sukhbaatar et al. (2015). NR = Neural Reasoner with original auxiliary task from Peng et al. (2015). DMN+ and E2E achieve an error of 0 on bAbI question sets (1,4,10,12,13,15,20).", "Table 1. Test error rates of various model architectures on the bAbI-10k dataset, and accuracy performance on the DAQUAR-ALL visual dataset. The skipped bAbI questions (1,4,11,12,13,15,19) achieved 0 error across all models.", "Table 3. Performance of various architectures and approaches on VQA test-dev and test-standard data. Baseline only uses the spatial mean of the last pooling layer without input fusion and episoidic memory; VQA numbers are from Antol et al. (2015); ACK Wu et al. (2015); iBOWIMG -Zhou et al. (2015); DPPnet - Noh et al. (2015); D-NMN - Andreas et al. (2016); SMem-VQA -Xu & Saenko (2015); SAN -Yang et al. (2015)", "Figure 6. Examples of qualitative results of attention for VQA. The original images are shown on the left. On the right we show how the attention gate gti activates given one pass over the image and query. White regions are the most active. Answers are given by the DMN+." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "4-Figure5-1.png", "4-Figure4-1.png", "7-Table2-1.png", "7-Table1-1.png", "8-Table3-1.png", "8-Figure6-1.png" ] }
1911.03385
Low-Level Linguistic Controls for Style Transfer and Content Preservation
Despite the success of style transfer in image processing, it has seen limited progress in natural language generation. Part of the problem is that content is not as easily decoupled from style in the text domain. Curiously, in the field of stylometry, content does not figure prominently in practical methods of discriminating stylistic elements, such as authorship and genre. Rather, syntax and function words are the most salient features. Drawing on this work, we model style as a suite of low-level linguistic controls, such as frequency of pronouns, prepositions, and subordinate clause constructions. We train a neural encoder-decoder model to reconstruct reference sentences given only content words and the setting of the controls. We perform style transfer by keeping the content words fixed while adjusting the controls to be indicative of another style. In experiments, we show that the model reliably responds to the linguistic controls and perform both automatic and manual evaluations on style transfer. We find we can fool a style classifier 84% of the time, and that our model produces highly diverse and stylistically distinctive outputs. This work introduces a formal, extendable model of style that can add control to any neural text generation system.
{ "section_name": [ "Introduction", "Related Work ::: Style Transfer with Parallel Data", "Related Work ::: Style Transfer without Parallel Data", "Related Work ::: Controlling Linguistic Features", "Related Work ::: Stylometry and the Digital Humanities", "Models ::: Preliminary Classification Experiments", "Models ::: Formal Model of Style", "Models ::: Formal Model of Style ::: Reconstruction Task", "Models ::: Neural Architecture", "Models ::: Neural Architecture ::: Baseline Genre Model", "Models ::: Neural Architecture ::: Training", "Models ::: Neural Architecture ::: Selecting Controls for Style Transfer", "Automatic Evaluations ::: BLEU Scores & Perplexity", "Automatic Evaluations ::: Feature Control", "Automatic Evaluations ::: Automatic Classification", "Human Evaluation", "Human Evaluation ::: Fluency Evaluation", "Human Evaluation ::: Human Classification", "Human Evaluation ::: The `Vampires in Space' Problem", "Conclusion and Future Work", "Acknowledgements" ], "paragraphs": [ [ "All text has style, whether it be formal or informal, polite or aggressive, colloquial, persuasive, or even robotic. Despite the success of style transfer in image processing BIBREF0, BIBREF1, there has been limited progress in the text domain, where disentangling style from content is particularly difficult.", "To date, most work in style transfer relies on the availability of meta-data, such as sentiment, authorship, or formality. While meta-data can provide insight into the style of a text, it often conflates style with content, limiting the ability to perform style transfer while preserving content. Generalizing style transfer requires separating style from the meaning of the text itself. The study of literary style can guide us. For example, in the digital humanities and its subfield of stylometry, content doesn't figure prominently in practical methods of discriminating authorship and genres, which can be thought of as style at the level of the individual and population, respectively. Rather, syntactic and functional constructions are the most salient features.", "In this work, we turn to literary style as a test-bed for style transfer, and build on work from literature scholars using computational techniques for analysis. In particular we draw on stylometry: the use of surface level features, often counts of function words, to discriminate between literary styles. Stylometry first saw success in attributing authorship to the disputed Federalist Papers BIBREF2, but is recently used by scholars to study things such as the birth of genres BIBREF3 and the change of author styles over time BIBREF4. The use of function words is likely not the way writers intend to express style, but they appear to be downstream realizations of higher-level stylistic decisions.", "We hypothesize that surface-level linguistic features, such as counts of personal pronouns, prepositions, and punctuation, are an excellent definition of literary style, as borne out by their use in the digital humanities, and our own style classification experiments. We propose a controllable neural encoder-decoder model in which these features are modelled explicitly as decoder feature embeddings. In training, the model learns to reconstruct a text using only the content words and the linguistic feature embeddings. 
We can then transfer arbitrary content words to a new style without parallel data by setting the low-level style feature embeddings to be indicative of the target style.", "This paper makes the following contributions:", "A formal model of style as a suite of controllable, low-level linguistic features that are independent of content.", "An automatic evaluation showing that our model fools a style classifier 84% of the time.", "A human evaluation with English literature experts, including recommendations for dealing with the entanglement of content with style." ], [ "Following in the footsteps of machine translation, style transfer in text has seen success by using parallel data. BIBREF5 use modern translations of Shakespeare plays to build a modern-to-Shakespearan model. BIBREF6 compile parallel data for formal and informal sentences, allowing them to successfully use various machine translation techniques. While parallel data may work for very specific styles, the difficulty of finding parallel texts dramatically limits this approach." ], [ "There has been a decent amount of work on this approach in the past few years BIBREF7, BIBREF8, mostly focusing on variations of an encoder-decoder framework in which style is modeled as a monolithic style embedding. The main obstacle is often to disentangle style and content. However, it remains a challenging problem.", "Perhaps the most successful is BIBREF9, who use a de-noising auto encoder and back translation to learn style without parallel data. BIBREF10 outline the benefits of automatically extracting style, and suggest there is a formal weakness of using linguistic heuristics. In contrast, we believe that monolithic style embeddings don't capture the existing knowledge we have about style, and will struggle to disentangle content." ], [ "Several papers have worked on controlling style when generating sentences from restaurant meaning representations BIBREF11, BIBREF12. In each of these cases, the diversity in outputs is quite small given the constraints of the meaning representation, style is often constrained to interjections (like “yeah”), and there is no original style from which to transfer.", "BIBREF13 investigate using stylistic parameters and content parameters to control text generation using a movie review dataset. Their stylistic parameters are created using word-level heuristics and they are successful in controlling these parameters in the outputs. Their success bodes well for our related approach in a style transfer setting, in which the content (not merely content parameters) is held fixed." ], [ "Style, in literary research, is anything but a stable concept, but it nonetheless has a long tradition of study in the digital humanities. In a remarkably early quantitative study of literature, BIBREF14 charts sentence-level stylistic attributes specific to a number of novelists. Half a century later, BIBREF15 builds on earlier work in information theory by BIBREF16, and defines a literary text as consisting of two “materials\": “the vocabulary, and some structural properties, the style, of its author.\"", "Beginning with BIBREF2, statistical approaches to style, or stylometry, join the already-heated debates over the authorship of literary works. A noteable example of this is the “Delta\" measure, which uses z-scores of function word frequencies BIBREF17. BIBREF18 find that Shakespeare added some material to a later edition of Thomas Kyd's The Spanish Tragedy, and that Christopher Marlowe collaborated with Shakespeare on Henry VI." 
], [ "The stylometric research cited above suggests that the most frequently used words, e.g. function words, are most discriminating of authorship and literary style. We investigate these claims using three corpora that have distinctive styles in the literary community: gothic novels, philosophy books, and pulp science fiction, hereafter sci-fi.", "We retrieve gothic novels and philosophy books from Project Gutenberg and pulp sci-fi from Internet Archive's Pulp Magazine Archive. We partition this corpus into train, validation, and test sets the sizes of which can be found in Table TABREF12.", "In order to validate the above claims, we train five different classifiers to predict the literary style of sentences from our corpus. Each classifier has gradually more content words replaced with part-of-speech (POS) tag placeholder tokens. The All model is trained on sentences with all proper nouns replaced by `PROPN'. The models Ablated N, Ablated NV, and Ablated NVA replace nouns, nouns & verbs, and nouns, verbs, & adjectives with the corresponding POS tag respectively. Finally, Content-only is trained on sentences with all words that are not tagged as NOUN, VERB, ADJ removed; the remaining words are not ablated.", "We train the classifiers on the training set, balancing the class distribution to make sure there are the same number of sentences from each style. Classifiers are trained using fastText BIBREF19, using tri-gram features with all other settings as default. table:classifiers shows the accuracies of the classifiers.", "The styles are highly distinctive: the All classifier has an accuracy of 86%. Additionally, even the Ablated NVA is quite successful, with 75% accuracy, even without access to any content words. The Content only classifier is also quite successful, at 80% accuracy. This indicates that these stylistic genres are distinctive at both the content level and at the syntactic level." ], [ "Given that non-content words are distinctive enough for a classifier to determine style, we propose a suite of low-level linguistic feature counts (henceforth, controls) as our formal, content-blind definition of style. The style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style BIBREF20. Controls are extracted heuristically, and almost all rely on counts of pre-defined word lists. For constituency parses we use the Stanford Parser BIBREF21. table:controlexamples lists all the controls along with examples." ], [ "Models are trained with a reconstruction task, in which a distorted version of a reference sentence is input and the goal is to output the original reference.", "fig:sentenceinput illustrates the process. Controls are calculated heuristically. All words found in the control word lists are then removed from the reference sentence. The remaining words, which represent the content, are used as input into the model, along with their POS tags and lemmas.", "In this way we encourage models to construct a sentence using content and style independently. This will allow us to vary the stylistic controls while keeping the content constant, and successfully perform style transfer. When generating a new sentence, the controls correspond to the counts of the corresponding syntactic features that we expect to be realized in the output." 
], [ "We implement our feature controlled language model using a neural encoder-decoder with attention BIBREF22, using 2-layer uni-directional gated recurrent units (GRUs) for the encoder and decoder BIBREF23.", "The input to the encoder is a sequence of $M$ content words, along with their lemmas, and fine and coarse grained part-of-speech (POS) tags, i.e. $X_{.,j} = (x_{1,j},\\ldots ,x_{M,j})$ for $j \\in \\mathcal {T} = \\lbrace \\textrm {word, lemma, fine-pos, coarse-pos}\\rbrace $. We embed each token (and its lemma and POS) before concatenating, and feeding into the encoder GRU to obtain encoder hidden states, $ c_i = \\operatorname{gru}(c_{i-1}, \\left[E_j(X_{i,j}), \\; j\\in \\mathcal {T} \\right]; \\omega _{enc}) $ for $i \\in {1,\\ldots ,M},$ where initial state $c_0$, encoder GRU parameters $\\omega _{enc}$ and embedding matrices $E_j$ are learned parameters.", "The decoder sequentially generates the outputs, i.e. a sequence of $N$ tokens $y =(y_1,\\ldots ,y_N)$, where all tokens $y_i$ are drawn from a finite output vocabulary $\\mathcal {V}$. To generate the each token we first embed the previously generated token $y_{i-1}$ and a vector of $K$ control features $z = ( z_1,\\ldots , z_K)$ (using embedding matrices $E_{dec}$ and $E_{\\textrm {ctrl-1}}, \\ldots , E_{\\textrm {ctrl-K}}$ respectively), before concatenating them into a vector $\\rho _i,$ and feeding them into the decoder side GRU along with the previous decoder state $h_{i-1}$:", "where $\\omega _{dec}$ are the decoder side GRU parameters.", "Using the decoder hidden state $h_i$ we then attend to the encoder context vectors $c_j$, computing attention scores $\\alpha _{i,j}$, where", "before passing $h_i$ and the attention weighted context $\\bar{c}_i=\\sum _{j=1}^M \\alpha _{i,j} c_j$ into a single hidden-layer perceptron with softmax output to compute the next token prediction probability,", "where $W,U,V$ and $u,v, \\nu $ are parameter matrices and vectors respectively.", "Crucially, the controls $z$ remain fixed for all input decoder steps. Each $z_k$ represents the frequency of one of the low-level features described in sec:formalstyle. During training on the reconstruction task, we can observe the full output sequence $y,$ and so we can obtain counts for each control feature directly. Controls receive a different embedding depending on their frequency, where counts of 0-20 each get a unique embedding, and counts greater than 20 are assigned to the same embedding. At test time, we set the values of the controls according to procedure described in Section SECREF25.", "We use embedding sizes of 128, 128, 64, and 32 for token, lemma, fine, and coarse grained POS embedding matrices respectively. Output token embeddings $E_{dec}$ have size 512, and 50 for the control feature embeddings. We set 512 for all GRU and perceptron output sizes. We refer to this model as the StyleEQ model. See fig:model for a visual depiction of the model." ], [ "We compare the above model to a similar model, where rather than explicitly represent $K$ features as input, we have $K$ features in the form of a genre embedding, i.e. we learn a genre specific embedding for each of the gothic, scifi, and philosophy genres, as studied in BIBREF8 and BIBREF7. To generate in a specific style, we simply set the appropriate embedding. We use genre embeddings of size 850 which is equivalent to the total size of the $K$ feature embeddings in the StyleEQ model." 
], [ "We train both models with minibatch stochastic gradient descent with a learning rate of 0.25, weight decay penalty of 0.0001, and batch size of 64. We also apply dropout with a drop rate of 0.25 to all embedding layers, the GRUs, and preceptron hidden layer. We train for a maximum of 200 epochs, using validation set BLEU score BIBREF26 to select the final model iteration for evaluation." ], [ "In the Baseline model, style transfer is straightforward: given an input sentence in one style, fix the encoder content features while selecting a different genre embedding. In contrast, the StyleEQ model requires selecting the counts for each control. Although there are a variety of ways to do this, we use a method that encourages a diversity of outputs.", "In order to ensure the controls match the reference sentence in magnitude, we first find all sentences in the target style with the same number of words as the reference sentence. Then, we add the following constraints: the same number of proper nouns, the same number of nouns, the same number of verbs, and the same number of adjectives. We randomly sample $n$ of the remaining sentences, and for each of these `sibling' sentences, we compute the controls. For each of the new controls, we generate a sentence using the original input sentence content features. The generated sentences are then reranked using the length normalized log-likelihood under the model. We can then select the highest scoring sentence as our style-transferred output, or take the top-$k$ when we need a diverse set of outputs.", "The reason for this process is that although there are group-level distinctive controls for each style, e.g. the high use of punctuation in philosophy books or of first person pronouns in gothic novels, at the sentence level it can understandably be quite varied. This method matches sentences between styles, capturing the natural distribution of the corpora." ], [ "In tab:blueperpl we report BLEU scores for the reconstruction of test set sentences from their content and feature representations, as well as the model perplexities of the reconstruction. For both models, we use beam decoding with a beam size of eight. Beam candidates are ranked according to their length normalized log-likelihood. On these automatic measures we see that StyleEQ is better able to reconstruct the original sentences. In some sense this evaluation is mostly a sanity check, as the feature controls contain more locally specific information than the genre embeddings, which say very little about how many specific function words one should expect to see in the output." ], [ "Designing controllable language models is often difficult because of the various dependencies between tokens; when changing one control value it may effect other aspects of the surface realization. For example, increasing the number of conjunctions may effect how the generator places prepositions to compensate for structural changes in the sentence. Since our features are deterministically recoverable, we can perturb an individual control value and check to see that the desired change was realized in the output. Moreover, we can check the amount of change in the other non-perturbed features to measure the independence of the controls.", "We sample 50 sentences from each genre from the test set. For each sample, we create a perturbed control setting for each control by adding $\\delta $ to the original control value. 
This is done for $\\delta \\in \\lbrace -3, -2, -1, 0, 1, 2, 3\\rbrace $, skipping any settings where the new control value would be negative.", "table:autoeval:ctrl shows the results of this experiment. The Exact column displays the percentage of generated texts that realize the exact number of control features specified by the perturbed control. High percentages in the Exact column indicate greater one-to-one correspondence between the control and surface realization. For example, if the input was “Dracula and Frankenstein and the mummy,” and we change the conjunction feature by $\\delta =-1$, an output of “Dracula, Frankenstein and the mummy,” would count towards the Exact category, while “Dracula, Frankenstein, the mummy,” would not.", "The Direction column specifies the percentage of cases where the generated text produces a changed number of the control features that, while not exactly matching the specified value of the perturbed control, does change from the original in the correct direction. For example, if the input again was “Dracula and Frankenstein and the mummy,” and we change the conjunction feature by $\\delta =-1$, both outputs of “Dracula, Frankenstein and the mummy,” and “Dracula, Frankenstein, the mummy,” would count towards Direction. High percentages in Direction mean that we could roughly ensure desired surface realizations by modifying the control by a larger $\\delta $.", "Finally, the Atomic column specifies the percentage of cases where the generated text with the perturbed control only realizes changes to that specific control, while other features remain constant. For example, if the input was “Dracula and Frankenstein in the castle,” and we set the conjunction feature to $\\delta =-1$, an output of “Dracula near Frankenstein in the castle,” would not count as Atomic because, while the number of conjunctions did decrease by one, the number of simple preposition changed. An output of “Dracula, Frankenstein in the castle,” would count as Atomic. High percentages in the Atomic column indicate this feature is only loosely coupled to the other features and can be changed without modifying other aspects of the sentence.", "Controls such as conjunction, determiner, and punctuation are highly controllable, with Exact rates above 80%. But with the exception of the constituency parse features, all controls have high Direction rates, many in the 90s. These results indicate our model successfully controls these features. The fact that the Atomic rates are relatively low is to be expected, as controls are highly coupled – e.g. to increase 1stPer, it is likely another pronoun control will have to decrease." ], [ "For each model we look at the classifier prediction accuracy of reconstructed and transferred sentences. In particular we use the Ablated NVA classifier, as this is the most content-blind one.", "We produce 16 outputs from both the Baseline and StyleEq models. For the Baseline, we use a beam search of size 16. For the StyleEQ model, we use the method described in Section SECREF25 to select 16 `sibling' sentences in the target style, and generated a transferred sentence for each. 
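For clarity, one way to score a single perturbed generation against the Exact, Direction, and Atomic definitions above is sketched below; the operationalization of Atomic (the perturbed control changed while every other control stayed fixed) is our reading and may differ slightly from the paper's.

```python
def control_change_stats(orig, target, realized, control):
    """Score one perturbed generation. orig, target, and realized map each
    control name to its count in the reference controls, the perturbed
    controls, and the generated sentence, respectively."""
    delta = target[control] - orig[control]
    moved = realized[control] - orig[control]
    exact = realized[control] == target[control]
    # Direction: the realized count moved from the original toward the target.
    direction = exact if delta == 0 else (moved * delta > 0)
    # Atomic (our reading): the perturbed control changed, all others did not.
    atomic = moved != 0 and all(realized[c] == orig[c]
                                for c in orig if c != control)
    return exact, direction, atomic
```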
We look at three different methods for selection: all, which uses all output sentences; top, which selects the top ranked sentence based on the score from the model; and oracle, which selects the sentence with the highest classifier likelihood for the intended style.", "The reason for the third method, which indeed acts as an oracle, is that using the score from the model didn't always surface a transferred sentence that best reflected the desired style. Partially this was because the model score was mostly a function of how well a transferred sentence reflected the distribution of the training data. But additionally, some control settings are more indicative of a target style than others. The use of the classifier allows us to identify the most suitable control setting for a target style that was roughly compatible with the number of content words.", "In table:fasttext-results we see the results. Note that for both models, the all and top classification accuracy tends to be quite similar, though for the Baseline they are often almost exactly the same when the Baseline has little to no diversity in the outputs.", "However, the oracle introduces a huge jump in accuracy for the StyleEQ model, especially compared to the Baseline, partially because the diversity of outputs from StyleEQ is much higher; often the Baseline model produces no diversity – the 16 output sentences may be nearly identical, save a single word or two. It's important to note that neither model uses the classifier in any way except to select the sentence from 16 candidate outputs.", "What this implies is that lurking within the StyleEQ model outputs are great sentences, even if they are hard to find. In many cases, the StyleEQ model has a classification accuracy above the base rate from the test data, which is 75% (see table:classifiers)." ], [ "table:cherrypicking shows example outputs for the StyleEQ and Baseline models. Through inspection we see that the StyleEQ model successfully changes syntactic constructions in stylistically distinctive ways, such as increasing syntactic complexity when transferring to philosophy, or changing relevant pronouns when transferring to sci-fi. In contrast, the Baseline model doesn't create outputs that move far from the reference sentence, making only minor modifications such changing the type of a single pronoun.", "To determine how readers would classify our transferred sentences, we recruited three English Literature PhD candidates, all of whom had passed qualifying exams that included determining both genre and era of various literary texts." ], [ "To evaluate the fluency of our outputs, we had the annotators score reference sentences, reconstructed sentences, and transferred sentences on a 0-5 scale, where 0 was incoherent and 5 was a well-written human sentence.", "table:fluency shows the average fluency of various conditions from all three annotators. Both models have fluency scores around 3. Upon inspection of the outputs, it is clear that many have fluency errors, resulting in ungrammatical sentences.", "Notably the Baseline often has slightly higher fluency scores than the StyleEQ model. This is likely because the Baseline model is far less constrained in how to construct the output sentence, and upon inspection often reconstructs the reference sentence even when performing style transfer. In contrast, the StyleEQ is encouraged to follow the controls, but can struggle to incorporate these controls into a fluent sentence.", "The fluency of all outputs is lower than desired. 
We expect that incorporating pre-trained language models would increase the fluency of all outputs without requiring larger datasets." ], [ "Each annotator annotated 90 reference sentences (i.e. from the training corpus) with which style they thought the sentence was from. The accuracy on this baseline task for annotators A1, A2, and A3 was 80%, 88%, and 80% respectively, giving us an upper expected bound on the human evaluation.", "In discussing this task with the annotators, they noted that content is a heavy predictor of genre, and that would certainly confound their annotations. To attempt to mitigate this, we gave them two annotation tasks: which-of-3 where they simply marked which style they thought a sentence was from, and which-of-2 where they were given the original style and marked which style they thought the sentence was transferred into.", "For each task, each annotator marked 180 sentences: 90 from each model, with an even split across the three genres. Annotators were presented the sentences in a random order, without information about the models. In total, each marked 270 sentences. (Note there were no reconstructions in this annotation task.)", "table:humanclassifiers shows the results. In both tasks, accuracy of annotators classifying the sentence as its intended style was low. In which-of-3, scores were around 20%, below the chance rate of 33%. In which-of-2, scores were in the 50s, slightly above the chance rate of 50%. This was the case for both models. There was a slight increase in accuracy for the StyleEQ model over the Baseline for which-of-3, but the opposite trend for which-of-2, suggesting these differences are not significant.", "It's clear that it's hard to fool the annotators. Introspecting on their approach, the annotators expressed having immediate responses based on key words – for instance any references of `space' implied `sci-fi'. We call this the `vampires in space' problem, because no matter how well a gothic sentence is rewritten as a sci-fi one, it's impossible to ignore the fact that there is a vampire in space. The transferred sentences, in the eyes of the Ablated NVA classifier (with no access to content words), did quite well transferring into their intended style. But people are not blind to content." ], [ "Working with the annotators, we regularly came up against the 'vampires in space' problem: while syntactic constructions account for much of the distinction of literary styles, these constructions often co-occur with distinctive content.", "Stylometrics finds syntactic constructions are great at fingerprinting, but suggests that these constructions are surface realizations of higher-level stylistic decisions. The number and type of personal pronouns is a reflection of how characters feature in a text. A large number of positional prepositions may be the result of a writer focusing on physical descriptions of scenes. In our attempt to decouple these, we create Frankenstein sentences, which piece together features of different styles – we are putting vampires in space.", "Another way to validate our approach would be to select data that is stylistically distinctive but with similar content: perhaps genres in which content is static but language use changes over time, stylistically distinct authors within a single genre, or parodies of a distinctive genre." ], [ "We present a formal, extendable model of style that can add control to any neural text generation system. 
We model style as a suite of low-level linguistic controls, and train a neural encoder-decoder model to reconstruct reference sentences given only content words and the setting of the controls. In automatic evaluations, we show that our model can fool a style classifier 84% of the time and outperforms a baseline genre-embedding model. In human evaluations, we encounter the 'vampires in space' problem in which content and style are equally discriminative but people focus more on the content.", "In future work we would like to model higher-level syntactic controls. BIBREF20 show that differences in clausal constructions, for instance having a dependent clause before an independent clause or vice versa, are a marker of style appreciated by the reader. Such features would likely interact with our lower-level controls in an interesting way, and provide further insight into style transfer in text." ], [ "Katy Gero is supported by an NSF GRF (DGE - 1644869). We would also like to thank Elsbeth Turcan for her helpful comments." ] ] }
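To make the all/top/oracle selection schemes discussed in the evaluation above concrete, a minimal sketch follows (illustrative only, not the authors' code). It assumes a list of candidate outputs for one reference sentence, their length-normalized model scores, and a `classify` callable returning the probability of the intended style, as an external classifier such as the trigram fastText model would provide.

```python
from typing import Callable, List, Sequence, Tuple

def select_outputs(
    candidates: Sequence[str],          # e.g. 16 generated sentences for one reference
    model_scores: Sequence[float],      # length-normalized log-likelihoods from the decoder
    classify: Callable[[str], float],   # P(intended style | sentence) from a style classifier
) -> Tuple[List[str], str, str]:
    """Return the three selections discussed in the evaluation: all, top, oracle."""
    all_outputs = list(candidates)                                   # "all": keep every candidate
    top = max(range(len(candidates)), key=lambda i: model_scores[i])
    oracle = max(range(len(candidates)), key=lambda i: classify(candidates[i]))
    return all_outputs, candidates[top], candidates[oracle]
```

The oracle differs from top only in the ranking signal, which is why its accuracy advantage grows with the diversity of the candidate set.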
{ "question": [ "Is this style generator compared to some baseline?", "How they perform manual evaluation, what is criteria?", "What metrics are used for automatic evaluation?", "How they know what are content words?", "How they model style as a suite of low-level linguistic controls, such as frequency of pronouns, prepositions, and subordinate clause constructions?" ], "question_id": [ "9213159f874b3bdd9b4de956a88c703aac988411", "5f4e6ce4a811c4b3ab07335d89db2fd2a8d8d8b2", "a234bcbf2e41429422adda37d9e926b49ef66150", "c383fa9170ae00a4a24a8e39358c38395c5f034b", "83251fd4a641cea8b180b49027e74920bca2699a" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We compare the above model to a similar model, where rather than explicitly represent $K$ features as input, we have $K$ features in the form of a genre embedding, i.e. we learn a genre specific embedding for each of the gothic, scifi, and philosophy genres, as studied in BIBREF8 and BIBREF7. To generate in a specific style, we simply set the appropriate embedding. We use genre embeddings of size 850 which is equivalent to the total size of the $K$ feature embeddings in the StyleEQ model." ], "highlighted_evidence": [ "We compare the above model to a similar model, where rather than explicitly represent $K$ features as input, we have $K$ features in the form of a genre embedding, i.e. we learn a genre specific embedding for each of the gothic, scifi, and philosophy genres, as studied in BIBREF8 and BIBREF7." ] } ], "annotation_id": [ "f41f15bf8e41494dab10017afde3453191e74deb" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "accuracy" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Each annotator annotated 90 reference sentences (i.e. from the training corpus) with which style they thought the sentence was from. The accuracy on this baseline task for annotators A1, A2, and A3 was 80%, 88%, and 80% respectively, giving us an upper expected bound on the human evaluation." ], "highlighted_evidence": [ "Each annotator annotated 90 reference sentences (i.e. from the training corpus) with which style they thought the sentence was from. The accuracy on this baseline task for annotators A1, A2, and A3 was 80%, 88%, and 80% respectively, giving us an upper expected bound on the human evaluation." ] } ], "annotation_id": [ "946e5804d75a81b97f01e3be218d5e54cf0740a5" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "classification accuracy", "BLEU scores", "model perplexities of the reconstruction" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In tab:blueperpl we report BLEU scores for the reconstruction of test set sentences from their content and feature representations, as well as the model perplexities of the reconstruction. For both models, we use beam decoding with a beam size of eight. 
Beam candidates are ranked according to their length normalized log-likelihood. On these automatic measures we see that StyleEQ is better able to reconstruct the original sentences. In some sense this evaluation is mostly a sanity check, as the feature controls contain more locally specific information than the genre embeddings, which say very little about how many specific function words one should expect to see in the output.", "In table:fasttext-results we see the results. Note that for both models, the all and top classification accuracy tends to be quite similar, though for the Baseline they are often almost exactly the same when the Baseline has little to no diversity in the outputs." ], "highlighted_evidence": [ "In tab:blueperpl we report BLEU scores for the reconstruction of test set sentences from their content and feature representations, as well as the model perplexities of the reconstruction. For both models, we use beam decoding with a beam size of eight.", "Note that for both models, the all and top classification accuracy tends to be quite similar, though for the Baseline they are often almost exactly the same when the Baseline has little to no diversity in the outputs." ] } ], "annotation_id": [ "fd284690b47477ec98685906945ce0de5feecd1f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " words found in the control word lists are then removed", "The remaining words, which represent the content" ], "yes_no": null, "free_form_answer": "", "evidence": [ "fig:sentenceinput illustrates the process. Controls are calculated heuristically. All words found in the control word lists are then removed from the reference sentence. The remaining words, which represent the content, are used as input into the model, along with their POS tags and lemmas.", "In this way we encourage models to construct a sentence using content and style independently. This will allow us to vary the stylistic controls while keeping the content constant, and successfully perform style transfer. When generating a new sentence, the controls correspond to the counts of the corresponding syntactic features that we expect to be realized in the output." ], "highlighted_evidence": [ "Controls are calculated heuristically. All words found in the control word lists are then removed from the reference sentence. The remaining words, which represent the content, are used as input into the model, along with their POS tags and lemmas.\n\nIn this way we encourage models to construct a sentence using content and style independently." ] } ], "annotation_id": [ "03ab50b16e33b5ef6d1c45d0a93fd94f2e41af1c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Given that non-content words are distinctive enough for a classifier to determine style, we propose a suite of low-level linguistic feature counts (henceforth, controls) as our formal, content-blind definition of style. 
The style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style BIBREF20. Controls are extracted heuristically, and almost all rely on counts of pre-defined word lists. For constituency parses we use the Stanford Parser BIBREF21. table:controlexamples lists all the controls along with examples." ], "highlighted_evidence": [ "Given that non-content words are distinctive enough for a classifier to determine style, we propose a suite of low-level linguistic feature counts (henceforth, controls) as our formal, content-blind definition of style. The style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style BIBREF20. Controls are extracted heuristically, and almost all rely on counts of pre-defined word lists. For constituency parses we use the Stanford Parser BIBREF21. table:controlexamples lists all the controls along with examples." ] } ], "annotation_id": [ "0faddfe64a3e09ea568ae7e74423f5b69962890b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
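As a rough illustration of the control extraction quoted above (closed-class word counts plus an SBAR count from a constituency parse), the following sketch uses toy word lists; the actual control lexicons and parser setup are assumptions here, not the paper's.

```python
import re
from collections import Counter
from typing import Dict, Mapping, Sequence

# Toy stand-ins for the pre-defined control word lists.
CONTROL_LEXICONS: Mapping[str, Sequence[str]] = {
    "personal_pronouns": ("i", "you", "he", "she", "we", "they", "me", "him", "her", "us", "them"),
    "conjunctions": ("and", "but", "or", "nor", "so", "yet"),
    "prepositions": ("in", "on", "at", "by", "with", "from", "of", "to"),
}

def control_vector(sentence: str, parse: str = "") -> Dict[str, int]:
    """Heuristic style controls: closed-class counts plus the number of SBAR
    non-terminals read off a bracketed constituency parse, if one is given."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    counts = Counter(tokens)
    controls = {name: sum(counts[w] for w in words) for name, words in CONTROL_LEXICONS.items()}
    controls["sbar"] = parse.count("(SBAR")  # clause-structure signal from the parse
    return controls

# Example: control_vector("When the moon rose, we fled to the castle.",
#                         parse="(ROOT (S (SBAR (WHADVP ...) ...) ...))")
```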
{ "caption": [ "Table 1: The size of the data across the three different styles investigated.", "Table 2: Accuracy of five classifiers trained using trigrams with fasttext, for all test data and split by genre. Despite heavy ablation, the Ablated NVA classifier has an accuracy of 75%, suggesting synactic and functional features alone can be fully predictive of style.", "Table 3: All controls, their source, and examples. Punctuation doesn’t include end punctuation.", "Figure 1: How a reference sentence from the dataset is prepared for input to the model. Controls are calculated heuristically, and then removed from the sentence. The remaining words, as well as their lemmatized versions and part-of-speech tags, are used as input separately.", "Figure 2: A schematic depiction of our style control model.", "Table 4: Test set reconstruction BLEU score and perplexity (in nats).", "Table 5: Percentage rates of Exact, Direction, and Atomic feature control changes. See subsection 4.2 for explanation.", "Table 6: Ablated NVA classifier accuracy using three different methods of selecting an output sentence. This is additionally split into the nine transfer possibilities, given the three source styles. The StyleEQ model produces far more diverse outputs, allowing the oracle method to have a very high accuracy compared to the Baseline model.", "Table 7: Example outputs (manually selected) from both models. The StyleEQ model successfully rewrites the sentence with very different syntactic constructions that reflect style, while the Baseline model rarely moves far from the reference.", "Table 8: Fluency scores (0-5, where 0 is incoherent) of sentences from three annotators. The Baseline model tends to produce slightly more fluent sentences than the StyleEQ model, likely because it is less constrained.", "Table 9: Accuracy of three annotators in selecting the correct style for transferred sentences. In this evaluation there is little difference between the models." ], "file": [ "3-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "4-Figure1-1.png", "5-Figure2-1.png", "6-Table4-1.png", "6-Table5-1.png", "8-Table6-1.png", "8-Table7-1.png", "8-Table8-1.png", "9-Table9-1.png" ] }
1902.06843
Fusing Visual, Textual and Connectivity Clues for Studying Mental Health
With the ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike most existing efforts, which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.
{ "section_name": [ null, "Introduction", "Related Work", "Dataset", "Data Modality Analysis", "Demographic Prediction", "Multi-modal Prediction Framework" ], "paragraphs": [ [ "0pt*0*0", "0pt*0*0", "0pt*0*0 0.95", "1]Amir Hossein Yazdavar 1]Mohammad Saeid Mahdavinejad 2]Goonmeet Bajaj", " 3]William Romine 1]Amirhassan Monadjemi 1]Krishnaprasad Thirunarayan", " 1]Amit Sheth 4]Jyotishman Pathak [1]Department of Computer Science & Engineering, Wright State University, OH, USA [2]Ohio State University, Columbus, OH, USA [3]Department of Biological Science, Wright State University, OH, USA [4] Division of Health Informatics, Weill Cornell University, New York, NY, USA", "[1] yazdavar.2@wright.edu", "With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions." ], [ "Depression is a highly prevalent public health challenge and a major cause of disability worldwide. Depression affects 6.7% (i.e., about 16 million) Americans each year . According to the World Mental Health Survey conducted in 17 countries, on average, about 5% of people reported having an episode of depression in 2011 BIBREF0 . Untreated or under-treated clinical depression can lead to suicide and other chronic risky behaviors such as drug or alcohol addiction.", "Global efforts to curb clinical depression involve identifying depression through survey-based methods employing phone or online questionnaires. These approaches suffer from under-representation as well as sampling bias (with very small group of respondents.) In contrast, the widespread adoption of social media where people voluntarily and publicly express their thoughts, moods, emotions, and feelings, and even share their daily struggles with mental health problems has not been adequately tapped into studying mental illnesses, such as depression. The visual and textual content shared on different social media platforms like Twitter offer new opportunities for a deeper understanding of self-expressed depression both at an individual as well as community-level. Previous research efforts have suggested that language style, sentiment, users' activities, and engagement expressed in social media posts can predict the likelihood of depression BIBREF1 , BIBREF2 . However, except for a few attempts BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , these investigations have seldom studied extraction of emotional state from visual content of images in posted/profile images. 
Visual content can express users' emotions more vividly, and psychologists noted that imagery is an effective medium for communicating difficult emotions.", "According to eMarketer, photos accounted for 75% of content posted on Facebook worldwide and they are the most engaging type of content on Facebook (87%). Indeed, \"a picture is worth a thousand words\" and now \"photos are worth a million likes.\" Similarly, on Twitter, the tweets with image links get twice as much attention as those without , and video-linked tweets drive up engagement . The ease and naturalness of expression through visual imagery can serve to glean depression-indicators in vulnerable individuals who often seek social support through social media BIBREF7 . Further, as psychologist Carl Rogers highlights, we often pursue and promote our Ideal-Self . In this regard, the choice of profile image can be a proxy for the online persona BIBREF8 , providing a window into an individual's mental health status. For instance, choosing emaciated legs of girls covered with several cuts as profile image portrays negative self-view BIBREF9 .", "Inferring demographic information like gender and age can be crucial for stratifying our understanding of population-level epidemiology of mental health disorders. Relying on electronic health records data, previous studies explored gender differences in depressive behavior from different angles including prevalence, age at onset, comorbidities, as well as biological and psychosocial factors. For instance, women have been diagnosed with depression twice as often as men BIBREF10 and national psychiatric morbidity survey in Britain has shown higher risk of depression in women BIBREF11 . On the other hand, suicide rates for men are three to five times higher compared to that of the women BIBREF12 .", "Although depression can affect anyone at any age, signs and triggers of depression vary for different age groups . Depression triggers for children include parental depression, domestic violence, and loss of a pet, friend or family member. For teenagers (ages 12-18), depression may arise from hormonal imbalance, sexuality concerns and rejection by peers. Young adults (ages 19-29) may develop depression due to life transitions, poverty, trauma, and work issues. Adult (ages 30-60) depression triggers include caring simultaneously for children and aging parents, financial burden, work and relationship issues. Senior adults develop depression from common late-life issues, social isolation, major life loses such as the death of a spouse, financial stress and other chronic health problems (e.g., cardiac disease, dementia). Therefore, inferring demographic information while studying depressive behavior from passively sensed social data, can shed better light on the population-level epidemiology of depression.", "The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . 
We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.", "We address and derive answers to the following research questions: 1) How well do the content of posted images (colors, aesthetic and facial presentation) reflect depressive behavior? 2) Does the choice of profile picture show any psychological traits of depressed online persona? Are they reliable enough to represent the demographic information such as age and gender? 3) Are there any underlying common themes among depressed individuals generated using multimodal content that can be used to detect depression reliably?" ], [ "Mental Health Analysis using Social Media:", "Several efforts have attempted to automatically detect depression from social media content utilizing machine/deep learning and natural language processing approaches. Conducting a retrospective study over tweets, BIBREF14 characterizes depression based on factors such as language, emotion, style, ego-network, and user engagement. They built a classifier to predict the likelihood of depression in a post BIBREF14 , BIBREF15 or in an individual BIBREF1 , BIBREF16 , BIBREF17 , BIBREF18 . Moreover, there have been significant advances due to the shared task BIBREF19 focusing on methods for identifying depressed users on Twitter at the Computational Linguistics and Clinical Psychology Workshop (CLP 2015). A corpus of nearly 1,800 Twitter users was built for evaluation, and the best models employed topic modeling BIBREF20 , Linguistic Inquiry and Word Count (LIWC) features, and other metadata BIBREF21 . More recently, a neural network architecture introduced by BIBREF22 combined posts into a representation of user's activities for detecting depressed users. Another active line of research has focused on capturing suicide and self-harm signals BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF2 , BIBREF27 . Moreover, the CLP 2016 BIBREF28 defined a shared task on detecting the severity of the mental health from forum posts. All of these studies derive discriminative features to classify depression in user-generated content at message-level, individual-level or community-level. Recent emergence of photo-sharing platforms such as Instagram, has attracted researchers attention to study people's behavior from their visual narratives – ranging from mining their emotions BIBREF29 , and happiness trend BIBREF30 , to studying medical concerns BIBREF31 . Researchers show that people use Instagram to engage in social exchange and storytelling about their difficult experiences BIBREF4 . The role of visual imagery as a mechanism of self-disclosure by relating visual attributes to mental health disclosures on Instagram was highlighted by BIBREF3 , BIBREF5 where individual Instagram profiles were utilized to build a prediction framework for identifying markers of depression. The importance of data modality to understand user behavior on social media was highlighted by BIBREF32 . 
More recently, a deep neural network sequence modeling approach that marries audio and text data modalities to analyze question-answer style interviews between an individual and an agent has been developed to study mental health BIBREF32 . Similarly, a multimodal depressive dictionary learning was proposed to detect depressed users on Twitter BIBREF33 . They provide a sparse user representations by defining a feature set consisting of social network features, user profile features, visual features, emotional features BIBREF34 , topic-level features, and domain-specific features. Particularly, our choice of multi-model prediction framework is intended to improve upon the prior works involving use of images in multimodal depression analysis BIBREF33 and prior works on studying Instagram photos BIBREF6 , BIBREF35 .", "Demographic information inference on Social Media: ", "There is a growing interest in understanding online user's demographic information due to its numerous applications in healthcare BIBREF36 , BIBREF37 . A supervised model developed by BIBREF38 for determining users' gender by employing features such as screen-name, full-name, profile description and content on external resources (e.g., personal blog). Employing features including emoticons, acronyms, slangs, punctuations, capitalization, sentence length and included links/images, along with online behaviors such as number of friends, post time, and commenting activity, a supervised model was built for predicting user's age group BIBREF39 . Utilizing users life stage information such as secondary school student, college student, and employee, BIBREF40 builds age inference model for Dutch Twitter users. Similarly, relying on profile descriptions while devising a set of rules and patterns, a novel model introduced for extracting age for Twitter users BIBREF41 . They also parse description for occupation by consulting the SOC2010 list of occupations and validating it through social surveys. A novel age inference model was developed while relying on homophily interaction information and content for predicting age of Twitter users BIBREF42 . The limitations of textual content for predicting age and gender was highlighted by BIBREF43 . They distinguish language use based on social gender, age identity, biological sex and chronological age by collecting crowdsourced signals using a game in which players (crowd) guess the biological sex and age of a user based only on their tweets. Their findings indicate how linguistic markers can misguide (e.g., a heart represented as <3 can be misinterpreted as feminine when the writer is male.) Estimating age and gender from facial images by training a convolutional neural networks (CNN) for face recognition is an active line of research BIBREF44 , BIBREF13 , BIBREF45 ." ], [ "Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., \"16 years old suicidal girl\"(see Figure FIGREF15 ). 
We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url.", "Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51 ", "Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter." 
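A minimal sketch (not the paper's code) of the three age-extraction rules described above; the exact prefix/suffix lists and the plausibility bounds used by the authors are assumptions here.

```python
import re
from typing import Optional

AGE_PATTERNS = [
    re.compile(r"\bi\s+am\s+(\d{1,2})\s+years?\s+old\b", re.I),   # rule 1: "I am X years old"
    re.compile(r"\bborn\s+in\s+((?:19|20)\d{2})\b", re.I),        # rule 2: "Born in X"
    re.compile(r"\b(\d{1,2})\s+years?\s+old\b", re.I),            # rule 3: "X years old"
]

def extract_age(profile_description: str, current_year: int = 2019) -> Optional[int]:
    for pattern in AGE_PATTERNS:
        match = pattern.search(profile_description)
        if match:
            value = int(match.group(1))
            age = current_year - value if value > 1900 else value  # birth year vs. stated age
            return age if 11 <= age <= 90 else None                # drop implausible values
    return None

# extract_age("17 years old, self-harm, anxiety, depression") -> 17
# The gender-depression association check above corresponds to
# scipy.stats.chi2_contingency on the 2x2 table of (gender x class) counts.
```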
], [ "We now provide an in-depth analysis of visual and textual content of vulnerable users.", "Visual Content Analysis: We show that the visual content in images from posts as well as profiles provide valuable psychological cues for understanding a user's depression status. Profile/posted images can surface self-stigmatization BIBREF53 . Additionally, as opposed to typical computer vision framework for object recognition that often relies on thousands of predetermined low-level features, what matters more for assessing user's online behavior is the emotions reflected in facial expressions BIBREF54 , attributes contributing to the computational aesthetics BIBREF55 , and sentimental quotes they may subscribe to (Figure FIGREF15 ) BIBREF8 .", "Facial Presence: ", "For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.", "Facial Expression:", "Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.", "Figure FIGREF27 illustrates the inter-correlation of these features. Additionally, we observe that emotions gleaned from facial expressions correlated with emotional signals captured from textual content utilizing LIWC. This indicates visual imagery can be harnessed as a complementary channel for measuring online emotional signals.", "General Image Features:", "The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . 
In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).", "** alpha= 0.05, *** alpha = 0.05/223", "Demographics Inference & Language Cues: LIWC has been used extensively for examining the latent dimensions of self-expression for analyzing personality BIBREF61 , depressive behavior, demographic differences BIBREF43 , BIBREF40 , etc. Several studies highlight that females employ more first-person singular pronouns BIBREF62 , and deictic language BIBREF63 , while males tend to use more articles BIBREF64 which characterizes concrete thinking, and formal, informational and affirmation words BIBREF65 . For age analysis, the salient findings include older individuals using more future tense verbs BIBREF62 triggering a shift in focus while aging. They also show positive emotions BIBREF66 and employ fewer self-references (i.e. 'I', 'me') with greater first person plural BIBREF62 . Depressed users employ first person pronouns more frequently BIBREF67 , repeatedly use negative emotions and anger words. We analyzed psycholinguistic cues and language style to study the association between depressive behavior as well as demographics. Particularly, we adopt Levinson's adult development grouping that partitions users in INLINEFORM0 into 5 age groups: (14,19],(19,23], (23,34],(34,46], and (46,60]. Then, we apply LIWC for characterizing linguistic styles for each age group for users in INLINEFORM1 .", "Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)", "Thinking Style:", "Measuring people's natural ways of trying to analyze, and organize complex events have strong association with analytical thinking. LIWC relates higher analytic thinking to more formal and logical reasoning whereas a lower value indicates focus on narratives. Also, cognitive processing measures problem solving in mind. Words such as \"think,\" \"realize,\" and \"know\" indicates the degree of \"certainty\" in communications. Critical thinking ability relates to education BIBREF68 , and is impacted by different stages of cognitive development at different ages . It has been shown that older people communicate with greater cognitive complexity while comprehending nuances and subtle differences BIBREF62 . 
We observe a similar pattern in our data (Table TABREF40 .) A recent study highlights how depression affects the brain and thinking at the molecular level using a rat model BIBREF69 . Depression can promote cognitive dysfunction including difficulty in concentrating and making decisions. We observed notable differences in the ability to think analytically in depressed and control users in different age groups (see Figure FIGREF39 - A, F and Table TABREF40 ). Overall, vulnerable younger users are not logical thinkers based on their relative analytical score and cognitive processing ability.", "Authenticity:", "Authenticity measures the degree of honesty. Authenticity is often assessed by measuring present tense verbs, 1st person singular pronouns (I, me, my), and by examining the linguistic manifestations of false stories BIBREF70 . Liars use fewer self-references and fewer complex words. Psychologists often see a child's first successful lie as a sign of mental growth. There is a decreasing trend in Authenticity with aging (see Figure FIGREF39 -B.) Authenticity for depressed youngsters is strikingly higher than that of their control peers. It decreases with age (Figure FIGREF39 -B.)", "Clout:", "People with high clout speak more confidently and with certainty, employing more social words with fewer negations (e.g., no, not) and swear words. In general, midlife is relatively stable w.r.t. relationships and work. A recent study shows age 60 to be best for self-esteem BIBREF71 as people take on managerial roles at work and maintain a satisfying relationship with their spouse. We see the same pattern in our data (see Figure FIGREF39 -C and Table TABREF40 ). Unsurprisingly, lack of confidence (the 6th PHQ-9 symptom) is a distinguishable characteristic of vulnerable users, leading to their lower clout scores, especially among depressed users before middle age (34 years old).", "Self-references:", "First person singular words are often seen as indicating interpersonal involvement and their high usage is associated with negative affective states implying nervousness and depression BIBREF66 . Consistent with prior studies, the frequency of first person singular for depressed people is significantly higher compared to that of the control class. Similarly to BIBREF66 , youngsters tend to use more first-person (e.g. I) and second person singular (e.g. you) pronouns (Figure FIGREF39 -G).", "Informal Language Markers; Swear, Netspeak:", "Several studies have highlighted that the use of profanity by young adults has significantly increased over the last decade BIBREF72 . We observed the same pattern in both the depressed and the control classes (Table TABREF40 ), although its rate is higher for depressed users BIBREF1 . Psychologists have also shown that swearing can indicate that an individual is not a fragmented member of a society. Depressed youngsters, showing a higher rate of interpersonal involvement and relationships, have a higher rate of cursing (Figure FIGREF39 -E). Also, the Netspeak lexicon measures the frequency of terms such as lol and thx.", "Sexual, Body:", "The Sexual lexicon contains terms like "horny", "love" and "incest", and body terms like "ache", "heart", and "cough". Both start with a higher rate for depressed users and decrease gradually while growing up, possibly due to changes in sexual desire as we age (Figure FIGREF39 -H,I and Table TABREF40 .)", "Quantitative Language Analysis:", "We employ one-way ANOVA to compare the impact of various factors and validate our findings above. 
Table TABREF40 illustrates our findings, with a degree of freedom (df) of 1055. The null hypothesis is that the sample means' for each age group are similar for each of the LIWC features.", "*** alpha = 0.001, ** alpha = 0.01, * alpha = 0.05" ], [ "We leverage both the visual and textual content for predicting age and gender.", "Prediction with Textual Content:", "We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of this lexica was evaluated on Twitter, blog, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2 ", "where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 -416 users), and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on INLINEFORM5 ground-truth dataset.", "Prediction with Visual Imagery:", "Inspired by BIBREF56 's approach for facial landmark localization, we use their pretrained CNN consisting of convolutional layers, including unshared and fully-connected layers, to predict gender and age from both the profile and shared images. We evaluate the performance for gender and age prediction task on INLINEFORM0 and INLINEFORM1 respectively as shown in Table TABREF42 and Table TABREF44 .", "Demographic Prediction Analysis:", "We delve deeper into the benefits and drawbacks of each data modality for demographic information prediction. This is crucial as the differences between language cues between age groups above age 35 tend to become smaller (see Figure FIGREF39 -A,B,C) and making the prediction harder for older people BIBREF74 . In this case, the other data modality (e.g., visual content) can play integral role as a complementary source for age inference. For gender prediction (see Table TABREF44 ), on average, the profile image-based predictor provides a more accurate prediction for both the depressed and control class (0.92 and 0.90) compared to content-based predictor (0.82). For age prediction (see Table TABREF42 ), textual content-based predictor (on average 0.60) outperforms both of the visual-based predictors (on average profile:0.51, Media:0.53).", "However, not every user provides facial identity on his account (see Table TABREF21 ). We studied facial presentation for each age-group to examine any association between age-group, facial presentation and depressive behavior (see Table TABREF43 ). We can see youngsters in both depressed and control class are not likely to present their face on profile image. Less than 3% of vulnerable users between 11-19 years reveal their facial identity. Although content-based gender predictor was not as accurate as image-based one, it is adequate for population-level analysis." 
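A sketch of the weighted-lexicon predictor described by the equation above (term weight times relative term frequency, summed), assuming the lexica of BIBREF73 have been loaded into a term-to-weight mapping; the toy weights and intercept handling here are illustrative only.

```python
import re
from collections import Counter
from typing import Mapping

def lexicon_predict(text: str, weights: Mapping[str, float], intercept: float = 0.0) -> float:
    """Sum of term weight * term frequency / total word count, plus an intercept."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    counts = Counter(tokens)
    return intercept + sum(w * counts[term] / total for term, w in weights.items())

# Toy example: positive weights push the age estimate up, negative weights pull it down.
toy_age_weights = {"homework": -2.0, "prom": -3.5, "mortgage": 6.0, "grandchildren": 9.0}
print(lexicon_predict("ugh so much homework before prom", toy_age_weights, intercept=23.0))
```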
], [ "We use the above findings for predicting depressive behavior. Our model exploits early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as vector concatenation of individual modality features. As opposed to computationally expensive late fusion scheme where each modality requires a separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (See Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 .", "Main each Feature INLINEFORM0 INLINEFORM1 ", "RndForrest( INLINEFORM0 ) Calculate Imp INLINEFORM1 INLINEFORM2 Generate next hypothesis , INLINEFORM3 Once all hypothesis generated Perform Statistical Test INLINEFORM4 //Binomial Distribution INLINEFORM5 Feature is important Feature is important", " Ensemble Feature Selection", "Next, we adopt an ensemble learning method that integrates the predictive power of multiple learners with two main advantages; its interpretability with respect to the contributions of each feature and its high predictive power. For prediction we have INLINEFORM0 where INLINEFORM1 is a weak learner and INLINEFORM2 denotes the final prediction.", "In particular, we optimize the loss function: INLINEFORM0 where INLINEFORM1 incorporates INLINEFORM2 and INLINEFORM3 regularization. In each iteration, the new INLINEFORM4 is obtained by fitting weak learner to the negative gradient of loss function. Particularly, by estimating the loss function with Taylor expansion : INLINEFORM5 where its first expression is constant, the second and the third expressions are first ( INLINEFORM6 ) and second order derivatives ( INLINEFORM7 ) of the loss. INLINEFORM8 ", "For exploring the weak learners, assume INLINEFORM0 has k leaf nodes, INLINEFORM1 be subset of users from INLINEFORM2 belongs to the node INLINEFORM3 , and INLINEFORM4 denotes the prediction for node INLINEFORM5 . Then, for each user INLINEFORM6 belonging to INLINEFORM7 , INLINEFORM8 and INLINEFORM9 INLINEFORM10 ", "Next, for each leaf node INLINEFORM0 , deriving w.r.t INLINEFORM1 : INLINEFORM2 ", "and by substituting weights: INLINEFORM0 ", "which represents the loss for fixed weak learners with INLINEFORM0 nodes. The trees are built sequentially such that each subsequent tree aims to reduce the errors of its predecessor tree. Although, the weak learners have high bias, the ensemble model produces a strong learner that effectively integrate the weak learners by reducing bias and variance (the ultimate goal of supervised models) BIBREF77 . Table TABREF48 illustrates our multimodal framework outperform the baselines for identifying depressed users in terms of average specificity, sensitivity, F-Measure, and accuracy in 10-fold cross-validation setting on INLINEFORM1 dataset. Figure FIGREF47 shows how the likelihood of being classified into the depressed class varies with each feature addition to the model for a sample user in the dataset. The prediction bar (the black bar) shows that the log-odds of prediction is 0.31, that is, the likelihood of this person being a depressed user is 57% (1 / (1 + exp(-0.3))). 
The figure also sheds light on the impact of each contributing feature. The waterfall charts represent how the probability of being depressed changes with the addition of each feature variable. For instance, the \"Analytic thinking\" of this user is considered high 48.43 (Median:36.95, Mean: 40.18) and this decreases the chance of this person being classified into the depressed group by the log-odds of -1.41. Depressed users have significantly lower \"Analytic thinking\" score compared to control class. Moreover, the 40.46 \"Clout\" score is a low value (Median: 62.22, Mean: 57.17) and it decreases the chance of being classified as depressed. With respect to the visual features, for instance, the mean and the median of 'shared_colorfulness' is 112.03 and 113 respectively. The value of 136.71 would be high; thus, it decreases the chance of being depressed for this specific user by log-odds of -0.54. Moreover, the 'profile_naturalness' of 0.46 is considered high compared to 0.36 as the mean for the depressed class which justifies pull down of the log-odds by INLINEFORM2 . For network features, for instance, 'two_hop_neighborhood' for depressed users (Mean : 84) are less than that of control users (Mean: 154), and is reflected in pulling down the log-odds by -0.27.", "Baselines:", "To test the efficacy of our multi-modal framework for detecting depressed users, we compare it against existing content, content-network, and image-based models (based on the aforementioned general image feature, facial presence, and facial expressions.)" ] ] }
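The shadow-feature selection step described above follows the Boruta idea; a simplified sketch is given below (not the authors' implementation — the full procedure adds a binomial test over repeated runs, as noted in the text).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_mask(X: np.ndarray, y: np.ndarray, n_rounds: int = 20,
                        random_state: int = 0) -> np.ndarray:
    """Keep a real feature if its Random-Forest importance beats the best
    shuffled ('shadow') copy in most rounds (simplified Boruta-style rule)."""
    rng = np.random.default_rng(random_state)
    n_features = X.shape[1]
    hits = np.zeros(n_features, dtype=int)
    for _ in range(n_rounds):
        shadows = rng.permuted(X, axis=0)          # shuffle each column independently
        forest = RandomForestClassifier(n_estimators=200, random_state=random_state)
        forest.fit(np.hstack([X, shadows]), y)
        imp = forest.feature_importances_
        hits += imp[:n_features] > imp[n_features:].max()
    return hits > n_rounds / 2

# Usage: keep = shadow_feature_mask(X_features, y_labels); X_selected = X_features[:, keep]
```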
{ "question": [ "Do they report results only on English data?", "What insights into the relationship between demographics and mental health are provided?", "What model is used to achieve 5% improvement on F1 for identifying depressed individuals on Twitter?", "How do this framework facilitate demographic inference from social media?", "What types of features are used from each data type?", "How is the data annotated?", "Where does the information on individual-level demographics come from?", "What is the source of the user interaction data? ", "What is the source of the textual data? ", "What is the source of the visual data? " ], "question_id": [ "5d70c32137e82943526911ebdf78694899b3c28a", "97dac7092cf8082a6238aaa35f4b185343b914af", "195611926760d1ceec00bd043dfdc8eba2df5ad1", "445e792ce7e699e960e2cb4fe217aeacdd88d392", "a3b1520e3da29d64af2b6e22ff15d330026d0b36", "2cf8825639164a842c3172af039ff079a8448592", "36b25021464a9574bf449e52ae50810c4ac7b642", "98515bd97e4fae6bfce2d164659cd75e87a9fc89", "53bf6238baa29a10f4ff91656c470609c16320e1", "b27f7993b1fe7804c5660d1a33655e424cea8d10" ], "nlp_background": [ "five", "five", "five", "five", "five", "five", "five", "five", "five", "five" ], "topic_background": [ "", "", "", "", "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "", "", "", "", "" ], "search_query": [ "", "", "", "", "", "", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "9069ef5e523b402dc27ab4c3defb1b547af8c8f2" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age", "more women than men were given a diagnosis of depression" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . 
Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51", "Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter." ], "highlighted_evidence": [ "The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.)", "Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression." ] } ], "annotation_id": [ "03c66dab424666d2bf7457daa5023bb03bbbc691" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Random Forest classifier" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the above findings for predicting depressive behavior. Our model exploits early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as vector concatenation of individual modality features. As opposed to computationally expensive late fusion scheme where each modality requires a separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (See Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 ." 
], "highlighted_evidence": [ "To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data." ] } ], "annotation_id": [ "6f84296097eea6526dcfb59e23889bc1f5d592da" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Demographic information is predicted using weighted lexicon of terms.", "evidence": [ "We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of this lexica was evaluated on Twitter, blog, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2", "where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 -416 users), and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on INLINEFORM5 ground-truth dataset." ], "highlighted_evidence": [ "We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender.", "Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2\n\nwhere INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 ." ] } ], "annotation_id": [ "ea594b61eb07e9789c7d05668b77afa1a5f339b6" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "facial presence", "Facial Expression", "General Image Features", " textual content", "analytical thinking", "clout", "authenticity", "emotional tone", "Sixltr", " informal language markers", "1st person singular pronouns" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. 
In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.", "Facial Expression:", "Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.", "General Image Features:", "The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).", "Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. 
It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)" ], "highlighted_evidence": [ "For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization.", "Facial Expression:\n\nFollowing BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images.", "General Image Features:\n\nThe importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . ", "Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. ", "It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)" ] } ], "annotation_id": [ "1f209244d8f3c63649ee96ec3d4a58e2314a81b2" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "The data are self-reported by Twitter users and then verified by two human experts.", "evidence": [ "Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., \"16 years old suicidal girl\"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url." ], "highlighted_evidence": [ "We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 ." 
] } ], "annotation_id": [ "e277b34d09834dc7c33e8096d7b560b7fe686f52" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "From Twitter profile descriptions of the users.", "evidence": [ "Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51", "Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter." ], "highlighted_evidence": [ "We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. 
X years old, where X is a \"date\" or age (e.g., 1994).", "We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description." ] } ], "annotation_id": [ "c4695e795080ba25f33c4becee24aea803ee068c" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Sociability from ego-network on Twitter", "evidence": [ "The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users." ], "highlighted_evidence": [ "We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users." ] } ], "annotation_id": [ "deedf2e223758db6f59cc8eeb41e7f258749e794" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Users' tweets", "evidence": [ "The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users." ], "highlighted_evidence": [ "We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users." 
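A rough sketch of the three age-extraction rules quoted above ("I am X years old", "Born in X", "X years old") follows. The real study compiles fuller lists of age prefixes and suffixes; the regular expressions and the birth-year cutoff below are illustrative assumptions.

```python
# Sketch of the three age-extraction rules applied to profile descriptions. Rule 2 yields
# a birth year, which is converted to an age; the other rules yield the age directly.
import re
from datetime import date

AGE_PATTERNS = [
    re.compile(r"\bi am (\d{1,2}) years? old\b"),   # rule 1: "I am X years old"
    re.compile(r"\bborn in (\d{4})\b"),             # rule 2: "Born in X"
    re.compile(r"\b(\d{1,2}) years? old\b"),        # rule 3: "X years old"
]

def extract_age(profile_description, current_year=date.today().year):
    text = profile_description.lower()
    for pattern in AGE_PATTERNS:
        match = pattern.search(text)
        if match:
            value = int(match.group(1))
            return current_year - value if value > 1900 else value
    return None  # age not self-disclosed
```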
] } ], "annotation_id": [ "e8c7a7ff219abef43c0444bb270cf20d3bfcb5f6" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Profile pictures from the Twitter users' profiles.", "evidence": [ "The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users." ], "highlighted_evidence": [ "We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users." ] } ], "annotation_id": [ "06fbe4ab4db9860966cc6a49627d3554a01ee590" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ] }
{ "caption": [ "Figure 1: Self-disclosure on Twitter from likely depressed users discovered by matching depressiveindicative terms", "Figure 2: The age distribution for depressed and control users in ground-truth dataset", "Figure 3: Gender and Depressive Behavior Association (Chi-square test: color-code: (blue:association), (red: repulsion), size: amount of each cell’s contribution)", "Table 3: Statistical significance (t-statistic) of the mean of salient features for depressed and control classes 20", "Figure 4: The Pearson correlation between the average emotions derived from facial expressions through the shared images and emotions from textual content for depressed-(a) and control users-(b). Pairs without statistically significant correlation are crossed (p-value <0.05)", "Figure 5: Characterizing Linguistic Patterns in two aspects: Depressive-behavior and Age Distribution", "Table 4: Statistical Significance Test of Linguistic Patterns/Visual Attributes for Different Age Groups with one-way ANOVA 31", "Figure 6: Ranking Features obtained from Different Modalities with an Ensemble Algorithm", "Table 7: Gender Prediction Performance through Visual and Textual Content", "Figure 7: The explanation of the log-odds prediction of outcome (0.31) for a sample user (y-axis shows the outcome probability (depressed or control), the bar labels indicate the log-odds impact of each feature)", "Table 8: Model’s Performance for Depressed User Identification from Twitter using different data modalities" ], "file": [ "2-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "6-Table3-1.png", "7-Figure4-1.png", "7-Figure5-1.png", "8-Table4-1.png", "9-Figure6-1.png", "10-Table7-1.png", "10-Figure7-1.png", "11-Table8-1.png" ] }
1905.06512
Incorporating Sememes into Chinese Definition Modeling
Chinese definition modeling is a challenging task that generates a dictionary definition in Chinese for a given Chinese word. To accomplish this task, we construct the Chinese Definition Modeling Corpus (CDM), which contains triples of a word, its sememes, and the corresponding definition. We present two novel models to improve Chinese definition modeling: the Adaptive-Attention Model (AAM) and the Self- and Adaptive-Attention Model (SAAM). AAM successfully incorporates sememes for generating the definition with an adaptive attention mechanism. It can decide which sememes to focus on and when to pay attention to them. SAAM further replaces the recurrent connections in AAM with self-attention and relies entirely on the attention mechanism, reducing the path length between word, sememes and definition. Experiments on CDM demonstrate that by incorporating sememes, our best proposed model can outperform the state-of-the-art method by +6.0 BLEU.
{ "section_name": [ "Introduction", "Methodology", "Baseline Model", "Adaptive-Attention Model", "Self- and Adaptive-Attention Model", "Experiments", "Dataset", "Settings", "Results", "Definition Modeling", "Knowledge Bases", "Self-Attention", "Conclusion" ], "paragraphs": [ [ "Chinese definition modeling is the task of generating a definition in Chinese for a given Chinese word. This task can benefit the compilation of dictionaries, especially dictionaries for Chinese as a foreign language (CFL) learners.", "In recent years, the number of CFL learners has risen sharply. In 2017, 770,000 people took the Chinese Proficiency Test, an increase of 38% from 2016. However, most Chinese dictionaries are for native speakers. Since these dictionaries usually require a fairly high level of Chinese, it is necessary to build a dictionary specifically for CFL learners. Manually writing definitions relies on the knowledge of lexicographers and linguists, which is expensive and time-consuming BIBREF0 , BIBREF1 , BIBREF2 . Therefore, the study on writing definitions automatically is of practical significance.", "Definition modeling was first proposed by BIBREF3 as a tool to evaluate different word embeddings. BIBREF4 extended the work by incorporating word sense disambiguation to generate context-aware word definition. Both methods are based on recurrent neural network encoder-decoder framework without attention. In contrast, this paper formulates the definition modeling task as an automatic way to accelerate dictionary compilation.", "In this work, we introduce a new dataset for the Chinese definition modeling task that we call Chinese Definition Modeling Corpus cdm(CDM). CDM consists of 104,517 entries, where each entry contains a word, the sememes of a specific word sense, and the definition in Chinese of the same word sense. Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes, as is illustrated in Figure 1 . For a given word sense, CDM annotates the sememes according to HowNet BIBREF5 , and the definition according to Chinese Concept Dictionary (CCD) BIBREF6 . Since sememes have been widely used in improving word representation learning BIBREF7 and word similarity computation BIBREF8 , we argue that sememes can benefit the task of definition modeling.", "We propose two novel models to incorporate sememes into Chinese definition modeling: the Adaptive-Attention Model (AAM) and the Self- and Adaptive-Attention Model (SAAM). Both models are based on the encoder-decoder framework. The encoder maps word and sememes into a sequence of continuous representations, and the decoder then attends to the output of the encoder and generates the definition one word at a time. Different from the vanilla attention mechanism, the decoder of both models employs the adaptive attention mechanism to decide which sememes to focus on and when to pay attention to sememes at one time BIBREF9 . Following BIBREF3 , BIBREF4 , the AAM is built using recurrent neural networks (RNNs). However, recent works demonstrate that attention-based architecture that entirely eliminates recurrent connections can obtain new state-of-the-art in neural machine translation BIBREF10 , constituency parsing BIBREF11 and semantic role labeling BIBREF12 . In the SAAM, we replace the LSTM-based encoder and decoder with an architecture based on self-attention. 
This fully attention-based model allows for more parallelization, reduces the path length between word, sememes and the definition, and can reach a new state-of-the-art on the definition modeling task. To the best of our knowledge, this is the first work to introduce the attention mechanism and utilize external resource for the definition modeling task.", "In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform the state-of-the-art approach with a large margin. By efficiently incorporating sememes, the SAAM achieves the best performance with improvement over the state-of-the-art method by +6.0 BLEU." ], [ "The definition modeling task is to generate an explanatory sentence for the interpreted word. For example, given the word “旅馆” (hotel), a model should generate a sentence like this: “给旅行者提供食宿和其他服务的地方” (A place to provide residence and other services for tourists). Since distributed representations of words have been shown to capture lexical syntax and semantics, it is intuitive to employ word embeddings to generate natural language definitions.", "Previously, BIBREF3 proposed several model architectures to generate a definition according to the distributed representation of a word. We briefly summarize their model with the best performance in Section \"Experiments\" and adopt it as our baseline model.", "Inspired by the works that use sememes to improve word representation learning BIBREF7 and word similarity computation BIBREF8 , we propose the idea of incorporating sememes into definition modeling. Sememes can provide additional semantic information for the task. As shown in Figure 1 , sememes are highly correlated to the definition. For example, the sememe “场所” (place) is related with the word “地方” (place) of the definition, and the sememe “旅游” (tour) is correlated to the word “旅行者” (tourists) of the definition.", "Therefore, to make full use of the sememes in CDM dataset, we propose AAM and SAAM for the task, in Section \"Adaptive-Attention Model\" and Section \"Self- and Adaptive-Attention Model\" , respectively." ], [ "The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework. Without utilizing the information of sememes, it learns a probabilistic mapping $P(y | x)$ from the word $x$ to be defined to a definition $y = [y_1, \\dots , y_T ]$ , in which $y_t$ is the $t$ -th word of definition $y$ .", "More concretely, given a word $x$ to be defined, the encoder reads the word and generates its word embedding $\\mathbf {x}$ as the encoded information. Afterward, the decoder computes the conditional probability of each definition word $y_t$ depending on the previous definition words $y_{<t}$ , as well as the word being defined $x$ , i.e., $P(y_t|y_{<t},x)$ . $P(y_t|y_{<t},x)$ is given as: ", "$$& P(y_t|y_{<t},x) \\propto \\exp {(y_t;\\mathbf {z}_t,\\mathbf {x})} & \\\\\n& \\mathbf {z}_t = f(\\mathbf {z}_{t-1},y_{t-1},\\mathbf {x}) &$$ (Eq. 4) ", "where $\\mathbf {z}_t$ is the decoder's hidden state at time $t$ , $f$ is a recurrent nonlinear function such as LSTM and GRU, and $\\mathbf {x}$ is the embedding of the word being defined. Then the probability of $P(y | x)$ can be computed according to the probability chain rule: ", "$$P(y | x) = \\prod _{t=1}^{T} P(y_t|y_{<t},x)$$ (Eq. 5) ", "We denote all the parameters in the model as $\\theta $ and the definition corpus as $D_{x,y}$ , which is a set of word-definition pairs. 
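As a concrete reading of the baseline's factorization in Eqs. 4–5, the sketch below conditions a recurrent decoder on the embedding of the word being defined and produces a distribution over definition tokens at each step. It is an illustrative PyTorch sketch with our own module and variable names, not the original implementation.

```python
# Minimal PyTorch sketch of the baseline in Eqs. 4-5: the decoder state is updated from
# the previous definition word and the embedding x of the word being defined, and
# P(y_t | y_<t, x) is a softmax over the definition vocabulary.
import torch
import torch.nn as nn

class BaselineDefinitionDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Input at each step: previous definition word embedding concatenated with x.
        self.rnn = nn.GRUCell(emb_dim * 2, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_emb, prev_token, state):
        inp = torch.cat([self.embed(prev_token), word_emb], dim=-1)
        state = self.rnn(inp, state)      # z_t = f(z_{t-1}, y_{t-1}, x)
        logits = self.out(state)          # softmax(logits) gives P(y_t | y_<t, x)
        return logits, state

# Greedy decoding would repeatedly feed back argmax(logits) until an end-of-sequence
# token appears; the chain rule of Eq. 5 is then the product of the step probabilities.
```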
Then the model parameters can be learned by maximizing the log-likelihood: ", "$$\\hat{\\theta } = \\mathop {\\rm argmax}_{\\theta } \\sum _{\\langle x, y \\rangle \\in D_{x,y}}\\log P(y | x; \\theta ) $$ (Eq. 6) " ], [ "Our proposed model aims to incorporate sememes into the definition modeling task. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \\dots , s_N ]$ , we define the probability of generating the definition $y=[y_1, \\dots , y_t ]$ as: ", "$$P(y | x, s) = \\prod _{t=1}^{T} P(y_t|y_{<t},x,s) $$ (Eq. 8) ", "Similar to Eq. 6 , we can maximize the log-likelihood with the definition corpus $D_{x,s,y}$ to learn model parameters: ", "$$\\hat{\\theta } = \\mathop {\\rm argmax}_{\\theta } \\sum _{\\langle x,s,y \\rangle \\in D_{x,s,y}}\\log P(y | x, s; \\theta ) $$ (Eq. 9) ", "The probability $P(y | x, s)$ can be implemented with an adaptive attention based encoder-decoder framework, which we call Adaptive-Attention Model (AAM). The new architecture consists of a bidirectional RNN as the encoder and a RNN decoder that adaptively attends to the sememes during decoding a definition.", "Similar to BIBREF13 , the encoder is a bidirectional RNN, consisting of forward and backward RNNs. Given the word to be defined $x$ and its corresponding sememes $s=[s_1, \\dots , s_N ]$ , we define the input sequence of vectors for the encoder as $\\mathbf {v}=[\\mathbf {v}_1,\\dots ,\\mathbf {v}_{N}]$ . The vector $\\mathbf {v}_n$ is given as follows: ", "$$\\mathbf {v}_n = [\\mathbf {x}; \\mathbf {s}_n ]$$ (Eq. 11) ", "where $\\mathbf {x}$ is the vector representation of the word $x$ , $\\mathbf {s}_n$ is the vector representation of the $n$ -th sememe $s_n$ , and $[\\mathbf {a};\\mathbf {b}]$ denote concatenation of vector $\\mathbf {a}$ and $\\mathbf {b}$ .", "The forward RNN $\\overrightarrow{f}$ reads the input sequence of vectors from $\\mathbf {v}_1$ to $\\mathbf {v}_N$ and calculates a forward hidden state for position $n$ as: ", "$$\\overrightarrow{\\mathbf {h}_{n}} &=& f(\\mathbf {v}_n, \\overrightarrow{\\mathbf {h}_{n-1}})$$ (Eq. 12) ", "where $f$ is an LSTM or GRU. Similarly, the backward RNN $\\overleftarrow{f}$ reads the input sequence of vectors from $\\mathbf {v}_N$ to $\\mathbf {v}_1$ and obtain a backward hidden state for position $n$ as: ", "$$\\overleftarrow{\\mathbf {h}_{n}} &=& f(\\mathbf {h}_n, \\overleftarrow{\\mathbf {h}_{n+1}})$$ (Eq. 13) ", "In this way, we obtain a sequence of encoder hidden states $\\mathbf {h}=\\left[\\mathbf {h}_1,...,\\mathbf {h}_N\\right]$ , by concatenating the forward hidden state $\\overrightarrow{\\mathbf {h}_{n}}$ and the backward one $\\overleftarrow{\\mathbf {h}_{n}}$ at each position $n$ : ", "$$\\mathbf {h}_n=\\left[\\overrightarrow{\\mathbf {h}_{n}}, \\overleftarrow{\\mathbf {h}_{n}}\\right]$$ (Eq. 14) ", "The hidden state $\\mathbf {h}_n$ captures the sememe- and word-aware information of the $n$ -th sememe.", "As attention-based neural encoder-decoder frameworks have shown great success in image captioning BIBREF14 , document summarization BIBREF15 and neural machine translation BIBREF13 , it is natural to adopt the attention-based recurrent decoder in BIBREF13 as our decoder. The vanilla attention attends to the sememes at every time step. However, not all words in the definition have corresponding sememes. For example, sememe “住下” (reside) could be useful when generating “食宿” (residence), but none of the sememes is useful when generating “提供” (provide). 
Besides, language correlations make the sememes unnecessary when generating words like “和” (and) and “给” (for).", "Inspired by BIBREF9 , we introduce the adaptive attention mechanism for the decoder. At each time step $t$ , we summarize the time-varying sememes' information as sememe context, and the language model's information as LM context. Then, we use another attention to obtain the context vector, relying on either the sememe context or LM context.", "More concretely, we define each conditional probability in Eq. 8 as: ", "$$& P(y_t|y_{<t},x,s) \\propto \\exp {(y_t;\\mathbf {z}_t,\\mathbf {c}_t)} & \\\\\n& \\mathbf {z}_t = f(\\mathbf {z}_{t-1},y_{t-1},\\mathbf {c}_t) & $$ (Eq. 17) ", "where $\\mathbf {c}_t$ is the context vector from the output of the adaptive attention module at time $t$ , $\\mathbf {z}_t$ is a decoder's hidden state at time $t$ .", "To obtain the context vector $\\mathbf {c}_t$ , we first compute the sememe context vector $\\hat{\\mathbf {c}_t}$ and the LM context $\\mathbf {o}_t$ . Similar to the vanilla attention, the sememe context $\\hat{\\mathbf {c}_t}$ is obtained with a soft attention mechanism as: ", "$$\\hat{\\mathbf {c}_t} = \\sum _{n=1}^{N} \\alpha _{tn} \\mathbf {h}_n,$$ (Eq. 18) ", "where ", "$$\\alpha _{tn} &=& \\frac{\\mathrm {exp}(e_{tn})}{\\sum _{i=1}^{N} \\mathrm {exp}(e_{ti})} \\nonumber \\\\\ne_{tn} &=& \\mathbf {w}_{\\hat{c}}^T[\\mathbf {h}_n; \\mathbf {z}_{t-1}].$$ (Eq. 19) ", "Since the decoder's hidden states store syntax and semantic information for language modeling, we compute the LM context $\\mathbf {o}_t$ with a gated unit, whose input is the definition word $y_t$ and the previous hidden state $\\mathbf {z}_{t-1}$ : ", "$$\\mathbf {g}_t &=& \\sigma (\\mathbf {W}_g [y_{t-1}; \\mathbf {z}_{t-1}] + \\mathbf {b}_g) \\nonumber \\\\\n\\mathbf {o}_t &=& \\mathbf {g}_t \\odot \\mathrm {tanh} (\\mathbf {z}_{t-1}) $$ (Eq. 20) ", "Once the sememe context vector $\\hat{\\mathbf {c}_t}$ and the LM context $\\mathbf {o}_t$ are ready, we can generate the context vector with an adaptive attention layer as: ", "$$\\mathbf {c}_t = \\beta _t \\mathbf {o}_t + (1-\\beta _t)\\hat{\\mathbf {c}_t}, $$ (Eq. 21) ", "where ", "$$\\beta _{t} &=& \\frac{\\mathrm {exp}(e_{to})}{\\mathrm {exp}(e_{to})+\\mathrm {exp}(e_{t\\hat{c}})} \\nonumber \\\\\ne_{to} &=& (\\mathbf {w}_c)^T[\\mathbf {o}_t;\\mathbf {z}_t] \\nonumber \\\\\ne_{t\\hat{c}} &=& (\\mathbf {w}_c)^T[\\hat{\\mathbf {c}_t};\\mathbf {z}_t] $$ (Eq. 22) ", " $\\beta _{t}$ is a scalar in range $[0,1]$ , which controls the relative importance of LM context and sememe context.", "Once we obtain the context vector $\\mathbf {c}_t$ , we can update the decoder's hidden state and generate the next word according to Eq. and Eq. 17 , respectively." ], [ "Recent works demonstrate that an architecture entirely based on attention can obtain new state-of-the-art in neural machine translation BIBREF10 , constituency parsing BIBREF11 and semantic role labeling BIBREF12 . SAAM adopts similar architecture and replaces the recurrent connections in AAM with self-attention. 
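Before the discussion moves on to SAAM, the AAM decoding step of Eqs. 18–22 can be sketched as follows: the sememe context is a soft attention over the encoder states (each of which encodes the concatenation [x; s_n] from Eq. 11), the LM context is a gated transform of the previous decoder state, and a two-way attention mixes the two. This is an illustrative PyTorch sketch with assumed names, where encoder and decoder states are taken to have the same dimensionality; the paper scores the contexts against the current decoder state, whereas the sketch uses the previous one so that everything is computed in a single pass.

```python
# Sketch of one adaptive-attention step of the AAM (Eqs. 18-22) for a single example.
# h: encoder states (N, dim); z_prev: previous decoder state (dim,);
# y_prev_emb: embedding of the previous definition word (emb_dim,).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAttention(nn.Module):
    def __init__(self, dim, emb_dim):
        super().__init__()
        self.w_chat = nn.Linear(2 * dim, 1, bias=False)   # e_tn = w^T [h_n; z]   (Eq. 19)
        self.gate = nn.Linear(emb_dim + dim, dim)          # g_t                   (Eq. 20)
        self.w_c = nn.Linear(2 * dim, 1, bias=False)       # e_to, e_tc            (Eq. 22)

    def forward(self, h, z_prev, y_prev_emb):
        # Sememe context: soft attention over the encoder states (Eq. 18).
        scores = self.w_chat(torch.cat([h, z_prev.expand_as(h)], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)
        c_hat = (alpha.unsqueeze(-1) * h).sum(dim=0)
        # LM context: gated transform of the previous decoder state (Eq. 20).
        g = torch.sigmoid(self.gate(torch.cat([y_prev_emb, z_prev], dim=-1)))
        o = g * torch.tanh(z_prev)
        # Adaptive mix: beta decides whether to rely on the LM or on the sememes (Eqs. 21-22).
        e = torch.stack([self.w_c(torch.cat([o, z_prev], dim=-1)),
                         self.w_c(torch.cat([c_hat, z_prev], dim=-1))]).squeeze(-1)
        beta = F.softmax(e, dim=0)[0]
        return beta * o + (1.0 - beta) * c_hat
```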
Such an architecture not only reduces the training time by allowing for more parallelization, but also better learns the dependencies between the word, its sememes and the tokens of the definition by reducing the path length between them.", "Given the word to be defined $x$ and its corresponding ordered sememes $s=[s_1, \\dots , s_{N}]$ , we combine them as the input sequence of embeddings for the encoder, i.e., $\\mathbf {v}=[\\mathbf {v}_0, \\mathbf {v}_1, \\dots , \\mathbf {v}_{N}]$ . The $n$ -th vector $\\mathbf {v}_n$ is defined as: ", "$$\\mathbf {v}_n =\n{\\left\\lbrace \\begin{array}{ll}\n\\mathbf {x}, &n=0 \\cr \\mathbf {s}_n, &n>0\n\\end{array}\\right.}$$ (Eq. 25) ", "where $\\mathbf {x}$ is the vector representation of the given word $x$ , and $\\mathbf {s}_n$ is the vector representation of the $n$ -th sememe $s_n$ .", "Although the input sequence is not time ordered, position $n$ in the sequence carries some useful information. First, position 0 corresponds to the word to be defined, while other positions correspond to the sememes. Secondly, sememes are sorted into a logical order in HowNet. For example, as the first sememe of the word “旅馆” (hotel), the sememe “场所” (place) describes its most important aspect, namely, the definition of “旅馆” (hotel) should be “…… 的地方” (a place for ...). Therefore, we add a learned position embedding to the input embeddings for the encoder: ", "$$\\mathbf {v}_n = \\mathbf {v}_n + \\mathbf {p}_n$$ (Eq. 26) ", "where $\\mathbf {p}_n$ is the position embedding that can be learned during training.", "Then the vectors $\\mathbf {v}=[\\mathbf {v}_0, \\mathbf {v}_1, \\dots , \\mathbf {v}_{N}]$ are transformed by a stack of identical layers, where each layer consists of two sublayers: a multi-head self-attention layer and a position-wise fully connected feed-forward layer. Each of the layers is connected by residual connections, followed by layer normalization BIBREF16 . We refer the readers to BIBREF10 for the details of the layers. The output of the encoder stack is a sequence of hidden states, denoted as $\\mathbf {h}=[\\mathbf {h}_0, \\mathbf {h}_1, \\dots , \\mathbf {h}_{N}]$ .", "The decoder is also composed of a stack of identical layers. In BIBREF10 , each layer includes three sublayers: a masked multi-head self-attention layer, a multi-head attention layer that attends over the output of the encoder stack, and a position-wise fully connected feed-forward layer. In our model, we replace the two multi-head attention layers with an adaptive multi-head attention layer. Similar to the adaptive attention layer in AAM, the adaptive multi-head attention layer can adaptively decide which sememes to focus on and when to attend to sememes at each time step and each layer. Figure 2 shows the architecture of the decoder.", "Different from the adaptive attention layer in AAM, which uses single-head attention to obtain the sememe context and a gate unit to obtain the LM context, the adaptive multi-head attention layer utilizes multi-head attention to obtain both contexts. Multi-head attention performs multiple single-head attentions in parallel with linearly projected keys, values and queries, and then combines the outputs of all heads to obtain the final attention result. We omit the details here and refer the readers to BIBREF10 .
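As an illustration of the SAAM encoder input just described (Eqs. 25–26), the sketch below builds the sequence [x, s_1, ..., s_N] and adds a learned position embedding before the self-attention stack. It is a minimal sketch under our own naming; the embedding tables and the maximum number of sememes are assumptions, not the authors' implementation.

```python
# Sketch of the SAAM encoder input: position 0 holds the embedding of the word being
# defined, positions 1..N hold its sememe embeddings, and a learned position embedding
# is added element-wise (Eqs. 25-26).
import torch
import torch.nn as nn

class SAAMEncoderInput(nn.Module):
    def __init__(self, vocab_size, sememe_vocab_size, dim=300, max_sememes=32):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.sememe_emb = nn.Embedding(sememe_vocab_size, dim)
        self.pos_emb = nn.Embedding(max_sememes + 1, dim)   # learned positions

    def forward(self, word_id, sememe_ids):
        # v_0 = x, v_n = s_n for n > 0   (Eq. 25)
        v = torch.cat([self.word_emb(word_id).unsqueeze(0),
                       self.sememe_emb(sememe_ids)], dim=0)
        positions = torch.arange(v.size(0), device=v.device)
        return v + self.pos_emb(positions)                   # fed to the encoder stack
```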
Formally, given the hidden state $\\mathbf {z}_t^{l-1}$ at time $t$ , layer $l-1$ of the decoder, we obtain the LM context with multi-head self-attention: ", "$$\\mathbf {o}_t^l = \\textit {MultiHead}(\\mathbf {z}_t^{l-1},\\mathbf {z}_{\\le t}^{l-1},\\mathbf {z}_{\\le t}^{l-1})$$ (Eq. 28) ", "where the decoder's hidden state $\\mathbf {z}_t^{l-1}$ at time $t$ , layer $l-1$ is the query, and $\\mathbf {z}_{\\le t}^{l-1}=[\\mathbf {z}_1^{l-1},...,\\mathbf {z}_t^{l-1}]$ , the decoder's hidden states from time 1 to time $t$ at layer $l-1$ , are the keys and values. To obtain better LM context, we employ residual connection and layer normalization after the multi-head self-attention. Similarly, the sememe context can be computed by attending to the encoder's outputs with multi-head attention: ", "$$\\hat{\\mathbf {c}_t}^l = \\textit {MultiHead}(\\mathbf {o}_t^l,\\mathbf {h},\\mathbf {h})$$ (Eq. 29) ", "where $\\mathbf {o}_t^l$ is the query, and the output from the encoder stack $\\mathbf {h}=[\\mathbf {h}_0, \\mathbf {h}_1, \\dots , \\mathbf {h}_{N}]$ , are the values and keys.", "Once obtaining the sememe context vector $\\hat{\\mathbf {c}_t}^l$ and the LM context $\\mathbf {o}_t^l$ , we compute the output from the adaptive attention layer with: ", "$$\\mathbf {c}_t^l = \\beta _t^l \\mathbf {o}_t^l + (1-\\beta _t^l)\\hat{\\mathbf {c}_t}^l, $$ (Eq. 30) ", "where ", "$$\\beta _{t}^l &=& \\frac{\\mathrm {exp}(e_{to})}{\\mathrm {exp}(e_{to})+\\mathrm {exp}(e_{t\\hat{c}})} \\nonumber \\\\\ne_{to}^l &=& (\\mathbf {w}_c^l)^T[\\mathbf {o}_t^l;\\mathbf {z}_t^{l-1}] \\nonumber \\\\\ne_{t\\hat{c}}^l &=& (\\mathbf {w}_c^l)^T[\\hat{\\mathbf {c}_t}^l;\\mathbf {z}_t^{l-1}] $$ (Eq. 31) " ], [ "In this section, we will first introduce the construction process of the CDM dataset, then the experimental results and analysis." ], [ "To verify our proposed models, we construct the CDM dataset for the Chinese definition modeling task. cdmEach entry in the dataset is a triple that consists of: the interpreted word, sememes and a definition for a specific word sense, where the sememes are annotated with HowNet BIBREF5 , and the definition are annotated with Chinese Concept Dictionary (CCD) BIBREF6 .", "Concretely, for a common word in HowNet and CCD, we first align its definitions from CCD and sememe groups from HowNet, where each group represents one word sense. We define the sememes of a definition as the combined sememes associated with any token of the definition. Then for each definition of a word, we align it with the sememe group that has the largest number of overlapping sememes with the definition's sememes. With such aligned definition and sememe group, we add an entry that consists of the word, the sememes of the aligned sememe group and the aligned definition. Each word can have multiple entries in the dataset, especially the polysemous word. To improve the quality of the created dataset, we filter out entries that the definition contains the interpreted word, or the interpreted word is among function words, numeral words and proper nouns.", "After processing, we obtain the dataset that contains 104,517 entries with 30,052 unique interpreted words. We divide the dataset according to the unique interpreted words into training set, validation set and test set with a ratio of 18:1:1. Table 1 shows the detailed data statistics." ], [ "We show the effectiveness of all models on the CDM dataset. 
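The adaptive multi-head attention layer of Eqs. 28–31 can be sketched with standard PyTorch multi-head attention as below. This is a simplified, illustrative rendering under our own naming: residual connections and layer normalization are omitted, the causal mask that restricts self-attention to z_{<=t} is supplied by the caller, and the two-way softmax over the scores is written as a sigmoid of their difference.

```python
# Sketch of one adaptive multi-head attention sublayer (Eqs. 28-31). o is the LM context
# from masked self-attention over the decoder states, c_hat is the sememe context from
# attending to the encoder output h, and a learned scoring vector mixes the two.
import torch
import torch.nn as nn

class AdaptiveMultiHeadAttention(nn.Module):
    def __init__(self, dim=300, n_heads=5):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.enc_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.w_c = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, z, h, causal_mask):
        # LM context (Eq. 28): self-attention over z_{<=t}, enforced by the causal mask.
        o, _ = self.self_attn(z, z, z, attn_mask=causal_mask)
        # Sememe context (Eq. 29): attend from o to the encoder output h.
        c_hat, _ = self.enc_attn(o, h, h)
        # Adaptive mix (Eqs. 30-31): score each context against the incoming state z.
        e_o = self.w_c(torch.cat([o, z], dim=-1))
        e_c = self.w_c(torch.cat([c_hat, z], dim=-1))
        beta = torch.sigmoid(e_o - e_c)   # two-way softmax == sigmoid of the score difference
        return beta * o + (1.0 - beta) * c_hat
```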
All the embeddings, including word and sememe embeddings, are fixed 300-dimensional word embeddings pretrained on the Chinese Gigaword corpus (LDC2011T13). All definitions are segmented with the Jieba Chinese text segmentation tool, and we use the resulting unique segments as the decoder vocabulary. To evaluate the difference between the generated results and the gold-standard definitions, we compute the BLEU score using a script provided by Moses, following BIBREF3 . We implement the Baseline and AAM by modifying the code of BIBREF9 , and SAAM with fairseq-py .", "We use a two-layer LSTM network as the recurrent component. We set the batch size to 128, and the dimension of the hidden state to 300 for the decoder. The Adam optimizer is employed with an initial learning rate of $1\\times 10^{-3}$ . Since the morphemes of the word to be defined can benefit definition modeling, BIBREF3 obtain the model with the best performance by adding a trainable embedding from a character-level CNN to the fixed word embedding. To obtain the state-of-the-art result as the baseline, we follow BIBREF3 and experiment with the character-level CNN with the same hyperparameters.", "To be comparable with the baseline, we also use a two-layer LSTM network as the recurrent component. We set the batch size to 128, and the dimension of the hidden state to 300 for both the encoder and the decoder. The Adam optimizer is employed with an initial learning rate of $1\\times 10^{-3}$ .", "We use the same hyperparameters as BIBREF10 , set to $(d_{\\text{model}}=300, d_{\\text{hidden}}=2048, n_{\\text{head}}=5, n_{\\text{layer}}=6)$ . To be comparable with AAM, we use the same batch size of 128. We also employ the label smoothing technique BIBREF17 with a smoothing value of 0.1 during training." ], [ "We report the experimental results on the CDM test set in Figure 3 . It shows that both of our proposed models, namely AAM and SAAM, achieve good results and outperform the baseline by a large margin. With sememes, AAM and SAAM improve over the baseline by +3.1 BLEU and +6.65 BLEU, respectively.", "We also find that sememes are very useful for generating the definition. The incorporation of sememes improves the AAM by +3.32 BLEU and the SAAM by +3.53 BLEU. This can be explained by the fact that sememes help to disambiguate the word sense associated with the target definition.", "Among all models, the SAAM that incorporates sememes achieves the new state of the art, with a BLEU score of 36.36 on the test set, demonstrating the effectiveness of sememes and of the SAAM architecture.", "Table 2 lists some example definitions generated with different models. For each word-sememes pair, the three generated definitions are listed in the order Baseline, AAM and SAAM. For AAM and SAAM, we use the model that incorporates sememes. These examples show that with sememes, the model can generate more accurate and concrete definitions. For example, for the word “旅馆” (hotel), the baseline model fails to generate a definition containing the token “旅行者” (tourists). However, by incorporating the sememes' information, especially the sememe “旅游” (tour), AAM and SAAM successfully generate “旅行者” (tourists). Manual inspection of other examples also supports our claim.", "We also conduct an ablation study to evaluate the various choices we made for SAAM. We consider three key components: position embedding, the adaptive attention layer, and the incorporated sememes.
As illustrated in Table 3 , we remove one of these components at a time and report the performance of the resulting model on the validation and test sets. We also list the performance of the baseline and AAM for reference.", "It demonstrates that all components benefit the SAAM. Removing the position embedding costs 0.31 BLEU on the test set, and removing the adaptive attention layer costs 0.43 BLEU. The sememes matter the most: without incorporating sememes, the performance drops by 3.53 BLEU on the test set." ], [ "Distributed representations of words, or word embeddings BIBREF18 , have been widely used in the field of NLP in recent years. Since word embeddings have been shown to capture lexical semantics, BIBREF3 proposed the definition modeling task as a more transparent and direct representation of word embeddings. This work was followed by BIBREF4 , who studied the problem of word ambiguities in definition modeling by employing latent variable modeling and soft attention mechanisms. Both works focus on evaluating and interpreting word embeddings. In contrast, we incorporate sememes to generate word-sense-aware definitions for dictionary compilation." ], [ "Recently, many knowledge bases (KBs) have been established in order to organize human knowledge in structural forms. By providing human experiential knowledge, KBs are playing an increasingly important role as infrastructural facilities of natural language processing.", "HowNet BIBREF19 is a knowledge base that annotates each concept in Chinese with one or more sememes. HowNet plays an important role in understanding the semantic meanings of concepts in human languages, and has been widely used in word representation learning BIBREF7 , word similarity computation BIBREF20 and sentiment analysis BIBREF21 . For example, BIBREF7 improved word representation learning by utilizing sememes to represent various senses of each word and selecting suitable senses in contexts with an attention mechanism.", "The Chinese Concept Dictionary (CCD) is a WordNet-like semantic lexicon BIBREF22 , BIBREF23 , where each concept is defined by a set of synonyms (SynSet). CCD has been widely used in many NLP tasks, such as word sense disambiguation BIBREF23 .", "In this work, we annotate each word with aligned sememes from HowNet and a definition from CCD." ], [ "Self-attention is a special case of the attention mechanism that relates different positions of a single sequence in order to compute a representation for the sequence. Self-attention has been successfully applied to many tasks recently BIBREF24 , BIBREF25 , BIBREF26 , BIBREF10 , BIBREF12 , BIBREF11 .", " BIBREF10 introduced the first transduction model based on self-attention by replacing the recurrent layers commonly used in encoder-decoder architectures with multi-head self-attention. The proposed model, called the Transformer, achieved state-of-the-art performance on neural machine translation with reduced training time. After that, BIBREF12 demonstrated that self-attention can improve semantic role labeling by handling structural information and long-range dependencies. BIBREF11 further extended self-attention to constituency parsing and showed that the use of self-attention helped to analyze the model by making explicit the manner in which information is propagated between different locations in the sentence.", "Self-attention has many good properties.
It reduces the computation complexity per layer, allows for more parallelization and reduces the path length between long-range dependencies in the network. In this paper, we use self-attention based architecture in SAAM to learn the relations of word, sememes and definition automatically." ], [ "We introduce the Chinese definition modeling task that generates a definition in Chinese for a given word and sememes of a specific word sense. This task is useful for dictionary compilation. To achieve this, we constructed the CDM dataset with word-sememes-definition triples. We propose two novel methods, AAM and SAAM, to generate word sense aware definition by utilizing sememes. In experiments on the CDM dataset we show that our proposed AAM and SAAM outperform the state-of-the-art approach with a large margin. By efficiently incorporating sememes, the SAAM achieves the best performance with improvement over the state-of-the-art method." ] ] }
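The alignment procedure described in the Dataset section above (matching each CCD definition of a word to the HowNet sememe group with the largest overlap with the definition's sememes) can be sketched as follows. The data structures are illustrative assumptions, and the filtering of function words, numerals and proper nouns is omitted.

```python
# Sketch of the CDM alignment: a definition's sememe set is the union of the sememes of
# its tokens, and the definition is paired with the sememe group (word sense) that has
# the largest overlap with that set.
def align_definitions(definitions, sememe_groups, token_sememes):
    # definitions: list of token lists (one per CCD definition of the word)
    # sememe_groups: list of sememe sets (one per HowNet sense of the word)
    # token_sememes: dict mapping a token to its set of sememes
    entries = []
    for tokens in definitions:
        def_sememes = set()
        for tok in tokens:
            def_sememes |= token_sememes.get(tok, set())
        # Pick the sense whose sememe group overlaps the definition's sememes the most.
        best = max(sememe_groups, key=lambda group: len(group & def_sememes))
        if len(best & def_sememes) > 0:
            entries.append((best, tokens))
    return entries
```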
{ "question": [ "Is there an online demo of their system?", "Do they perform manual evaluation?", "Do they compare against Noraset et al. 2017?", "What is a sememe?" ], "question_id": [ "e21a8581cc858483a31c6133e53dd0cfda76ae4c", "9f6e877e3bde771595e8aee10c2656a0e7b9aeb2", "a3783e42c2bf616c8a07bd3b3d503886660e4344", "0d0959dba3f7c15ee4f5cdee51682656c4abbd8f" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "research", "research", "research", "research" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "somewhat" ], "search_query": [ "definition modeling", "definition modeling", "definition modeling", "definition modeling" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "7ed2fe79d7f624888ae6b9fa6869da32e2faf92a" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Table 2 lists some example definitions generated with different models. For each word-sememes pair, the generated three definitions are ordered according to the order: Baseline, AAM and SAAM. For AAM and SAAM, we use the model that incorporates sememes. These examples show that with sememes, the model can generate more accurate and concrete definitions. For example, for the word “旅馆” (hotel), the baseline model fails to generate definition containing the token “旅行者”(tourists). However, by incoporating sememes' information, especially the sememe “旅游” (tour), AAM and SAAM successfully generate “旅行者”(tourists). Manual inspection of others examples also supports our claim." ], "highlighted_evidence": [ "Manual inspection of others examples also supports our claim." ] } ], "annotation_id": [ "649c77288284835e771deb556cf7cc521ecc731a" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework. Without utilizing the information of sememes, it learns a probabilistic mapping $P(y | x)$ from the word $x$ to be defined to a definition $y = [y_1, \\dots , y_T ]$ , in which $y_t$ is the $t$ -th word of definition $y$ ." ], "highlighted_evidence": [ "The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework." ] } ], "annotation_id": [ "2cc66ef15aa155ac36125e56593b43ca3ee4a19b" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this work, we introduce a new dataset for the Chinese definition modeling task that we call Chinese Definition Modeling Corpus cdm(CDM). CDM consists of 104,517 entries, where each entry contains a word, the sememes of a specific word sense, and the definition in Chinese of the same word sense. 
Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes, as is illustrated in Figure 1 . For a given word sense, CDM annotates the sememes according to HowNet BIBREF5 , and the definition according to Chinese Concept Dictionary (CCD) BIBREF6 . Since sememes have been widely used in improving word representation learning BIBREF7 and word similarity computation BIBREF8 , we argue that sememes can benefit the task of definition modeling." ], "highlighted_evidence": [ "Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes, as is illustrated in Figure 1 ." ] } ], "annotation_id": [ "03ce80e87b870ec527dd4c61ef7e7af9f3ae65f9" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ] }
{ "caption": [ "Figure 1: An example of the CDM dataset. The word “旅馆” (hotel) has five sememes, which are “场所” (place), “旅游” (tour), “吃” (eat), “娱乐” (recreation) and “住下” (reside).", "Figure 2: An overview of the decoder for the SAAM. The left sub-figure shows our decoder contains N identical layers, where each layer contains two sublayer: adaptive multi-head attention layer and feed-forward layer. The right sub-figure shows how we perform the adaptive multi-head attention at layer l and time t for the decoder. zlt represents the hidden state of the decoder at layer l, time t. h denotes the output from the encoder stack. ĉt l is the sememe context, while olt is the LM context. c l t is the output of the adaptive multi-head attention layer at time t.", "Table 1: Statistics of the CDM dataset. Jieba Chinese text segmentation tool is used during segmentation.", "Figure 3: Experimental results of the three models on CDM test set. Since this is the first work to utilize sememes and attention mechanism for definition modeling, the baseline method is non-attention and nonsememes.", "Table 2: Example definitions generated by our models. Baseline represents Noraset et al. (2017). Note that Baseline do not utilize sememes, while the AAM and SAAM models both use sememes.", "Table 3: Ablation study: BLEU scores on the CDM validation set and test set. For the last three rows, we remove position embedding, the adaptive attention layer or sememes information from SAAM model." ], "file": [ "2-Figure1-1.png", "4-Figure2-1.png", "6-Table1-1.png", "6-Figure3-1.png", "7-Table2-1.png", "7-Table3-1.png" ] }
2001.06286
RobBERT: a Dutch RoBERTa-based Language Model
Pre-trained language models have been dominating the field of natural language processing in recent years, and have led to significant performance gains for various complex natural language tasks. One of the most prominent pre-trained language models is BERT (Bidirectional Encoder Representations from Transformers), which was released in an English as well as a multilingual version. Although multilingual BERT performs well on many tasks, recent studies showed that BERT models trained on a single language significantly outperform the multilingual results. Training a Dutch BERT model thus has a lot of potential for a wide range of Dutch NLP tasks. While previous approaches have used earlier implementations of BERT to train their Dutch BERT, we used RoBERTa, a robustly optimized BERT approach, to train a Dutch language model called RobBERT. We show that RobBERT improves state-of-the-art results in Dutch-specific language tasks, and also outperforms other existing Dutch BERT-based models in sentiment analysis. These results indicate that RobBERT is a powerful pre-trained model for fine-tuning on a large variety of Dutch language tasks. We publicly release this pre-trained model in the hope of supporting further downstream Dutch NLP applications.
{ "section_name": [ "Introduction", "Related Work", "Pre-training RobBERT", "Pre-training RobBERT ::: Data", "Pre-training RobBERT ::: Training", "Evaluation", "Evaluation ::: Sentiment Analysis", "Evaluation ::: Die/Dat Disambiguation", "Code", "Future Work", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "The advent of neural networks in natural language processing (NLP) has significantly improved state-of-the-art results within the field. While recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) initially dominated the field, recent models started incorporating attention mechanisms and then later dropped the recurrent part and just kept the attention mechanisms in so-called transformer models BIBREF0. This latter type of model caused a new revolution in NLP and led to popular language models like GPT-2 BIBREF1, BIBREF2 and ELMo BIBREF3. BERT BIBREF4 improved over previous transformer models and recurrent networks by allowing the system to learn from input text in a bidirectional way, rather than only from left-to-right or the other way around. This model was later re-implemented, critically evaluated and improved in the RoBERTa model BIBREF5.", "These large-scale transformer models provide the advantage of being able to solve NLP tasks by having a common, expensive pre-training phase, followed by a smaller fine-tuning phase. The pre-training happens in an unsupervised way by providing large corpora of text in the desired language. The second phase only needs a relatively small annotated data set for fine-tuning to outperform previous popular approaches in one of a large number of possible language tasks.", "While language models are usually trained on English data, some multilingual models also exist. These are usually trained on a large quantity of text in different languages. For example, Multilingual-BERT is trained on a collection of corpora in 104 different languages BIBREF4, and generalizes language components well across languages BIBREF6. However, models trained on data from one specific language usually improve the performance of multilingual models for this particular language BIBREF7, BIBREF8. Training a RoBERTa model BIBREF5 on a Dutch dataset thus has a lot of potential for increasing performance for many downstream Dutch NLP tasks. In this paper, we introduce RobBERT, a Dutch RoBERTa-based pre-trained language model, and critically test its performance using natural language tasks against other Dutch languages models." ], [ "Transformer models have been successfully used for a wide range of language tasks. Initially, transformers were introduced for use in machine translation, where they vastly improved state-of-the-art results for English to German in an efficient manner BIBREF0. This transformer model architecture resulted in a new paradigm in NLP with the migration from sequence-to-sequence recurrent neural networks to transformer-based models by removing the recurrent component and only keeping attention. This cornerstone was used for BERT, a transformer model that obtained state-of-the-art results for eleven natural language processing tasks, such as question answering and natural language inference BIBREF4. BERT is pre-trained with large corpora of text using two unsupervised tasks. The first task is word masking (also called the Cloze task BIBREF9 or masked language model (MLM)), where the model has to guess which word is masked in certain position in the text. The second task is next sentence prediction. 
This is done by predicting if two sentences are subsequent in the corpus, or if they are randomly sampled from the corpus. These tasks allowed the model to create internal representations about a language, which could thereafter be reused for different language tasks. This architecture has been shown to be a general language model that could be fine-tuned with little data in a relatively efficient way for a very distinct range of tasks and still outperform previous architectures BIBREF4.", "Transformer models are also capable of generating contextualized word embeddings. These contextualized embeddings were presented by BIBREF3 and addressed the well known issue with a word's meaning being defined by its context (e.g. “a stick” versus “let's stick to”). This lack of context is something that traditional word embeddings like word2vec BIBREF10 or GloVe BIBREF11 lack, whereas BERT automatically incorporates the context a word occurs in.", "Another advantage of transformer models is that attention allows them to better resolve coreferences between words BIBREF12. A typical example for the importance of coreference resolution is “The trophy doesn’t fit in the brown suitcase because it’s too big.”, where the word “it” would refer to the the suitcase instead of the trophy if the last word was changed to “small” BIBREF13. Being able to resolve these coreferences is for example important for translating to languages with gender, as suitcase and trophy have different genders in French.", "Although BERT has been shown to be a useful language model, it has also received some scrutiny on the training and pre-processing of the language model. As mentioned before, BERT uses next sentence prediction (NSP) as one of its two training tasks. In NSP, the model has to predict whether two sentences follow each other in the training text, or are just randomly selected from the corpora. The authors of RoBERTa BIBREF5 showed that while this task made the model achieve a better performance, it was not due to its intended reason, as it might merely predict relatedness rather than subsequent sentences. That BIBREF4 trained a better model when using NSP than without NSP is likely due to the model learning long-range dependencies in text from its inputs, which are longer than just the single sentence on itself. As such, the RoBERTa model uses only the MLM task, and uses multiple full sentences in every input. Other research improved the NSP task by instead making the model predict the correct order of two sentences, where the model thus has to predict whether the sentences occur in the given order in the corpus, or occur in flipped order BIBREF14.", "BIBREF4 also presented a multilingual model (mBERT) with the same architecture as BERT, but trained on Wikipedia corpora in 104 languages. Unfortunately, the quality of these multilingual embeddings is often considered worse than their monolingual counterparts. BIBREF15 illustrated this difference in quality for German and English models in a generative setting. The monolingual French CamemBERT model BIBREF7 also compared their model to mBERT, which performed poorer on all tasks. More recently, BIBREF8 also showed similar results for Dutch using their BERTje model, outperforming multilingual BERT in a wide range of tasks, such as sentiment analysis and part-of-speech tagging. Since this work is concurrent with ours, we compare our results with BERTje in this paper." 
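As an illustration of the two auxiliary objectives contrasted above, the sketch below builds binary-classification pairs from a sentence-ordered corpus: next-sentence prediction pairs a sentence with its true successor or a random sentence, while the order-prediction variant keeps the true pair but may swap it. Function names and sampling probabilities are illustrative assumptions, not taken from any BERT codebase.

```python
# Sketch of training-pair construction for the two objectives discussed above.
# `corpus` is a list of sentences in document order; label 1 marks a positive pair.
# The sketch assumes index i+1 stays within the same document.
import random

def make_nsp_pair(corpus, i, p_negative=0.5):
    if random.random() < p_negative:
        return (corpus[i], random.choice(corpus), 0)   # random second sentence
    return (corpus[i], corpus[i + 1], 1)               # true next sentence

def make_sentence_order_pair(corpus, i, p_swapped=0.5):
    a, b = corpus[i], corpus[i + 1]
    if random.random() < p_swapped:
        return (b, a, 0)                               # swapped order
    return (a, b, 1)                                   # original order
```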
], [ "This section describes the data and training regime we used to train our Dutch RoBERTa-based language model called RobBERT." ], [ "We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus BIBREF16. This Dutch corpus has 6.6 billion words, totalling 39 GB of text. It contains 126,064,722 lines of text, where each line can contain multiple sentences. Subsequent lines are however not related to each other, due to the shuffled nature of the OSCAR data set. For comparison, the French RoBERTa-based language model CamemBERT BIBREF7 has been trained on the French portion of OSCAR, which consists of 138 GB of scraped text.", "Our data differs in several ways from the data used to train BERTje, a BERT-based Dutch language model BIBREF8. Firstly, they trained the model on an assembly of multiple Dutch corpora totalling only 12 GB. Secondly, they used WordPiece as subword embeddings, since this is what the original BERT architecture uses. RobBERT on the other hand uses Byte Pair Encoding (BPE), which is also used by GPT-2 BIBREF2 and RoBERTa BIBREF5." ], [ "RobBERT shares its architecture with RoBERTa's base model, which itself is a replication and improvement over BERT BIBREF5. The architecture of our language model is thus equal to the original BERT model with 12 self-attention layers with 12 heads BIBREF4. One difference with the original BERT is due to the different pre-training task specified by RoBERTa, using only the MLM task and not the NSP task. The training thus only uses word masking, where the model has to predict which words were masked in certain positions of a given line of text. The training process uses the Adam optimizer BIBREF17 with polynomial decay of the learning rate $l_r=10^{-6}$ and a ramp-up period of 1000 iterations, with parameters $\\beta _1=0.9$ (a common default) and RoBERTa's default $\\beta _2=0.98$. Additionally, we also used a weight decay of 0.1 as well as a small dropout of 0.1 to help prevent the model from overfitting BIBREF18.", "We used a computing cluster in order to efficiently pre-train our model. More specifically, the pre-training was executed on a computing cluster with 20 nodes with 4 Nvidia Tesla P100 GPUs (16 GB VRAM each) and 2 nodes with 8 Nvidia V100 GPUs (having 32 GB VRAM each). This pre-training happened in fixed batches of 8192 sentences by rescaling each GPUs batch size depending on the number of GPUs available, in order to maximally utilize the cluster without blocking it entirely for other users. The model trained for two epochs, which is over 16k batches in total. With the large batch size of 8192, this equates to 0.5M updates for a traditional BERT model. At this point, the perplexity did not decrease any further." ], [ "We evaluated RobBERT in several different settings on multiple downstream tasks. First, we compare its performance with other BERT-models and state-of-the-art systems in sentiment analysis, to show its performance for classification tasks. Second, we compare its performance in a recent Dutch language task, namely the disambiguation of demonstrative pronouns, which allows us to additionally compare the zero-shot performance of our and other BERT models, i.e. using only the pre-trained model without any fine-tuning." ], [ "We replicated the high-level sentiment analysis task used to evaluate BERTje BIBREF8 to be able to compare our methods. 
This task uses a dataset called Dutch Book Reviews Dataset (DBRD), in which book reviews scraped from hebban.nl are labeled as positive or negative BIBREF19. Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative. The DBRD dataset is already split in a balanced 10% test and 90% train split, allowing us to easily compare to other models trained for solving this task. This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19.", "We fine-tuned RobBERT on the first 10,000 training examples as well as on the full data set. While the ULMFiT model is first fine-tuned using the unlabeled reviews before training the classifier BIBREF19, it is unclear whether BERTje was also first fine-tuned on the unlabeled reviews or only used the labeled data for fine-tuning the pretrained model. It is also unclear how it dealt with reviews being longer than the maximum number of tokens allowed as input in BERT models, as the average book review length is 547 tokens, with 40% of the documents being longer than our RobBERT model can handle. For a safe comparison, we thus decided to discard the unlabeled data and only use the labeled data for training and test purposes (20,028 and 2,224 examples respectively), and compare approaches for dealing with overly long input sequences. We trained our model for 2000 iterations with a batch size of 128 and a warm-up of 500 iterations, reaching a learning rate of $10^{-5}$. We found that our model performed better when trained on the last part of the book reviews than on the first part. This is likely due to this part containing concluding remarks summarizing the overall sentiment. While BERTje was slightly outperformed by ULMFiT BIBREF8, BIBREF19, we can see that RobBERT achieves better performance than both on the test set, although the difference with the ULMFiT model is not statistically significant, as can be seen in Table TABREF4.", "Aside from the classic natural language processing tasks in previous subsections, we also evaluated its performance on a task that is specific to Dutch, namely disambiguating “die” and “dat” (= “that” in English). In Dutch, depending on the sentence, both terms can be either demonstrative or relative pronouns; in addition, they can also be used as a subordinating conjunction, i.e. to introduce a clause. The use of either of these words depends on the gender of the word they refer to. Distinguishing these words is a task introduced by BIBREF20, who presented multiple models trained on the Europarl BIBREF21 and SoNaR corpora BIBREF22. The results ranged from an accuracy of 75.03% on Europarl to 84.56% on SoNaR.", "For this task, we use the Dutch version of the Europarl corpus BIBREF21, which we split into 1.3M utterances for training, 319k for validation, and 399k for testing. We then process every sentence by checking if it contains “die” or “dat”, and if so, add a training example for every occurrence of this word in the sentence, where a single occurrence is masked. For the test set, for example, this resulted in about 289k masked sentences. We then test two different approaches for solving this task on this dataset.
The first approach makes the BERT models use their MLM task to guess which word should be filled in at the masked spot, and checks whether the model has more confidence in “die” or in “dat” (by checking the first 2,048 guesses at most, as this seemed sufficiently large). This allows us to compare the zero-shot BERT models, i.e. without any fine-tuning after pre-training, for which the results can be seen in Table TABREF7. The second approach uses the same data, but creates two sentences by filling in the mask with both “die” and “dat”, appending both with the [SEP] token and making the model predict which of the two sentences is correct. The fine-tuning was performed using 4 Nvidia GTX 1080 Ti GPUs and evaluated against the same test set of 399k utterances. As before, we fine-tuned the model twice: once with the full training set and once with a subset of 10k utterances from the training set for illustrating the benefits of pre-training on low-resource tasks.", "RobBERT outperforms previous models as well as other BERT models both with and without fine-tuning (see Table TABREF4 and Table TABREF7). It is also able to reach similar performance using less data. The fact that zero-shot RobBERT outperforms other zero-shot BERT models is also an indication that the base model has internalised more knowledge about Dutch than the other two have. The reason RobBERT and other BERT models outperform the previous RNN-based approach is likely the transformers' ability to deal better with coreference resolution BIBREF12, and by extension to decide better which word the “die” or “dat” refers to." ], [ "The training and evaluation code of this paper as well as the RobBERT model and the fine-tuned models are publicly available for download on https://github.com/iPieter/RobBERT." ], [ "There are several possible improvements as well as interesting future directions for this research, for example in training similar models. First, as BERT-based models are a very active field of research, it is interesting to experiment with changing the pre-training tasks to new unsupervised tasks as they are discovered, such as the sentence order prediction BIBREF14. Second, while RobBERT is trained on lines that contain multiple sentences, it does not put subsequent lines of the corpus after each other due to the shuffled nature of the OSCAR corpus BIBREF16. This is unlike RoBERTa, which does put full sentences next to each other if they fit, in order to learn the long-range dependencies between words that the original BERT learned using its controversial NSP task. It could be interesting to use the processor used to create OSCAR in order to create an unshuffled version to train on, such that this technique can be used on the data set. Third, RobBERT uses the same tokenizer as RoBERTa, meaning it uses a tokenizer built for the English language. Training a new model using a custom Dutch tokenizer, e.g. using the newly released HuggingFace tokenizers library BIBREF23, could increase the performance even further. On the same note, incorporating more Unicode glyphs as separate tokens can also be beneficial, for example for tasks related to conversational agents BIBREF24.", "RobBERT itself could also be used in new settings to help future research. First, RobBERT could be used in different settings thanks to the renewed interest in sequence-to-sequence models due to their results on a vast range of language tasks BIBREF25, BIBREF26.
These models use a BERT-like transformer stack for the encoder and, depending on the task, a generative model as a decoder. These advances once again highlight the flexibility of the self-attention mechanism and it might be interesting to research the reusability of RobBERT in these types of architectures. Second, there are many Dutch language tasks that we did not examine in this paper, for which it may also be possible to achieve state-of-the-art results when fine-tuned on this pre-trained model." ], [ "We introduced a new language model for Dutch based on RoBERTa, called RobBERT, and showed that it outperforms earlier approaches for Dutch language tasks, as well as other BERT-based language models. We thus hope this model can serve as a base for fine-tuning on other tasks, and so help foster new models that might advance results for Dutch language tasks." ], [ "Pieter Delobelle was supported by the Research Foundation - Flanders under EOS No. 30992574 and received funding from the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme. Thomas Winters is a fellow of the Research Foundation-Flanders (FWO-Vlaanderen). Most computational resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government – department EWI. We are especially grateful to Luc De Raedt for his guidance as well as for providing the facilities to complete this project. We are thankful to Liesbeth Allein and her supervisors for inspiring us to use the die/dat task. We are also grateful to BIBREF27, BIBREF28, BIBREF29, BIBREF23 for their software packages." ] ] }
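To make the sentiment-analysis setup described in the Evaluation section above more concrete, the following is a minimal, hedged sketch of fine-tuning a RoBERTa-style Dutch model on labeled book reviews, keeping only the last tokens of overly long reviews (since the concluding remarks were observed to carry most of the sentiment). The checkpoint path, hyperparameters and data handling are illustrative assumptions, not the authors' exact training code; only the generic Hugging Face sequence-classification API is relied upon.

```python
# Hedged sketch of DBRD-style sentiment fine-tuning with a RoBERTa-type Dutch model.
# Model path and hyperparameters are illustrative assumptions, not the authors' setup.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "path/to/robbert-checkpoint"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

class ReviewDataset(Dataset):
    """Binary-labeled book reviews; long reviews keep only their *last* 512 tokens,
    mirroring the observation that concluding remarks carry most of the sentiment."""
    def __init__(self, texts, labels, max_len=512):
        self.texts, self.labels, self.max_len = texts, labels, max_len

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        ids = tokenizer(self.texts[idx], add_special_tokens=False)["input_ids"]
        ids = ids[-(self.max_len - 2):]                      # keep the tail of the review
        ids = [tokenizer.cls_token_id] + ids + [tokenizer.sep_token_id]
        attn = [1] * len(ids)
        pad = self.max_len - len(ids)
        ids += [tokenizer.pad_token_id] * pad
        attn += [0] * pad
        return {
            "input_ids": torch.tensor(ids),
            "attention_mask": torch.tensor(attn),
            "labels": torch.tensor(self.labels[idx]),
        }

def finetune(train_texts, train_labels, epochs=2, lr=1e-5, batch_size=16):
    loader = DataLoader(ReviewDataset(train_texts, train_labels),
                        batch_size=batch_size, shuffle=True)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            optim.zero_grad()
            loss = model(**batch).loss      # cross-entropy over the two sentiment labels
            loss.backward()
            optim.step()
```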
{ "question": [ "What data did they use?", "What is the state of the art?", "What language tasks did they experiment on?" ], "question_id": [ "589be705a5cc73a23f30decba23ce58ec39d313b", "6e962f1f23061f738f651177346b38fd440ff480", "594a6bf37eab64a16c6a05c365acc100e38fcff1" ], "nlp_background": [ "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "the Dutch section of the OSCAR corpus" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus BIBREF16. This Dutch corpus has 6.6 billion words, totalling 39 GB of text. It contains 126,064,722 lines of text, where each line can contain multiple sentences. Subsequent lines are however not related to each other, due to the shuffled nature of the OSCAR data set. For comparison, the French RoBERTa-based language model CamemBERT BIBREF7 has been trained on the French portion of OSCAR, which consists of 138 GB of scraped text." ], "highlighted_evidence": [ "We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus BIBREF16." ] } ], "annotation_id": [ "0880f455b6709a830625423ff58159d4337d789f" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BERTje BIBREF8", "an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19.", "mBERT" ], "yes_no": null, "free_form_answer": "", "evidence": [ "FLOAT SELECTED: Table 1: Results of RobBERT fine-tuned on several downstream tasks compared to the state of the art on the tasks. For accuracy, we also report the 95% confidence intervals. (Results annotated with * from van der Burgh and Verberne (2019), ** = from de Vries et al. (2019), *** from Allein et al. (2020))", "We replicated the high-level sentiment analysis task used to evaluate BERTje BIBREF8 to be able to compare our methods. This task uses a dataset called Dutch Book Reviews Dataset (DBRD), in which book reviews scraped from hebban.nl are labeled as positive or negative BIBREF19. Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative. The DBRD dataset is already split in a balanced 10% test and 90% train split, allowing us to easily compare to other models trained for solving this task. This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results of RobBERT fine-tuned on several downstream tasks compared to the state of the art on the tasks. For accuracy, we also report the 95% confidence intervals. (Results annotated with * from van der Burgh and Verberne (2019), ** = from de Vries et al. (2019), *** from Allein et al. (2020))", "This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19." 
] } ], "annotation_id": [ "1fb4caf31528823432cea6bbaf36e143717b0860" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "sentiment analysis", "the disambiguation of demonstrative pronouns," ], "yes_no": null, "free_form_answer": "", "evidence": [ "We evaluated RobBERT in several different settings on multiple downstream tasks. First, we compare its performance with other BERT-models and state-of-the-art systems in sentiment analysis, to show its performance for classification tasks. Second, we compare its performance in a recent Dutch language task, namely the disambiguation of demonstrative pronouns, which allows us to additionally compare the zero-shot performance of our and other BERT models, i.e. using only the pre-trained model without any fine-tuning." ], "highlighted_evidence": [ "First, we compare its performance with other BERT-models and state-of-the-art systems in sentiment analysis, to show its performance for classification tasks. ", "Second, we compare its performance in a recent Dutch language task, namely the disambiguation of demonstrative pronouns, which allows us to additionally compare the zero-shot performance of our and other BERT models, i.e. using only the pre-trained model without any fine-tuning." ] } ], "annotation_id": [ "040b04e1dbd49eb538211077d80dbddae9559ed0" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
{ "caption": [ "Table 1: Results of RobBERT fine-tuned on several downstream tasks compared to the state of the art on the tasks. For accuracy, we also report the 95% confidence intervals. (Results annotated with * from van der Burgh and Verberne (2019), ** = from de Vries et al. (2019), *** from Allein et al. (2020))", "Table 2: Performance of predicting die/dat as most likely candidate for a mask using zero-shot BERTmodels (i.e. without fine-tuning) as well as a majority class predictor (ZeroR), tested on the 288,799 test set sentences" ], "file": [ "4-Table1-1.png", "5-Table2-1.png" ] }
1910.02789
Natural Language State Representation for Reinforcement Learning
Recent advances in Reinforcement Learning have highlighted the difficulties in learning within complex, high-dimensional domains. We argue that one of the main reasons that current approaches do not perform well is that the information is represented sub-optimally. A natural way to describe what we observe is through natural language. In this paper, we implement a natural language state representation to learn and complete tasks. Our experiments suggest that natural-language-based agents are more robust, converge faster and perform better than vision-based agents, showing the benefit of using natural language representations for Reinforcement Learning.
{ "section_name": [ "Introduction", "Preliminaries ::: Reinforcement Learning", "Preliminaries ::: Deep Learning for NLP", "Semantic Representation Methods", "Semantic State Representations in the Doom Environment", "Semantic State Representations in the Doom Environment ::: Experiments", "Related Work", "Discussion and Future Work", "Appendix ::: VizDoom", "Appendix ::: Natural language State Space", "Appendix ::: Language model implementation", "Appendix ::: Model implementation" ], "paragraphs": [ [ "“The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations.\"", "(Edward Sapir, Language: An Introduction to the Study of Speech, 1921)", "Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the “curse of dimensionality\".", "The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5.", "The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation.", "Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. 
Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state.", "In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work." ], [ "In Reinforcement Learning the goal is to learn a policy $\\pi (s)$, which is a mapping from state $s$ to a probability distribution over actions $\\mathcal {A}$, with the objective of maximizing a reward $r(s)$ that is provided by the environment. This is often solved by formulating the problem as a Markov Decision Process (MDP) BIBREF19. Two common quantities used to estimate the performance in MDPs are the value $v (s)$ and action-value $Q (s, a)$ functions, which are defined as follows: ${v(s) = \\mathbb {E}^{\\pi } [\\sum _t \\gamma ^t r_t | s_0 = s ]}$ and ${Q(s, a) = \\mathbb {E}^{\\pi } [\\sum _t \\gamma ^t r_t | s_0 = s, a_0 = a ]}$. Two prominent algorithms for solving RL tasks, which we use in this paper, are the value-based DQN BIBREF2 and the policy-based PPO BIBREF3.", "Deep Q Networks (DQN): The DQN algorithm is an extension of the classical Q-learning approach to the deep learning regime. Q-learning learns the optimal policy by directly learning the value function, i.e., the action-value function. A neural network is used to estimate the $Q$-values and is trained to minimize the Bellman error, namely ${L(\\theta ) = \\mathbb {E} [ ( r + \\gamma \\max _{a^{\\prime }} Q(s^{\\prime }, a^{\\prime }; \\theta ^-) - Q(s, a; \\theta ) )^2 ]}$, where $\\theta ^-$ are the parameters of a periodically updated target network.", "Proximal Policy Optimization (PPO): While the DQN learns the optimal behavioral policy using a dynamic programming approach, PPO takes a different route. PPO builds upon the policy gradient theorem, which optimizes the policy directly, with the addition of a trust-region update rule. The policy gradient theorem updates the policy by ${\\nabla _{\\theta } J(\\theta ) = \\mathbb {E}^{\\pi _{\\theta }} [ \\nabla _{\\theta } \\log \\pi _{\\theta }(a | s) A^{\\pi _{\\theta }}(s, a) ]}$, where $A^{\\pi _{\\theta }}(s, a)$ is an estimate of the advantage function." ], [ "A word embedding is a mapping from a word $w$ to a vector $\\mathbf {w} \\in \\mathbb {R}^d$. A simple form of word embedding is the Bag of Words (BoW), a vector $\\mathbf {w} \\in \\mathbb {N}^{|D|}$ ($|D|$ is the dictionary size), in which each word receives a unique 1-hot vector representation. Recently, more efficient methods have been proposed, in which the embedding vector is smaller than the dictionary size, $d \\ll |D|$. These methods are also known as distributional embeddings.", "The distributional hypothesis in linguistics is derived from the semantic theory of language usage (i.e. words that are used and occur in the same contexts tend to have similar meanings). Distributional word representations are a fundamental building block for representing natural language sentences.
Word embeddings such as Word2vec BIBREF20 and GloVe BIBREF21 build upon the distributional hypothesis, improving the efficiency of state-of-the-art language models.", "Convolutional Neural Networks (CNNs), originally invented for computer vision, have been shown to achieve strong performance on text classification tasks BIBREF22, BIBREF23, as well as other traditional NLP tasks BIBREF24. In this paper we consider a common architecture BIBREF25, in which each word in a sentence is represented as an embedding vector and a single convolutional layer with $m$ filters is applied, producing an $m$-dimensional vector for each $n$-gram. The vectors are combined using max-pooling followed by a ReLU activation. The result is then passed through multiple hidden linear layers with ReLU activation, eventually generating the final output." ], [ "Contemporary methods for semantic representation of states currently follow one of three approaches: (1) raw visual inputs BIBREF2, BIBREF26, in which raw sensory values of pixels are used from one or multiple sources, (2) feature vectors BIBREF27, BIBREF28, in which general features of the problem are chosen, with no specific structure, and (3) semantic segmentation maps BIBREF29, BIBREF30, in which discrete or logical values are used in one or many channels to represent the general features of the state.", "The common approach is to derive decisions (e.g., classification, action, etc.) based on information in its raw form. In RL, the raw form is often the pixels representing an image – however the image is only one form of a semantic representation. In Semantic Segmentation, the image is converted from a 3-channel (RGB) matrix into an $N$-channel matrix, where $N$ is the number of classes. In this case, each channel represents a class, and a binary value at each coordinate denotes whether or not this class is present in the image at this location. For instance, fig: semantic segmentation example considers an autonomous vehicle task. The raw image and segmentation maps are both sufficient for the task (i.e., both contain a sufficient semantic representation). Nevertheless, the semantic segmentation maps contain fewer task-nuisances BIBREF17, which are random variables that affect the observed data, but are not informative to the task we are trying to solve.", "In this paper we propose a fourth method for representing a state, namely using natural language descriptions. One method to achieve such a representation is through Image Captioning BIBREF31, BIBREF32. Natural language is both rich and flexible. This flexibility enables the algorithm designer to represent the information present in the state as efficiently and compactly as possible. As an example, the top image in fig: semantic segmentation example can be represented using natural language as “There is a car in your lane two meters in front of you, a bicycle rider on your far left in the negative lane, a car in your direction in the opposite lane which is twenty meters away, and trees and pedestrians on the sidewalk.” or compactly by “There is a car two meters in front of you, a pedestrian on the sidewalk to your right, and a car inbound in the negative lane which is far away.”. Language also allows us to efficiently compress information. As an example, the segmentation map in the bottom image of fig: semantic segmentation example can be compactly described by “There are 13 pedestrians crossing the road in front of you”.
In the next section we will demonstrate the benefits of using natural-language-based semantic state representation in a first-person shooter environment." ], [ "In this section we compare the different types of semantic representations for representing states in the ViZDoom environment BIBREF26, as described in the previous section. More specifically, we use a semantic natural language parser in order to describe a state, over numerous instances of levels varying in difficulty, task-nuisances, and objectives. Our results show that, though semantic segmentation and feature vector representation techniques express similar statistics of the state, natural language representation offers better performance, faster convergence, more robust solutions, as well as better transfer.", "The ViZDoom environment involves a 3D world that is significantly more real-world-like than Atari 2600 games, with a relatively realistic physics model. An agent in the ViZDoom environment must effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions about where to go and how to act. There are three types of state representations that are provided by the environment. The first, which is also most commonly used, is raw visual inputs, in which the state is represented by an image from a first-person view of the agent. A feature vector representation is an additional state representation provided by the environment. The feature vector representation includes positions as well as labels of all objects and creatures in the vicinity of the agent. Lastly, the environment provides a semantic segmentation map based on the aforementioned feature vector. An example of the visual representations in VizDoom is shown in fig: representations in vizdoom.", "In order to incorporate a natural language representation into the VizDoom environment, we constructed a semantic parser of the semantic segmentation maps provided by the environment. Each state of the environment was converted into a natural language sentence based on positions and labels of objects in the frame. To implement this, the screen was divided into several vertical and horizontal patches, as depicted in fig: patches. These patches describe relational aspects of the state, such as distance of objects and their direction with respect to the agent's point of view. In each patch, objects were counted, and a natural language description of the patch was constructed. This technique was repeated for all patches to form the final state representation. fig: nlp state rep depicts examples of natural language sentences of different states in the environment." ], [ "We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent.", "More specifically, in the basic scenario, a single monster is spawned in front of the agent.
The purpose of this scenario is to teach the agent to aim at the enemy and shoot at it. In the health gathering scenario, the floor of the room is covered in toxin, causing the agent to gradually lose health. Medipacks are spawned randomly in the room and the agent's objective is to keep itself alive by collecting them. In the take cover scenario, multiple fireball-shooting monsters are spawned in front of the agent. The goal of the agent is to stay alive as long as possible, dodging inbound fireballs. The difficulty of the task increases over time, as additional monsters are spawned. In the defend the center scenario, melee-attacking monsters are randomly spawned in the room, and charge towards the agent. As opposed to other scenarios, the agent is incapable of moving, aside from turning left and right and shooting. In the defend the line scenario, both melee and fireball-shooting monsters are spawned near the opposing wall. The agent can only step right or left, or shoot. Finally, in the “super\" scenario both melee and fireball-shooting monsters are repeatedly spawned all over the room. The room contains various items the agent can pick up and use, such as medipacks, shotguns, ammunition and armor. Furthermore, the room is filled with unusable objects, various types of trees, pillars and other decorations. The agent can freely move and turn in any direction, as well as shoot. This scenario combines elements from all of the previous scenarios.", "Our agent was implemented using a Convolutional Neural Network as described in Section SECREF4. We converted the parsed state into embedded representations of fixed length. We tested both a DQN and a PPO based agent, and compared the natural language representation to the other representation techniques, namely the raw image, feature vector, and semantic segmentation representations.", "In order to effectively compare the performance of the different representation methods, we conducted our experiments under similar conditions for all agents. The same hyper-parameters were used under all tested representations. Moreover, to rule out effects of architectural expressiveness, the number of weights in all neural networks was approximately matched, regardless of the input type. Finally, we ensured the “super\" scenario was positively biased toward image-based representations. This was done by adding a large amount of items to the game level, thereby filling the state with nuisances (these tests are denoted by `nuisance' in the scenario name). This was especially evident in the NLP representations, as sentences became considerably longer (average of over 250 words). This is in contrast to image-based representations, which did not change in dimension.", "Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. This is contrary to the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations render inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations.
There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise.", "In order to verify that the performance of the natural language representation was not due to extensive discretization of patches, we conducted experiments increasing the number of horizontal patches, ranging from 3 to 31 patches in the extreme case. Our results, as depicted in fig: patch count, indicate that the degree of patch discretization did not affect the performance of the NLP agent, which remained a superior representation compared to the rest.", "To conclude, our experiments suggest that NLP representations, though they describe the same raw information as the semantic segmentation maps, are more robust to task-nuisances, allow for better transfer, and achieve higher performance in complex tasks, even when their description is long and convoluted. While we have only presented results for DQN agents, we include plots for a PPO agent in the Appendix, showing similar trends and conclusions. We thus deduce that NLP-based semantic state representations are a preferable choice for training VizDoom agents." ], [ "Work on representation learning is concerned with finding an appropriate representation of data in order to perform a machine learning task BIBREF33. In particular, deep learning exploits this concept by its very nature BIBREF2. Work on representation learning includes Predictive State Representations (PSR) BIBREF34, BIBREF35, which capture the state as a vector of predictions of future outcomes, and a Heuristic Embedding of Markov Processes (HEMP) BIBREF36, which learns to embed transition probabilities using an energy-based optimization problem.", "There has been extensive work attempting to use natural language in RL. Efforts that integrate language in RL develop tools, approaches, and insights that are valuable for improving the generalization and sample efficiency of learning agents. Previous work on language-conditioned RL has considered the use of natural language in the observation and action space. Environments such as Zork and TextWorld BIBREF37 have been the standard benchmarks for testing text-based games. Nevertheless, these environments do not search for semantic state representations, in which an RL algorithm can be better evaluated and controlled.", "BIBREF38 use high-level semantic abstractions of documents in a representation to facilitate relational learning using Inductive Logic Programming and a generative language model. BIBREF39 use high-level guidance expressed in text to enrich a stochastic agent, playing against the built-in AI of Civilization II. They train an agent with the Monte-Carlo search framework in order to jointly learn to identify text that is relevant to a given game state as well as game strategies based only on environment feedback. BIBREF40 utilize natural language in a model-based approach to describe the dynamics and rewards of an environment, showing these can facilitate transfer between different domains.", "More recently, the structure and compositionality of natural language have been used for representing policies in hierarchical RL. In a paper by BIBREF41, instructions given in natural language were used in order to break down complex problems into high-level plans and lower-level actions.
Their suggested framework leverages the structure inherent to natural language, allowing for transfer to unfamiliar tasks and situations. This use of semantic structure has also been leveraged by BIBREF42, where abstract actions (not necessarily words) were recognized as symbols of a natural and expressive language, improving performance and transfer of RL agents.", "Outside the context of RL, previous work has also shown that high-quality linguistic representations can assist in cross-modal transfer, such as using semantic relationships between labels for zero-shot transfer in image classification BIBREF43, BIBREF44." ], [ "Our results indicate that natural language can outperform, and sometimes even replace, vision-based representations. Nevertheless, natural language representations can also have disadvantages in various scenarios. First, they require the designer to be able to describe the state exactly, whether by a rule-based or learned parser. Second, they abstract away notions of the state space that the designer may not realize are necessary for solving the problem. As such, semantic representations should be carefully chosen, similar to the process of reward shaping or choosing a training algorithm. Here, we enumerate three instances in which we believe natural language representations are beneficial:", "Natural use-case: Information contained in both generic and task-specific textual corpora may be highly valuable for decision making. This case assumes the state can either be easily described using natural language or is already in a natural language state. This includes examples such as user-based domains, in which user profiles and comments are part of the state, or the stock market, in which stocks are described by analysts and other readily available text. 3D physical environments such as VizDoom also fall into this category, as semantic segmentation maps can be easily described using natural language.", "Subjective information: Subjectivity refers to aspects used to express opinions, evaluations, and speculations. These may include strategies for a game, the way a doctor feels about her patient, the mood of a driver, and more.", "Unstructured information: In these cases, features might be measured by different units, with an arbitrary position in the state's feature vector, rendering them sensitive to permutations. Such state representations are thus hard to process using neural networks. As an example, the medical domain may contain numerous features describing the vitals of a patient. These raw features, when observed by an expert, can be efficiently described using natural language. Moreover, they allow an expert to efficiently add subjective information.", "An orthogonal line of research considers automating the process of image annotation. The noise added from the supervised or unsupervised process poses a great challenge for natural language representation. We suspect the noise accumulated by this procedure would require additional information to be added to the state (e.g., past information). Nevertheless, as we have shown in this paper, such information can be compressed using natural language. In addition, while we have only considered spatial features of the state, information such as movement directions and transient features can be efficiently encoded as well.", "Natural language representations help abstract information and interpret the state of an agent, improving its overall performance.
Nevertheless, it is imperative to choose a representation that best fits the domain at hand. Designers of RL algorithms should consider searching for a semantic representation that fits their needs. While this work only takes a first step toward finding better semantic state representations, we believe the structure inherent in natural language can be considered a favorable candidate for achieving this goal." ], [ "VizDoom is a \"Doom\"-based research environment that was developed at the Poznań University of Technology. It is based on the \"ZDoom\" game executable, and includes a Python-based API. The API offers the user the ability to run game instances, query the game state, and execute actions. The original purpose of VizDoom is to provide a research platform for vision-based reinforcement learning. Thus, a natural language representation for the game needed to be implemented. ViZDoom emulates the \"Doom\" game and enables us to access data within a certain frame using Python dictionaries. This makes it possible to extract valuable data including player health, ammo, enemy locations, etc. Each game frame contains \"labels\", which contain data on visible objects in the game (the player, enemies, medkits, etc). We used \"Doom Builder\" in order to edit some of the scenarios and design a new one. Environment rewards are presented in doom-scenarios-table." ], [ "A semantic representation using natural language should contain information which can be deduced by a human playing the game. For example, even though a human does not know the exact distance between objects, they can classify them as \"close\" or \"far\". However, objects that are outside the player's field of vision cannot be a part of the state. Furthermore, a human would most likely refer to an object's location relative to itself, using directions such as \"right\" or \"left\"." ], [ "To convert each frame to a natural language representation state, the list of available labels is iterated, and a string is built accordingly. The main idea of our implementation is to divide the screen into multiple vertical patches, count the number of different objects inside by their types, and parse it as a sentence. The decision as to whether an object is close or far can be determined by calculating the distance from it to the player, and using two threshold levels. Object descriptions can be concise or detailed, as needed. We experimented with the following mechanics:", "the screen can be divided into patches equally, or by predetermined ratios. Here, our main guideline was to keep the \"front\" patch narrow enough so it can be used as \"sights\".", "our initial experiment was with 3 patches, and later we added 2 more patches classified as \"outer left\" and \"outer right\". In our experiments we have tested up to 51 patches, referred to as left or right patch with corresponding numbers.", "we used 2 thresholds, which allowed us to classify the distance of an object from the player as \"close\", \"mid\", and \"far\". Depending on the task, the values of the thresholds can be changed, and more thresholds can be added.", "different states might generate sentences of different sizes. A maximum sentence length is another parameter that was tested. sentences-length-table presents some data regarding the average word count in some of the game scenarios.", "After the sentence describing the state is generated, it is transformed into an embedding vector. Words that were not found in the vocabulary were replaced with an “OOV\" vector.
All words were then concatenated into an NxDx1 matrix, representing the state. We experimented with both Word2Vec and GloVe pretrained embedding vectors. Eventually, we used the latter, as it consumes less memory and speeds up the training process. The length of the state sentence is one of the hyperparameters of the agents; shorter sentences are zero-padded, while longer ones are trimmed." ], [ "All of our models were implemented using PyTorch. The DQN agents used a single network that outputs the Q-Values of the available actions. The PPO agents used an Actor-Critic model with two networks; the first outputs the policy distribution for the input state, and the second network outputs its value. As mentioned earlier, we used three common neural network architectures:", "Used for the raw image and semantic segmentation based agents. VizDoom's raw output is a 640X480X3 RGB image. We experimented with both the original image and its down-sampled version. The semantic segmentation image was of resolution 640X480X1, where the pixel value represents the object's class, generated using the VizDoom label API. The network consisted of two convolutional layers, two hidden linear layers and an output layer. The first convolutional layer has 8 6X6 filters with stride 3 and ReLU activation. The second convolutional layer has 16 3X3 filters with stride 2 and ReLU activation. The fully connected layers have 32 and 16 units, both of them followed by ReLU activation. The output layer's size is the number of actions the agent has available in the trained scenario.", "Used in the feature vector based agent. Naturally, some discretization is needed in order to build a feature vector, so some of the state data is lost. The feature vector was made using features we extracted from the VizDoom API, and its dimension was 90 X 1. The network is made up of two fully connected layers, each of them followed by a ReLU activation. The first layer has 32 units, and the second one has 16 units. The output layer's size was the number of actions available to the agent.", "Used in the natural language based agent. As previously mentioned, each word in the natural language state is embedded, and the full state sentence is transformed into a 200X50X1 matrix. The first layers of the TextCNN are convolutional layers with 8 filters, which are designed to scan the input sentence and return convolution outputs of sequences of varying lengths. The filters vary in width, such that each of them learns to identify different lengths of sequences in words. Longer filters have a higher capability of extracting features from longer word sequences. The filters we have chosen have the following dimensions: 3X50X1, 4X50X1, 5X50X1, 8X50X1, 11X50X1. Following the convolution layer there is a ReLU activation and a max pool layer. Finally, there are two fully connected layers; the first layer has 32 units, and the second one has 16 units. Both of them are followed by ReLU activation.", "All architectures have the same output, regardless of the input type. The DQN network is a regression network, with its output size the number of available actions. The PPO agent has 2 networks: actor and critic. The actor network has a Softmax activation with size equal to the number of available actions. The critic network is a regression model with a single output representing the state's value. Reward plots for the PPO agent can be found in Figure FIGREF47." ] ] }
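The TextCNN described in the model-implementation appendix above (a 200-word by 50-dimensional embedded sentence, parallel convolutions of widths 3, 4, 5, 8 and 11, max-pooling, and fully connected layers of 32 and 16 units) can be reconstructed roughly as the following PyTorch sketch. This is not the authors' released code; the number of output actions and other minor details are assumptions.

```python
# Hedged PyTorch reconstruction of the TextCNN described above: an embedded sentence of
# shape (200 words x 50 dims) is scanned by parallel convolutions of widths 3, 4, 5, 8, 11,
# max-pooled over word positions, and passed through fully connected layers of 32 and 16 units.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, sentence_len=200, embed_dim=50, n_filters=8,
                 filter_widths=(3, 4, 5, 8, 11), n_actions=4):  # n_actions is an assumption
        super().__init__()
        # Each Conv2d slides over word positions with a kernel spanning the full embedding.
        self.convs = nn.ModuleList(
            nn.Conv2d(1, n_filters, kernel_size=(w, embed_dim)) for w in filter_widths
        )
        self.fc1 = nn.Linear(n_filters * len(filter_widths), 32)
        self.fc2 = nn.Linear(32, 16)
        self.out = nn.Linear(16, n_actions)   # Q-values (DQN) or action logits (PPO actor)

    def forward(self, x):
        # x: (batch, sentence_len, embed_dim) -> add a channel dimension for Conv2d
        x = x.unsqueeze(1)
        pooled = []
        for conv in self.convs:
            h = torch.relu(conv(x)).squeeze(3)          # (batch, n_filters, positions)
            pooled.append(torch.max(h, dim=2).values)   # max over word positions
        h = torch.cat(pooled, dim=1)
        h = torch.relu(self.fc1(h))
        h = torch.relu(self.fc2(h))
        return self.out(h)

# Example: a batch containing one embedded state sentence.
# q_values = TextCNN()(torch.randn(1, 200, 50))
```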
{ "question": [ "What result from experiments suggest that natural language based agents are more robust?", "How better is performance of natural language based agents in experiments?", "How much faster natural language agents converge in performed experiments?", "What experiments authors perform?", "How is state to learn and complete tasks represented via natural language?" ], "question_id": [ "d79d897f94e666d5a6fcda3b0c7e807c8fad109e", "599d9ca21bbe2dbe95b08cf44dfc7537bde06f98", "827464c79f33e69959de619958ade2df6f65fdee", "8e857e44e4233193c7b2d538e520d37be3ae1552", "084fb7c80a24b341093d4bf968120e3aff56f693" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "computer vision", "computer vision", "computer vision", "computer vision", "computer vision" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Average reward across 5 seeds show that NLP representations are robust to changes in the environment as well task-nuisances", "evidence": [ "Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. This is contrary to the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations render inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise." ], "highlighted_evidence": [ "Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. ", "NLP representations remain robust to changes in the environment as well as task-nuisances in the state. 
" ] } ], "annotation_id": [ "040faf49fbe5c02af982b966eec96f2efaef2243" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1247a16fee4fd801faca9eb81331034412d89054" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "ce47dbd8c234f9ef99f4c96c5e2e0271910589eb" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent." ], "highlighted_evidence": [ "We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty.", "The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent." ] } ], "annotation_id": [ "fc219faad4cbdc4a0d17a5c4e30b187b5b08fd05" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " represent the state using natural language" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5." ], "highlighted_evidence": [ ". 
In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5." ] } ], "annotation_id": [ "47aee4bb630643e14ceaa348b2fd1762fd4d43b1" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Figure 1: Example of Semantic Segmentation [Kundu et al., 2016].", "Figure 2: Left: Raw visual inputs and their corresponding semantic segmentation in the VizDoom enviornment. Right: Our suggested NLP-based semantic state representation framework.", "Figure 3: Frame division used for describing the state in natural language.", "Figure 4: Natural language state representation for a simple state (top) and complex state (bottom). The corresponding embedded representations and shown on the right.", "Figure 5: Comparison of representation methods on the different VizDoom scenarios using a DQN agent. X and Y axes represent the number of iterations and cumulative reward, respectively. Last three graphs (bottom) depict nuisance-augmented scenarios.", "Figure 6: Robustness of each representation type with respect to amount of nuisance.", "Figure 7: Average rewards of NLP based agent as a function of the number of patches in the language model.", "Figure 8: PPO - state representation and their average rewards, various degrees of nuisance", "Table 1: statistics of words per state as function of patches.", "Table 2: Doom scenarios" ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "5-Figure4-1.png", "6-Figure5-1.png", "7-Figure6-1.png", "7-Figure7-1.png", "13-Figure8-1.png", "14-Table1-1.png", "14-Table2-1.png" ] }
1902.00672
Query-oriented text summarization based on hypergraph transversals
Existing graph- and hypergraph-based algorithms for document summarization represent the sentences of a corpus as the nodes of a graph or a hypergraph in which the edges represent relationships of lexical similarities between sentences. Each sentence of the corpus is then scored individually, using popular node ranking algorithms, and a summary is produced by extracting highly scored sentences. This approach fails to select a subset of jointly relevant sentences and it may produce redundant summaries that are missing important topics of the corpus. To alleviate this issue, a new hypergraph-based summarizer is proposed in this paper, in which each node is a sentence and each hyperedge is a theme, namely a group of sentences sharing a topic. Themes are weighted in terms of their prominence in the corpus and their relevance to a user-defined query. It is further shown that the problem of identifying a subset of sentences covering the relevant themes of the corpus is equivalent to that of finding a hypergraph transversal in our theme-based hypergraph. Two extensions of the notion of hypergraph transversal are proposed for the purpose of summarization, and polynomial time algorithms building on the theory of submodular functions are proposed for solving the associated discrete optimization problems. The worst-case time complexity of the proposed algorithms is squared in the number of terms, which makes it cheaper than the existing hypergraph-based methods. A thorough comparative analysis with related models on DUC benchmark datasets demonstrates the effectiveness of our approach, which outperforms existing graph- or hypergraph-based methods by at least 6% of ROUGE-SU4 score.
{ "section_name": [ "Introduction", "Background and related work", "Problem statement and system overview", "Summarization based on hypergraph transversals", "Preprocessing and similarity computation", "Sentence theme detection based on topic tagging", "Sentence hypergraph construction", "Detection of hypergraph transversals for text summarization", "Complexity analysis", "Experiments and evaluation", "Dataset and metrics for evaluation", "Parameter tuning", "Testing the TC-TranSum algorithm", "Testing the hypergraph structure", "Comparison with related systems", "Comparison with DUC systems", "Conclusion" ], "paragraphs": [ [ "The development of automatic tools for the summarization of large corpora of documents has attracted a widespread interest in recent years. With fields of application ranging from medical sciences to finance and legal science, these summarization systems considerably reduce the time required for knowledge acquisition and decision making, by identifying and formatting the relevant information from a collection of documents. Since most applications involve large corpora rather than single documents, summarization systems developed recently are meant to produce summaries of multiple documents. Similarly, the interest has shifted from generic towards query-oriented summarization, in which a query expresses the user's needs. Moreover, existing summarizers are generally extractive, namely they produce summaries by extracting relevant sentences from the original corpus.", "Among the existing extractive approaches for text summarization, graph-based methods are considered very effective due to their ability to capture the global patterns of connection between the sentences of the corpus. These systems generally define a graph in which the nodes are the sentences and the edges denote relationships of lexical similarities between the sentences. The sentences are then scored using graph ranking algorithms such as the PageRank BIBREF0 or HITS BIBREF1 algorithms, which can also be adapted for the purpose of query-oriented summarization BIBREF2 . A key step of graph-based summarizers is the way the graph is constructed, since it has a strong impact on the sentence scores. As pointed out in BIBREF3 , a critical issue of traditional graph-based summarizers is their inability to capture group relationships among sentences since each edge of a graph only connects a pair of nodes.", "Following the idea that each topic of a corpus connects a group of multiple sentences covering that topic, hypergraph models were proposed in BIBREF3 and BIBREF4 , in which the hyperedges represent similarity relationships among groups of sentences. These group relationships are formed by detecting clusters of lexically similar sentences we refer to as themes or theme-based hyperedges. Each theme is believed to cover a specific topic of the corpus. However, since the models of BIBREF3 and BIBREF4 define the themes as groups of lexically similar sentences, the underlying topics are not explicitly discovered. Moreover, their themes do not overlap which contradicts the fact that each sentence carries multiple information and may thus belong to multiple themes, as can be seen from the following example of sentence.", "Two topics are covered by the sentence above: the topics of studies and leisure. 
Hence, the sentence should belong to multiple themes simultaneously, which is not allowed in existing hypergraph models of BIBREF3 and BIBREF4 .", "The hypergraph model proposed in this paper alleviates these issues by first extracting topics, i.e. groups of semantically related terms, using a new topic model referred to as SEMCOT. Then, a theme is associated to each topic, such that each theme is defined a the group of sentences covering the associated topic. Finally, a hypergraph is formed with sentences as nodes, themes as hyperedges and hyperedge weights reflecting the prominence of each theme and its relevance to the query. In such a way, our model alleviates the weaknesses of existing hypergraph models since each theme-based hyperedge is associated to a specific topic and each sentence may belong to multiple themes.", "Furthermore, a common drawback of existing graph- and hypergraph-based summarizers is that they select sentences based on the computation of an individual relevance score for each sentence. This approach fails to capture the information jointly carried by the sentences which results in redundant summaries missing important topics of the corpus. To alleviate this issue, we propose a new approach of sentence selection using our theme-based hypergraph. A minimal hypergraph transversal is the smallest subset of nodes covering all hyperedges of a hypergraph BIBREF5 . The concept of hypergraph transversal is used in computational biology BIBREF6 and data mining BIBREF5 for identifying a subset of relevant agents in a hypergraph. In the context of our theme-based hypergraph, a hypergraph transversal can be viewed as the smallest subset of sentences covering all themes of the corpus. We extend the notion of transversal to take the theme weights into account and we propose two extensions called minimal soft hypergraph transversal and maximal budgeted hypergraph transversal. The former corresponds to finding a subset of sentences of minimal aggregated length and achieving a target coverage of the topics of the corpus (in a sense that will be clarified). The latter seeks a subset of sentences maximizing the total weight of covered hyperedges while not exceeding a target summary length. As the associated discrete optimization problems are NP-hard, we propose two approximation algorithms building on the theory of submodular functions. Our transversal-based approach for sentence selection alleviates the drawback of methods of individual sentence scoring, since it selects a set of sentences that are jointly covering a maximal number of relevant themes and produces informative and non-redundant summaries. As demonstrated in the paper, the time complexity of the method is equivalent to that of early graph-based summarization systems such as LexRank BIBREF0 , which makes it more efficient than existing hypergraph-based summarizers BIBREF3 , BIBREF4 . The scalability of summarization algorithms is essential, especially in applications involving large corpora such as the summarization of news reports BIBREF7 or the summarization of legal texts BIBREF8 .", "The method of BIBREF9 proposes to select sentences by using a maximum coverage approach, which shares some similarities with our model. However, they attempt to select a subset of sentences maximizing the number of relevant terms covered by the sentences. 
Hence, they fail to capture the topical relationships among sentences, which are, in contrast, included in our theme-based hypergraph.", "A thorough comparative analysis with state-of-the-art summarization systems is included in the paper. Our model is shown to outperform other models on a benchmark dataset produced by the Document Understanding Conference. The main contributions of this paper are (1) a new topic model extracting groups of semantically related terms based on patterns of term co-occurrences, (2) a natural hypergraph model representing nodes as sentences and each hyperedge as a theme, namely a group of sentences sharing a topic, and (3) a new sentence selection approach based on hypergraph transversals for the extraction of a subset of jointly relevant sentences.", "The structure of the paper is as follows. In section \"Background and related work\" , we present work related to our method. In section \"Problem statement and system overview\" , we present an overview of our system which is described in further details in section \"Summarization based on hypergraph transversals\" . Then, in section \"Experiments and evaluation\" , we present experimental results. Finally, section \"Conclusion\" presents a discussion and concluding remarks." ], [ "While early models focused on the task of single document summarization, recent systems generally produce summaries of corpora of documents BIBREF10 . Similarly, the focus has shifted from generic summarization to the more realistic task of query-oriented summarization, in which a summary is produced with the essential information contained in a corpus that is also relevant to a user-defined query BIBREF11 .", "Summarization systems are further divided into two classes, namely abstractive and extractive models. Extractive summarizers identify relevant sentences in the original corpus and produce summaries by aggregating these sentences BIBREF10 . In contrast, an abstractive summarizer identifies conceptual information in the corpus and reformulates a summary from scratch BIBREF11 . Since abstractive approaches require advanced natural language processing, the majority of existing summarization systems consist of extractive models.", "Extractive summarizers differ in the method used to identify relevant sentences, which leads to a classification of models as either feature-based or graph-based approaches. Feature-based methods represent the sentences with a set of predefined features such as the sentence position, the sentence length or the presence of cue phrases BIBREF12 . Then, they train a model to compute relevance scores for the sentences based on their features. Since feature-based approaches generally require datasets with labelled sentences which are hard to produce BIBREF11 , unsupervised graph-based methods have attracted growing interest in recent years.", "Graph-based summarizers represent the sentences of a corpus as the nodes of a graph with the edges modelling relationships of similarity between the sentences BIBREF0 . Then, graph-based algorithms are applied to identify relevant sentences. The models generally differ in the type of relationship captured by the graph or in the sentence selection approach. Most graph-based models define the edges connecting sentences based on the co-occurrence of terms in pairs of sentences BIBREF0 , BIBREF2 , BIBREF3 . Then, important sentences are identified either based on node ranking algorithms, or using a global optimization approach. 
Methods based on node ranking compute individual relevance scores for the sentences and build summaries with highly scored sentences. The earliest such summarizer, LexRank BIBREF0 , applies the PageRank algorithm to compute sentence scores. Introducing a query bias in the node ranking algorithm, this method can be adapted for query-oriented summarization as in BIBREF2 . A different graph model was proposed in BIBREF13 , where sentences and key phrases form the two classes of nodes of a bipartite graph. The sentences and the key phrases are then scored simultaneously by applying a mutual reinforcement algorithm. An extended bipartite graph ranking algorithm is also proposed in BIBREF1 in which the sentences represent one class of nodes and clusters of similar sentences represent the other class. The hubs and authorities algorithm is then applied to compute sentence scores. Adding terms as a third class of nodes, BIBREF14 propose to score terms, sentences and sentence clusters simultaneously, based on a mutual reinforcement algorithm which propagates the scores across the three node classes. A common drawback of the approaches based on node ranking is that they compute individual relevance scores for the sentences and they fail to model the information jointly carried by the sentences, which may result in redundant summaries. Hence, global optimization approaches were proposed to select a set of jointly relevant and non-redundant sentences as in BIBREF15 and BIBREF16 . For instance, BIBREF17 propose a greedy algorithm to find a dominating set of nodes in the sentence graph. A summary is then formed with the corresponding set of sentences. Similarly, BIBREF15 extract a set of sentences with a maximal similarity with the entire corpus and a minimal pairwise lexical similarity, which is modelled as a multi-objective optimization problem. In contrast, BIBREF9 propose a coverage approach in which a set of sentences maximizing the number of distinct relevant terms is selected. Finally, BIBREF16 propose a two step approach in which individual sentence relevance scores are computed first. Then a set of sentences with a maximal total relevance and a minimal joint redundancy is selected. All three methods attempt to solve NP-hard problems. Hence, they propose approximation algorithms based on the theory of submodular functions.", "Going beyond pairwise lexical similarities between sentences and relations based on the co-occurrence of terms, hypergraph models were proposed, in which nodes are sentences and hyperedges model group relationships between sentences BIBREF3 . The hyperedges of the hypergraph capture topical relationships among groups of sentences. Existing hypergraph-based systems BIBREF3 , BIBREF4 combine pairwise lexical similarities and clusters of lexically similar sentences to form the hyperedges of the hypergraph. Hypergraph ranking algorithms are then applied to identify important and query-relevant sentences. However, they do not provide any interpretation for the clusters of sentences discovered by their method. Moreover, these clusters do not overlap, which is incoherent with the fact that each sentence carries multiple information and hence belongs to multiple semantic groups of sentences. 
In contrast, each hyperedge in our proposed hypergraph connects sentences covering the same topic, and these hyperedges do overlap.", "A minimal hypergraph transversal is a subset of the nodes of hypergraph of minimum cardinality and such that each hyperedge of the hypergraph is incident to at least one node in the subset BIBREF5 . Theoretically equivalent to the minimum hitting set problem, the problem of finding a minimum hypergraph transversal can be viewed as finding a subset of representative nodes covering the essential information carried by each hyperedge. Hence, hypergraph transversals find applications in various areas such as computational biology, boolean algebra and data mining BIBREF18 . Extensions of hypergraph transversals to include hyperedge and node weights were also proposed in BIBREF19 . Since the associated optimization problems are generally NP-hard, various approximation algorithms were proposed, including greedy algorithms BIBREF20 and LP relaxations BIBREF21 . The problem of finding a hypergraph transversal is conceptually similar to that of finding a summarizing subset of a set of objects modelled as a hypergraph. However, to the best of our knowledge, there was no attempt to use hypergraph transversals for text summarization in the past. Since it seeks a set of jointly relevant sentences, our method shares some similarities with existing graph-based models that apply global optimization strategies for sentence selection BIBREF9 , BIBREF15 , BIBREF16 . However, our hypergraph better captures topical relationships among sentences than the simple graphs based on lexical similarities between sentences." ], [ "Given a corpus of $N_d$ documents and a user-defined query $q$ , we intend to produce a summary of the documents with the information that is considered both central in the corpus and relevant to the query. Since we limit ourselves to the production of extracts, our task is to extract a set $S$ of relevant sentences from the corpus and to aggregate them to build a summary. Let $N_s$ be the total number of sentences in the corpus. We further split the task into two subtasks:", "The sentences in the set $S$ are then aggregated to form the final summary. Figure 1 summarizes the steps of our proposed method. After some preprocessing steps, the themes are detected based on a topic detection algorithm which tags each sentence with multiple topics. A theme-based hypergraph is then built with the weight of each theme reflecting both its importance in the corpus and its similarity with the query. Finally, depending on the task at hand, one of two types of hypergraph transversal is generated. If the summary must not exceed a target summary length, then a maximal budgeted hypergraph transversal is generated. If the summary must achieve a target coverage, then a minimal soft hypergraph transversal is generated. Finally the sentences corresponding to the generated transversal are selected for the summary." ], [ "In this section, we present the key steps of our algorithm: after some standard preprocessing steps, topics of semantically related terms are detected from which themes grouping topically similar sentences are extracted. A hypergraph is then formed based on the sentence themes and sentences are selected based on the detection of a hypergraph transversal." ], [ "As the majority of extractive summarization approaches, our model is based on the representation of sentences as vectors. 
To reduce the size of the vocabulary, we remove stopwords that do not contribute to the meaning of sentences such as \"the\" or \"a\", using a publicly available list of 667 stopwords . The words are also stemmed using Porter Stemmer BIBREF22 . Let $N_t$ be the resulting number of distinct terms after these two preprocessing steps are performed. We define the inverse sentence frequency $\\text{isf}(t)$ BIBREF23 as ", "$$\\text{isf}(t)=\\log \\left(\\frac{N_s}{N_s^t}\\right)$$ (Eq. 7) ", "where $N_s^t$ is the number of sentences containing term $t$ . This weighting scheme yields higher weights for rare terms which are assumed to contribute more to the semantics of sentences BIBREF23 . Sentence $i$ is then represented by a vector $s_i=[\\text{tfisf}(i,1),...,\\text{tfisf}(i,N_t)]$ where ", "$$\\text{tfisf}(i,t)=\\text{tf}(i,t)\\text{isf}(t)$$ (Eq. 8) ", "and $\\text{tf}(i,t)$ is the frequency of term $t$ in sentence $i$ . Finally, to denote the similarity between two text fragments $a$ and $b$ (which can be sentences, groups of sentences or the query), we use the cosine similarity between the $\\text{tfisf}$ representations of $a$ and $b$ , as suggested in BIBREF2 : ", "$$\\text{sim}(a,b)=\\frac{\\sum _t \\text{tfisf}(a,t)\\text{tfisf}(b,t)}{\\sqrt{\\sum _t\\text{tfisf}(a,t)^2}\\sqrt{\\sum _t\\text{tfisf}(b,t)^2}}$$ (Eq. 9) ", "where $\\text{tfisf}(a,t)$ is also defined as the frequency of term $t$ in fragment $a$ multiplied by $\\text{isf}(t)$ . This similarity measure will be used in section \"Sentence hypergraph construction\" to compute the similarity with the query $q$ ." ], [ "As mentioned in section \"Introduction\" , our hypergraph model is based on the detection of themes. A theme is defined as a group of sentences covering the same topic. Hence, our theme detection algorithm is based on a 3-step approach: the extraction of topics, the process of tagging each sentence with multiple topics and the detection of themes based on topic tags.", "A topic is viewed as a set of semantically similar terms, namely terms that refer to the same subject or the same piece of information. In the context of a specific corpus of related documents, a topic can be defined as a set of terms that are likely to occur close to each other in a document BIBREF24 . In order to extract topics, we make use of a clustering approach based on the definition of a semantic dissimilarity between terms. For terms $u$ and $v$ , we first define the joint $\\text{isf}$ weight $\\text{isf}(u,v)$ as ", "$$\\text{isf}(u,v)=\\log \\left(\\frac{N_s}{N_s^{uv}}\\right)$$ (Eq. 11) ", "where $N_s^{uv}$ is the number of sentences in which both terms $u$ and $v$ occur together. Then, the semantic dissimilarity $d_{\\text{sem}}(u,v)$ between the two terms is defined as ", "$$d_{\\text{sem}}(u,v)=\\frac{\\text{isf}(u,v)-\\min (\\text{isf}(u),\\text{isf}(v))}{\\max (\\text{isf}(u),\\text{isf}(v))}$$ (Eq. 12) ", "which can be viewed as a special case of the so-called google distance which was already successfully applied to learn semantic similarities between terms on webpages BIBREF25 . Using concepts from information theory, $\\text{isf}(u)$ represents the number of bits required to express the occurrence of term $u$ in a sentence using an optimally efficient code. Then, $\\text{isf}(u,v)-\\text{isf}(u)$ can be viewed as the number of bits of information in $v$ relative to $u$ . 
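To make the weighting scheme above concrete, here is a minimal Python sketch of the isf and tfisf weights (equations 7 and 8), the cosine similarity of equation 9 and the semantic dissimilarity of equation 12. The toy corpus, variable names and helpers are illustrative assumptions only and suppose sentences have already been preprocessed (stopwords removed, terms stemmed); this is not the authors' implementation.

```python
import math
from collections import Counter

# Toy preprocessed corpus: each sentence is a list of stemmed terms (illustrative only).
sentences = [["hypergraph", "model", "sentenc"],
             ["sentenc", "cluster", "topic"],
             ["topic", "model", "term"]]
N_s = len(sentences)

def isf(t):
    """Inverse sentence frequency of term t (equation 7)."""
    n_t = sum(1 for s in sentences if t in s)
    return math.log(N_s / n_t) if n_t else 0.0

def tfisf_vector(fragment):
    """tfisf representation of a text fragment given as a list of terms (equation 8)."""
    tf = Counter(fragment)
    return {t: tf[t] * isf(t) for t in tf}

def cosine(a, b):
    """Cosine similarity between the tfisf representations of fragments a and b (equation 9)."""
    va, vb = tfisf_vector(a), tfisf_vector(b)
    dot = sum(va[t] * vb.get(t, 0.0) for t in va)
    na = math.sqrt(sum(x * x for x in va.values()))
    nb = math.sqrt(sum(x * x for x in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def d_sem(u, v):
    """Semantic dissimilarity between terms u and v (equation 12)."""
    n_uv = sum(1 for s in sentences if u in s and v in s)
    if n_uv == 0:
        return float("inf")  # terms never co-occur; cap at a large value before clustering
    isf_uv = math.log(N_s / n_uv)
    return (isf_uv - min(isf(u), isf(v))) / max(isf(u), isf(v))

print(cosine(sentences[0], sentences[1]), d_sem("topic", "model"))
```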
Assuming $\\text{isf}(v)\\ge \\text{isf}(u)$ , $d_{\\text{sem}}(u,v)$ can be viewed as the improvement obtained when compressing $v$ using a previously compressed code for $u$ , compared to compressing $v$ from scratch BIBREF26 . More details can be found in BIBREF25 . In practice, two terms $u$ and $v$ with a low value of $d_{\\text{sem}}(u,v)$ are expected to consistently occur together in the same context, and they are thus considered to be semantically related in the context of the corpus.", "Based on the semantic dissimilarity measure between terms, we define a topic as a group of terms with a high semantic density, namely a group of terms such that each term of the group is semantically related to a sufficiently high number of terms in the group. The DBSCAN algorithm is a method of density-based clustering that achieves this result by iteratively growing cohesive groups of agents, with the condition that each member of a group should contain a sufficient number of other members in an $\\epsilon $ -neighborhood around it BIBREF27 . Using the semantic dissimilarity as a distance measure, DBSCAN extracts groups of semantically related terms which are considered as topics. The advantages offered by DBSCAN over other clustering algorithms are threefold. First, DBSCAN is capable of detecting the number of clusters automatically. Second, although the semantic dissimilarity is symmetric and nonnegative, it does not satisfy the triangle inequality. This prevents the use of various clustering algorithms such as agglomerative clustering with complete linkage BIBREF28 . However, DBSCAN does not explicitly require the triangle inequality to be satisfied. Finally, it is able to detect noisy samples in low density regions that do not belong to any other cluster.", "Given a set of pairwise dissimilarity measures, a density threshold $\\epsilon $ and a minimum neighborhood size $m$ , DBSCAN returns a number $K$ of clusters and a set of labels $\\lbrace c(i)\\in \\lbrace -1,1,...,K\\rbrace :1\\le i\\le N_t\\rbrace $ such that $c(i)=-1$ if term $i$ is considered a noisy term. While it is easy to determine a natural value for $m$ , choosing a value for $\\epsilon $ is not straightforward. Hence, we adapt the DBSCAN algorithm to build our topic model, referred to as the Semantic Clustering Of Terms (SEMCOT) algorithm. It iteratively applies DBSCAN and decreases the parameter $\\epsilon $ until the size of each cluster does not exceed a predefined value. Algorithm \"Sentence theme detection based on topic tagging\" summarizes the process. Apart from $m$ , the algorithm also takes parameters $\\epsilon _0$ (the initial value of $\\epsilon $ ), $M$ (the maximum number of points allowed in a cluster) and $\\beta $ (a factor close to 1 by which $\\epsilon $ is multiplied until all clusters have sizes lower than $M$ ). Experiments on real-world data suggest suitable empirical values for these four parameters. Additionally, we observe that, among the terms considered as noisy by DBSCAN, some could be highly infrequent terms with a high $\\text{isf}$ value that nevertheless have a strong impact on the meaning of sentences.
Hence, we include them as topics consisting of single terms if their $\\text{isf}$ value exceeds a threshold $\\mu $ whose value is determined by cross-validation, as explained in section \"Experiments and evaluation\" .", "[H] INPUT: Semantic Dissimilarities $\\lbrace d_{\\text{sem}}(u,v):1\\le u,v\\le N_t\\rbrace $ ,", "PARAMETERS: $\\epsilon _0$ , $M$ , $m$ , $\\beta \\le 1$ , $\\mu $ ", "OUTPUT: Number $K$ of topics, topic tags $\\lbrace c(i):1\\le i\\le N_t\\rbrace $ ", " $\\epsilon \\leftarrow \\epsilon _0$ , $\\text{minTerms}\\leftarrow m$ , $\\text{proceed}\\leftarrow \\text{True}$ ", "while $\\text{proceed}$ :", " $[c,K]\\leftarrow DBSCAN(d_{\\text{sem}},\\epsilon ,\\text{minTerms})$ ", "if $\\underset{1\\le k\\le K}{\\max }(|\\lbrace i:c(i)=k\\rbrace |)<M$ : $\\text{proceed}\\leftarrow \\text{False}$ ", "else: $\\epsilon \\leftarrow \\beta \\epsilon $ ", "for each $t$ s.t. $c(t)=-1$ (noisy terms):", "if $\\text{isf}(t)\\ge \\mu $ :", " $c(t)\\leftarrow K+1$ , $K\\leftarrow K+1$ ", "SEMCOT ", "Once the topics are obtained based on algorithm \"Sentence theme detection based on topic tagging\" , a theme is associated to each topic, namely a group of sentences covering the same topic. The sentences are first tagged with multiple topics based on a scoring function. The score of the $l$ -th topic in the $i$ -th sentence is given by ", "$$\\sigma _{il}=\\underset{t:c(t)=l}{\\sum }\\text{tfisf}(i,t)$$ (Eq. 13) ", "and the sentence is tagged with topic $l$ whenever $\\sigma _{il}\\ge \\delta $ , in which $\\delta $ is a parameter whose value is tuned as explained in section \"Experiments and evaluation\" (ensuring that each sentence is tagged with at least one topic). The scores are intentionally not normalized to avoid tagging short sentences with an excessive number of topics. The $l$ -th theme is then defined as the set of sentences ", "$$T_l=\\lbrace i:\\sigma _{il}\\ge \\delta ,1\\le i\\le N_s\\rbrace .$$ (Eq. 14) ", "While there exist other summarization models based on the detection of clusters or groups of similar sentences, the novelty of our theme model is twofold. First, each theme is easily interpretable as the set of sentences associated to a specific topic. As such, our themes can be considered as groups of semantically related sentences. Second, it is clear that the themes discovered by our approach do overlap since a single sentence may be tagged with multiple topics. To the best of our knowledge, none of the previous cluster-based summarizers involved overlapping groups of sentences. Our model is thus more realistic since it better captures the multiplicity of the information covered by each sentence." ], [ "A hypergraph is a generalization of a graph in which the hyperedges may contain any number of nodes, as expressed in definition UID16 BIBREF3 .
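Before the hypergraph construction is detailed, here is a brief sketch of the theme detection step described above: an iterative DBSCAN loop in the spirit of SEMCOT, followed by the topic tagging of sentences (equations 13 and 14). It assumes scikit-learn's DBSCAN and a precomputed matrix of semantic dissimilarities with any undefined entries capped at a large finite value; parameter values and names are illustrative, not the paper's.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def semcot_like(d_sem_matrix, eps0=1.0, M=20, m=3, beta=0.9):
    """Shrink eps until no topic (term cluster) exceeds M terms, as in the SEMCOT loop.
    d_sem_matrix: square matrix of pairwise semantic dissimilarities between terms."""
    eps = eps0
    while True:
        labels = DBSCAN(eps=eps, min_samples=m,
                        metric="precomputed").fit(d_sem_matrix).labels_
        sizes = [np.sum(labels == k) for k in set(labels) if k != -1]
        if not sizes or max(sizes) < M:
            # The paper additionally promotes noisy terms (label -1) with a high isf
            # value to singleton topics; that refinement is omitted here for brevity.
            return labels
        eps *= beta

def tag_sentences(tfisf, labels, delta=0.85):
    """Tag sentences with topics and build themes (equations 13 and 14).
    tfisf: (N_s x N_t) array of tfisf weights; labels: topic label of each term."""
    themes = {}
    for l in set(labels):
        if l == -1:
            continue
        scores = tfisf[:, labels == l].sum(axis=1)        # sigma_il for every sentence i
        themes[l] = set(np.flatnonzero(scores >= delta))  # theme T_l
    return themes
```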
Our hypergraph model moreover includes both hyperedge and node weights.", "Definition 1 (Hypergraph) A node- and hyperedge-weighted hypergraph is defined as a quadruplet $H=(V,E,\\phi ,w)$ in which $V$ is a set of nodes, $E\\subseteq 2^{V}$ is a set of hyperedges, $\\phi \\in \\mathbb {R}_+^{|V|}$ is a vector of positive node weights and $w\\in \\mathbb {R}_+^{|E|}$ is a vector of positive hyperedge weights.", "For convenience, we will refer to a hypergraph by its weight vectors $\\phi $ and $w$ , its hyperedges represented by a set $E\\subseteq 2^V$ and its incidence lists $\\text{inc}(i)=\\lbrace e\\in E:i\\in e\\rbrace $ for each $i\\in V$ .", "As mentioned in section \"Introduction\" , our system relies on the definition of a theme-based hypergraph which models groups of semantically related sentences as hyperedges. Hence, compared to traditional graph-based summarizers, the hypergraph is able to capture more complex group relationships between sentences instead of being restricted to pairwise relationships.", "In our sentence-based hypergraph, the sentences are the nodes and each theme defines a hyperedge connecting the associated sentences. The weight $\\phi _i$ of node $i$ is the length of the $i$ -th sentence, namely: ", "$$\\begin{array}{l}\nV = \\lbrace 1,...,N_s\\rbrace \\text{ and }\\phi _i=L_i\\text{, }\\text{ }1\\le i\\le N_s\\\\\nE = \\lbrace e_1,...,e_K\\rbrace \\subseteq 2^V\\\\\ne_l=T_l\\text{ i.e. }e_l\\in \\text{inc}(i)\\leftrightarrow i\\in T_l\n\\end{array}$$ (Eq. 17) ", "Finally, the weights of the hyperedges are computed based on the centrality of the associated theme and its similarity with the query: ", "$$w_l=(1-\\lambda )\\text{sim}(T_l,D)+\\lambda \\text{sim}(T_l,q)$$ (Eq. 18) ", "where $\\lambda \\in [0,1]$ is a parameter and $D$ represents the entire corpus. $\\text{sim}(T_l,D)$ denotes the similarity of the set of sentences in theme $T_l$ with the entire corpus (using the tfisf-based similarity of equation 9 ) which measures the centrality of the theme in the corpus. $\\text{sim}(T_l,q)$ refers to the similarity of the theme with the user-defined query $q$ ." ], [ "The sentences to be included in the query-oriented summary should contain the essential information in the corpus, they should be relevant to the query and, whenever required, they should either not exceed a target length or jointly achieve a target coverage (as mentioned in section \"Problem statement and system overview\" ). Existing systems of graph-based summarization generally solve the problem by ranking sentences in terms of their individual relevance BIBREF0 , BIBREF2 , BIBREF3 . Then, they extract a set of sentences with a maximal total relevance and pairwise similarities not exceeding a predefined threshold. However, we argue that the joint relevance of a group of sentences is not reflected by the individual relevance of each sentence. And limiting the redundancy of selected sentences as done in BIBREF3 does not guarantee that the sentences jointly cover the relevant themes of the corpus.", "Considering each topic as a distinct piece of information in the corpus, an alternative approach is to select the smallest subset of sentences covering each of the topics. The latter condition can be reformulated as ensuring that each theme has at least one of its sentences appearing in the summary. 
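As a rough illustration of the hypergraph assembled in this section (node weights, hyperedges and the weighting of equation 18), consider the sketch below. The similarity function `sim` stands in for the tfisf cosine similarity sketched earlier, and all names are assumptions made for illustration rather than the authors' code.

```python
def build_hypergraph(sentences, themes, query, sim, lam=0.4):
    """Assemble the theme-based hypergraph of equations 17 and 18.
    sentences: lists of preprocessed terms; themes: dict topic -> set of sentence ids;
    query: preprocessed query terms; sim: a similarity such as the tfisf cosine above."""
    corpus = [t for s in sentences for t in s]   # D, the whole corpus as one fragment
    phi = [len(s) for s in sentences]            # node weights phi_i = L_i (sentence lengths)
    E, w = [], []
    for T_l in themes.values():
        theme_text = [t for i in sorted(T_l) for t in sentences[i]]
        E.append(set(T_l))                                 # hyperedge e_l = T_l
        w.append((1 - lam) * sim(theme_text, corpus)       # centrality of the theme
                 + lam * sim(theme_text, query))           # relevance to the query
    return phi, E, w
```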
Using our sentence hypergraph representation, this corresponds to the detection of a minimal hypergraph transversal as defined below BIBREF5 .", "Definition 2 Given an unweighted hypergraph $H=(V,E)$ , a minimal hypergraph transversal is a subset $S^*\\subseteq V$ of nodes satisfying ", "$$\\begin{array}{rcl}\nS^*&=&\\underset{S\\subseteq V}{\\text{argmin}}|S|\\\\\n&& \\text{s.t. }\\underset{i\\in S}{\\bigcup }\\text{inc}(i)=E\n\\end{array}$$ (Eq. 21) ", "where $\\text{inc}(i)=\\lbrace e:i\\in e\\rbrace $ denotes the set of hyperedges incident to node $i$ .", "Figure 2 shows an example of hypergraph and a minimal hypergraph transversal of it (star-shaped nodes). In this case, since the nodes and the hyperedges are unweighted, the minimal transversal is not unique.", "The problem of finding a minimal transversal in a hypergraph is NP-hard BIBREF29 . However, greedy algorithms or LP relaxations provide good approximate solutions in practice BIBREF21 . As intended, the definition of transversal includes the notion of joint coverage of the themes by the sentences. However, it neglects node and hyperedge weights and it is unable to identify query-relevant themes. Since both the sentence lengths and the relevance of themes should be taken into account in the summary generation, we introduce two extensions of transversal, namely the minimal soft hypergraph transversal and the maximal budgeted hypergraph transversal. A minimal soft transversal of a hypergraph is obtained by minimizing the total weights of selected nodes while ensuring that the total weight of covered hyperedges exceeds a given threshold.", "Definition 3 (minimal soft hypergraph transversal) Given a node and hyperedge weighted hypergraph $H=(V,E,\\phi ,w)$ and a parameter $\\gamma \\in [0,1]$ , a minimal soft hypergraph transversal is a subset $S^*\\subseteq V$ of nodes satisfying ", "$$\\begin{array}{rcl}\nS^*&=&\\underset{S\\subseteq V}{\\text{argmin}}\\underset{i\\in S}{\\sum }\\phi _i\\\\\n&& \\text{s.t. }\\underset{e\\in \\text{inc}(S)}{\\sum }w_e\\ge \\gamma W\n\\end{array}$$ (Eq. 24) ", "in which $\\text{inc}(S)=\\underset{i\\in S}{\\bigcup }\\text{inc}(i)$ and $W=\\sum _ew_e$ .", "The extraction of a minimal soft hypergraph transversal of the sentence hypergraph produces a summary of minimal length achieving a target coverage expressed by parameter $\\gamma \\in [0,1]$ . As mentioned in section \"Problem statement and system overview\" , applications of text summarization may also involve a hard constraint on the total summary length $L$ . For that purpose, we introduce the notion of maximal budgeted hypergraph transversal which maximizes the volume of covered hyperedges while not exceeding the target length.", "Definition 4 (maximal budgeted hypergraph transversal) Given a node and hyperedge weighted hypergraph $H=(V,E,\\phi ,w)$ and a parameter $L>0$ , a maximal budgeted hypergraph transversal is a subset $S^*\\subseteq V$ of nodes satisfying ", "$$\\begin{array}{rcl}\nS^*&=&\\underset{S\\subseteq V}{\\text{argmax}}\\underset{e\\in \\text{inc}(S)}{\\sum }w_e\\\\\n&& \\text{s.t. }\\underset{i\\in S}{\\sum }\\phi _i\\le L.\n\\end{array}$$ (Eq. 26) ", "We refer to the function $\\underset{e\\in \\text{inc}(S)}{\\sum }w_e$ as the hyperedge coverage of set $S$ . We observe that both weighted transversals defined above include the notion of joint coverage of the hyperedges by the selected nodes. 
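Both optimization problems are driven by this hyperedge coverage function; a small sketch of it on a toy hypergraph, using the incidence-list view introduced above (the hypergraph and weights below are made up for illustration):

```python
# Toy node- and hyperedge-weighted hypergraph: 4 sentences, 3 themes (made-up numbers).
E = [{0, 1}, {1, 2}, {2, 3}]
w = [0.6, 0.3, 0.8]

def incident(E, S):
    """Indices of the hyperedges hit by the node set S."""
    return {l for l, e in enumerate(E) if e & S}

def coverage(E, w, S):
    """Hyperedge coverage of S: total weight of hyperedges incident to S."""
    return sum(w[l] for l in incident(E, S))

def is_transversal(E, S):
    """True if S hits every hyperedge (the unweighted notion of definition 2)."""
    return len(incident(E, S)) == len(E)

print(is_transversal(E, {1, 2}), coverage(E, w, {1, 3}))  # True 1.7
```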
As a result and from the definition of hyperedge weights (equation 18 ), the resulting summary covers themes that are both central in the corpus and relevant to the query. This approach also implies that the resulting summary does not contain redundant sentences covering the exact same themes. As a result selected sentences are expected to cover different themes and to be semantically diverse. Both the problems of finding a minimal soft transversal or finding a maximal budgeted transversal are NP-hard as proved by theorem UID27 .", "Theorem 1 (NP-hardness) The problems of finding a minimal soft hypergraph transversal or a maximal budgeted hypergraph transversal in a weighted hypergraph are NP-hard.", "Regarding the minimal soft hypergraph transversal problem, with parameter $\\gamma =1$ and unit node weights, the problem is equivalent to the classical set cover problem (definition UID20 ) which is NP-complete BIBREF29 . The maximal budgeted hypergraph transversal problem can be shown to be equivalent to the maximum coverage problem with knapsack constraint which was shown to be NP-complete in BIBREF29 .", "Since both problems are NP-hard, we formulate polynomial time algorithms to find approximate solutions to them and we provide the associated approximation factors. The algorithms build on the submodularity and the non-decreasing properties of the hyperedge coverage function, which are defined below.", "Definition 5 (Submodular and non-decreasing set functions) Given a finite set $A$ , a function $f:2^{A}\\rightarrow \\mathbb {R}$ is monotonically non-decreasing if $\\forall S\\subset A$ and $\\forall u\\in A\\setminus S$ , ", "$$f(S\\cup \\lbrace u\\rbrace )\\ge f(S)$$ (Eq. 29) ", "and it is submodular if $\\forall S,T$ with $S\\subseteq T\\subset A$ , and $\\forall u\\in A\\setminus T$ , ", "$$f(T\\cup \\lbrace u\\rbrace )-f(T)\\le f(S\\cup \\lbrace u\\rbrace )-f(S).$$ (Eq. 30) ", "Based on definition UID28 , we prove in theorem UID31 that the hyperedge coverage function is submodular and monotonically non-decreasing, which provides the basis of our algorithms.", "Theorem 2 Given a hypergraph $H=(V,E,\\phi ,w)$ , the hyperedge coverage function $f:2^V\\rightarrow \\mathbb {R}$ defined by ", "$$f(S)=\\underset{e\\in \\text{inc}(S)}{\\sum }w_e$$ (Eq. 32) ", "is submodular and monotonically non-decreasing.", "The hyperege coverage function $f$ is clearly monotonically non-decreasing and it is submodular since $\\forall S\\subseteq T\\subset V$ , and $s\\in V\\setminus T$ , ", "$$\\begin{array}{l}\n(f(S\\cup \\lbrace s\\rbrace )-f(S))-(f(T\\cup \\lbrace s\\rbrace )-f(T))\\\\\n=\\left[\\underset{e\\in \\text{inc}(S\\cup \\lbrace s\\rbrace )}{\\sum }w_e-\\underset{e\\in \\text{inc}(S)}{\\sum }w_e\\right]-\\left[\\underset{e\\in \\text{inc}(T\\cup \\lbrace s\\rbrace )}{\\sum }w_e-\\underset{e\\in \\text{inc}(T)}{\\sum }w_e\\right]\\\\\n= \\left[ \\underset{e\\in \\text{inc}(\\lbrace s\\rbrace )\\setminus \\text{inc}(S)}{\\sum }w_e\\right]-\\left[ \\underset{e\\in \\text{inc}(\\lbrace s\\rbrace )\\setminus \\text{inc}(T)}{\\sum }w_e\\right]\\\\\n= \\underset{e\\in (\\text{inc}(T)\\cap \\text{inc}(\\lbrace s\\rbrace ))\\setminus \\text{inc}(S)}{\\sum }w_e\\ge 0\n\\end{array}$$ (Eq. 33) ", "where $\\text{inc}(R)=\\lbrace e:e\\cap S\\ne \\emptyset \\rbrace $ for $R\\subseteq V$ . 
The last equality follows from $\\text{inc}(S)\\subseteq \\text{inc}(T)$ and $\\text{inc}(\\lbrace s\\rbrace )\\setminus \\text{inc}(T)\\subseteq \\text{inc}(\\lbrace s\\rbrace )\\setminus \\text{inc}(S)$ .", "Various classes of NP-hard problems involving a submodular and non-decreasing function can be solved approximately by polynomial time algorithms with provable approximation factors. Algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" are our core methods for the detection of approximations of maximal budgeted hypergraph transversals and minimal soft hypergraph transversals, respectively. In each case, a transversal is found and the summary is formed by extracting and aggregating the associated sentences. Algorithm \"Detection of hypergraph transversals for text summarization\" is based on an adaptation of an algorithm presented in BIBREF30 for the maximization of submodular functions under a Knapsack constraint. It is our primary transversal-based summarization model, and we refer to it as the method of Transversal Summarization with Target Length (TL-TranSum algorithm). Algorithm \"Detection of hypergraph transversals for text summarization\" is an application of the algorithm presented in BIBREF20 for solving the submodular set covering problem. We refer to it as Transversal Summarization with Target Coverage (TC-TranSum algorithm). Both algorithms produce transversals by iteratively appending the node inducing the largest increase in the total weight of the covered hyperedges relative to the node weight. While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only.
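A compact Python rendering of this greedy strategy is sketched below, reusing the `coverage` helper from the previous sketch; it is an informal paraphrase of the two algorithms whose formal pseudocode follows, not the authors' code.

```python
def tl_transum(phi, E, w, L):
    """Greedy approximation of a maximal budgeted transversal (target length L)."""
    S, length = set(), 0
    remaining = set(range(len(phi)))
    while remaining:
        # Pick the sentence with the largest coverage gain per unit of length.
        best = max(remaining,
                   key=lambda i: (coverage(E, w, S | {i}) - coverage(E, w, S)) / phi[i])
        remaining.discard(best)
        if length + phi[best] <= L:
            S.add(best)
            length += phi[best]
    # Also consider the best feasible single sentence, as the formal algorithm does.
    singles = [i for i in range(len(phi)) if phi[i] <= L]
    if singles:
        top = max(singles, key=lambda i: coverage(E, w, {i}))
        if coverage(E, w, {top}) > coverage(E, w, S):
            S = {top}
    return S

def tc_transum(phi, E, w, gamma):
    """Greedy approximation of a minimal soft transversal (target coverage gamma)."""
    S, target = set(), gamma * sum(w)
    remaining = set(range(len(phi)))
    while remaining and coverage(E, w, S) < target:
        best = max(remaining,
                   key=lambda i: (coverage(E, w, S | {i}) - coverage(E, w, S)) / phi[i])
        remaining.discard(best)
        S.add(best)
    return S
```

Dividing the coverage gain by the sentence length is what lets short but theme-rich sentences compete with long ones, which is the balancing effect discussed in the paragraph above.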
The main difference between algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" is the stopping criterion: in algorithm \"Detection of hypergraph transversals for text summarization\" , the approximate minimal soft transversal is obtained whenever the targeted hyperedge coverage is reached while algorithm \"Detection of hypergraph transversals for text summarization\" appends a given sentence to the approximate maximal budgeted transversal only if its addition does not make the summary length exceed the target length $L$ .", "[H] INPUT: Sentence Hypergraph $H=(V,E,\\phi ,w)$ , target length $L$ .", "OUTPUT: Set $S$ of sentences to be included in the summary.", "for each $i\\in \\lbrace 1,...,N_s\\rbrace $ : $r_i\\leftarrow \\frac{1}{\\phi _i}\\underset{e\\in \\text{inc}(i)}{\\sum }w_e$ ", " $R\\leftarrow \\emptyset $ , $Q\\leftarrow V$ , $f\\leftarrow 0$ ", "while $Q\\ne \\emptyset $ :", " $s^*\\leftarrow \\underset{i\\in Q}{\\text{argmax}}\\text{ }r_i$ , $Q\\leftarrow Q\\setminus \\lbrace s^*\\rbrace $ ", "if $\\phi _{s^*}+f\\le L$ :", " $R\\leftarrow R\\cup \\lbrace s^*\\rbrace $ , $f\\leftarrow f+l^*$ ", "for each $i\\in \\lbrace 1,...,N_s\\rbrace $ : $r_i\\leftarrow r_i-\\frac{\\underset{e\\in \\text{inc}(s^*)\\cap \\text{inc}(i)}{\\sum } w_e}{\\phi _i}$ ", "Let $G\\leftarrow \\lbrace \\lbrace i\\rbrace \\text{ : }i\\in V,\\phi _i\\le L\\rbrace $ ", " $S\\leftarrow \\underset{S\\in \\lbrace Q\\rbrace \\cup G}{\\text{argmax}}\\text{ }\\text{ }\\text{ }\\underset{e\\in \\text{inc}(S)}{\\sum }w_e$ ", "return $S$ ", "Transversal Summarization with Target Length (TL-TranSum) ", "[H] INPUT: Sentence Hypergraph $H=(V,E,\\phi ,w)$ , parameter $\\gamma \\in [0,1]$ .", "OUTPUT: Set $S$ of sentences to be included in the summary.", "for each $i\\in \\lbrace 1,...,N_s\\rbrace $ : $r_i\\leftarrow \\frac{1}{\\phi _i}\\underset{e\\in \\text{inc}(i)}{\\sum }w_e$ ", " $S\\leftarrow \\emptyset $ , $Q\\leftarrow V$ , $\\tilde{W}\\leftarrow 0$ , $W\\leftarrow \\sum _ew_e$ ", "while $Q\\ne \\emptyset $ and $\\tilde{W}<\\gamma W$ :", " $s^*\\leftarrow \\underset{i\\in Q}{\\text{argmax}}\\text{ }r_i$ ", " $S\\leftarrow S\\cup \\lbrace s^*\\rbrace $ , $\\tilde{W}\\leftarrow \\tilde{W}+\\phi _{s*}r_{s^*}$ ", "for each $i\\in \\lbrace 1,...,N_s\\rbrace $ : $r_i\\leftarrow r_i-\\frac{\\underset{e\\in \\text{inc}(s^*)\\cap \\text{inc}(i)}{\\sum } w_e}{\\phi _i}$ ", "return $S$ ", "Transversal Summarization with Target Coverage (TC-TranSum) ", "We next provide theoretical guarantees that support the formulation of algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" as approximation algorithms for our hypergraph transversals. Theorem UID34 provides a constant approximation factor for the output of algorithm \"Detection of hypergraph transversals for text summarization\" for the detection of minimal soft hypergraph transversals. It builds on the submodularity and the non-decreasing property of the hyperedge coverage function.", "Theorem 3 Let $S^L$ be the summary produced by our TL-TranSum algorithm \"Detection of hypergraph transversals for text summarization\" , and $S^*$ be a maximal budgeted transversal associated to the sentence hypergraph, then ", "$$\\underset{e\\in \\text{inc}(S^L)}{\\sum }w_e \\ge \\frac{1}{2}\\left(1-\\frac{1}{e}\\right)\\underset{e\\in \\text{inc}(S^*)}{\\sum }w_e.$$ (Eq. 
35) ", "Since the hyperedge coverage function is submodular and monotonically non-decreasing, the extraction of a maximal budgeted transversal is a problem of maximization of a submodular and monotonically non-decreasing function under a Knapsack constraint, namely ", "$$\\underset{S\\subseteq V}{\\max }f(S)\\text{ s.t. }\\underset{i\\in S}{\\sum }\\phi _i\\le L$$ (Eq. 36) ", "where $f(S)=\\underset{e\\in \\text{inc}(S)}{\\sum }w_e$ . Hence, by theorem 2 in BIBREF30 , the algorithm forming a transversal $S^F$ by iteratively growing a set $S_t$ of sentences according to ", "$$S_{t+1}=S_t\\cup \\left\\lbrace \\underset{s\\in V\\setminus S_t}{\\text{argmax}}\\left\\lbrace \\frac{f(S\\cup \\lbrace s\\rbrace )-f(S)}{\\phi _s}, \\phi _s+\\underset{i\\in S_t}{\\sum }\\phi _i\\le L\\right\\rbrace \\right\\rbrace $$ (Eq. 37) ", "produces a final summary $S^F$ satisfying ", "$$f(S^F)\\ge f(S^*)\\frac{1}{2}\\left(1-\\frac{1}{e}\\right).$$ (Eq. 38) ", "As algorithm \"Detection of hypergraph transversals for text summarization\" implements the iterations expressed by equation 37 , it achieves a constant approximation factor of $\\frac{1}{2}\\left(1-\\frac{1}{e}\\right)$ .", "Similarly, theorem UID39 provides a data-dependent approximation factor for the output of algorithm \"Detection of hypergraph transversals for text summarization\" for the detection of maximal budgeted hypergraph transversals. It also builds on the submodularity and the non-decreasing property of the hyperedge coverage function.", "Theorem 4 Let $S^P$ be the summary produced by our TC-TranSum algorithm \"Detection of hypergraph transversals for text summarization\" and let $S^*$ be a minimal soft hypergraph transversal, then ", "$$\\underset{i\\in S^P}{\\sum }\\phi _i\\le \\underset{i\\in S^*}{\\sum }\\phi _i \\left(1+\\log \\left(\\frac{\\gamma W}{\\gamma W-\\underset{e\\in \\text{inc}(S^{T-1})}{\\sum }w_e}\\right)\\right)$$ (Eq. 40) ", "where $S_1,...,S_T$ represent the consecutive sets of sentences produced by algorithm \"Detection of hypergraph transversals for text summarization\" .", "Consider the function $g(S)=\\min (\\gamma W,\\underset{e\\in \\text{inc}(S)}{\\sum }w_e)$ . Then the problem of finding a minimal soft hypergraph transversal can be reformulated as ", "$$S^*=\\underset{S\\subseteq V}{\\text{argmin}} \\underset{s\\in S}{\\sum }\\phi _s\\text{ s.t. }g(S)\\ge g(V)$$ (Eq. 41) ", "As $g$ is submodular and monotonically non-decreasing, theorem 1 in BIBREF20 shows that the summary $S^G$ produced by iteratively growing a set $S_t$ of sentences such that ", "$$S_{t+1}=S_t\\cup \\left\\lbrace \\underset{s\\in V\\setminus S_t}{\\text{argmax}}\\left\\lbrace \\frac{f(S\\cup \\lbrace s\\rbrace )-f(S)}{\\phi _s}\\right\\rbrace \\right\\rbrace $$ (Eq. 42) ", "produces a summary $S^G$ satisfying ", "$$\\underset{i\\in S^G}{\\sum }\\phi _i\\le \\underset{i\\in S^*}{\\sum }\\phi _i \\left(1+\\log \\left(\\frac{g(V)}{g(V)-g(S^{T-1})}\\right)\\right).$$ (Eq. 43) ", "which can be rewritten as ", "$$\\underset{i\\in S^G}{\\sum }\\phi _i\\le \\underset{i\\in S^*}{\\sum }\\phi _i \\left(1+\\log \\left(\\frac{\\gamma W}{\\gamma W-\\underset{e\\in \\text{inc}(S^{T-1})}{\\sum }w_e}\\right)\\right).$$ (Eq. 
44) ", "As algorithm \"Detection of hypergraph transversals for text summarization\" implements the iterations expressed by equation 42 , the summary $S^P$ produced by our algorithm \"Detection of hypergraph transversals for text summarization\" satisfies the same inequality.", "In practice, the result of theorem UID39 suggests that the quality of the output depends on the relative increase in the hyperedge coverage induced by the last sentence to be appended to the summary. In particular, if each sentence that is appended to the summary in the iterations of algorithm \"Detection of hypergraph transversals for text summarization\" covers a sufficient number of new themes that are not covered already by the summary, the approximation factor is low." ], [ "We analyse the worst-case time complexity of each step of our method. The time complexity of the DBSCAN algorithm BIBREF27 is $O(N_t\\log (N_t))$ . Hence, the theme detection algorithm \"Sentence theme detection based on topic tagging\" takes $O(N_cN_t\\log (N_t))$ steps, where $N_c$ is the number of iterations of algorithm \"Sentence theme detection based on topic tagging\" which is generally low compared to the number of terms. The time complexity for the hypergraph construction is $O(K(N_s+N_t))$ where $K$ is the number of topics, or $O(N_t^2)$ if $N_t\\ge N_s$ . The time complexity of the sentence selection algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" is bounded by $O(N_sKC^{\\max }L^{\\max })$ where $C^{\\max }$ is the number of sentences in the largest theme and $L^{\\max }$ is the length of the longest sentence. Assuming $N_t$ is larger than $N_s$ , the overall time complexity of the method is $O(N_t^2)$ in the worst case. Hence, the method is essentially equivalent to early graph-based models for text summarization in terms of computational burden, such as the LexRank-based systems BIBREF0 , BIBREF2 or greedy approaches based on global optimization BIBREF17 , BIBREF15 , BIBREF16 . However, it is computationally more efficient than traditional hypergraph-based summarizers such as the one in BIBREF4 which involves a Markov Chain Monte Carlo inference for its topic model or the one in BIBREF3 which is based on an iterative computation of scores involving costly matrix multiplications at each step." ], [ "We present experimental results obtained with a Python implementation of algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" on a standard computer with a $2.5GHz$ processor and 8GB of memory." ], [ "We test our algorithms on DUC2005 BIBREF32 , DUC2006 BIBREF33 and DUC2007 BIBREF34 datasets which were produced by the Document Understanding Conference (DUC) and are widely used as benchmark datasets for the evaluation of query-oriented summarizers. The datasets consist respectively of 50, 50 and 45 corpora, each consisting of 25 documents of approximately 1000 words, on average. A query is associated to each corpus. For evaluation purposes, each corpus is associated with a set of query-relevant summaries written by humans called reference summaries. In each of our experiments, a candidate summary is produced for each corpus by one of our algorithms and it is compared with the reference summaries using the metrics described below.
Moreover, in experiments involving algorithm \"Detection of hypergraph transversals for text summarization\" , the target summary length is set to 250 words as required in DUC evalutions.", "In order to evaluate the similarity of a candidate summary with a set of reference summaries, we make use of the ROUGE toolkit of BIBREF35 , and more specifically of ROUGE-2 and ROUGE-SU4 metrics, which were adopted by DUC for summary evaluation. ROUGE-2 measures the number of bigrams found both in the candidate summary and the set of reference summaries. ROUGE-SU4 extends this approach by counting the number of unigrams and the number of 4-skip-bigrams appearing in the candidate and the reference summaries, where a 4-skip-bigram is a pair of words that are separated by no more than 4 words in a text. We refer to ROUGE toolkit BIBREF35 for more details about the evaluation metrics. ROUGE-2 and ROUGE-SU4 metrics are computed following the same setting as in DUC evaluations, namely with word stemming and jackknife resampling but without stopword removal." ], [ "Besides the parameters of SEMCOT algorithm for which empirical values were given in section \"Sentence theme detection based on topic tagging\" , there are three parameters of our system that need to be tuned: parameters $\\mu $ (threshold on isf value to include a noisy term as a single topic in SEMCOT), $\\delta $ (threshold on the topic score for tagging a sentence with a given topic) and $\\lambda $ (balance between the query relevance and the centrality in hyperedge weights). The values of all three parameters are determined by an alternating maximization strategy of ROUGE-SU4 score in which the values of two parameters are fixed and the value of the third parameter is tuned to maximize the ROUGE-SU4 score produced by algorithm \"Detection of hypergraph transversals for text summarization\" with a target summary length of 250 words, in an iterative fashion. The ROUGE-SU4 scores are evaluated by cross-validation using a leave-one-out process on a validation dataset consisting of $70\\%$ of DUC2007 dataset, which yields $\\mu =1.98$ , $\\delta =0.85$ and $\\lambda =0.4$ .", "Additionally, we display the evolution of ROUGE-SU4 and ROUGE-2 scores as a function of $\\delta $ and $\\lambda $ . For parameter $\\delta $ , we observe in graphs UID49 and UID50 that the quality of the summary is low for $\\delta $ close to 0 since it encourages our theme detection algorithm to tag the sentences with irrelevant topics with low associated tfisf values. In contrast, when $\\delta $ exceeds $0.9$ , some relevant topics are overlooked and the quality of the summaries drops severely. Regarding parameter $\\lambda $ , we observe in graphs UID52 and UID53 that $\\lambda =0.4$ yields the highest score since it combines both the relevance of themes to the query and their centrality within the corpus for the computation of hyperedge weights. In contrast, with $\\lambda =1$ , the algorithm focuses on the lexical similarity of themes with the query but it neglects the prominence of each theme." ], [ "In order to test our soft transversal-based summarizer, we display the evolution of the summary length and the ROUGE-SU4 score as a function of parameter $\\gamma $ of algorithm \"Detection of hypergraph transversals for text summarization\" . In figure UID57 , we observe that the summary length grows linearly with the value of parameter $\\gamma $ which confirms that our system does not favor longer sentences for low values of $\\gamma $ . 
The ROUGE-SU4 curve of figure UID56 has a concave shape, with a low score when $\\gamma $ is close to 0 (due to a poor recall) or when $\\gamma $ is close to 1 (due to a poor precision). The overall concave shape of the ROUGE-SU4 curve also demonstrates the efficiency of our TC-TranSum algorithm: based on our hyperedge weighting scheme and our hyperedge coverage function, highly relevant sentences inducing a significant increase in the ROUGE-SU4 score are identified and included first in the summary.", "In the subsequent experiments, we focus on TL-TranSum algorithm \"Detection of hypergraph transversals for text summarization\" which includes a target summary length and can thus be compared with other summarization systems which generally include a length constraint." ], [ "To justify our theme-based hypergraph definition, we test other hypergraph models. We only change the hyperedge model which determines the kind of relationship between sentences that is captured by the hypergraph. The sentence selection is performed by applying algorithm \"Detection of hypergraph transversals for text summarization\" to the resulting hypergraph. We test three alternative hyperedge models. First a model based on agglomerative clustering instead of SEMCOT: the same definition of semantic dissimilarity (equation 12 ) is used, then topics are detected as clusters of terms obtained by agglomerative clustering with single linkage with the semantic dissimilarity as a distance measure. The themes are detected and the hypergraph is constructed in the same way as in our model. Second, Overlap model defines hyperedges as overlapping clusters of sentences obtained by applying an algorithm of overlapping cluster detection BIBREF36 and using the cosine distance between tfisf representations of sentences as a distance metric. Finally, we test a hypergraph model already proposed in HyperSum system by BIBREF3 which combines pairwise hyperedges joining any two sentences having terms in common and hyperedges formed by non-overlapping clusters of sentences obtained by DBSCAN algorithm. Table 1 displays the ROUGE-2 and ROUGE-SU4 scores and the corresponding $95\\%$ confidence intervals for each model. We observe that our model outperforms both HyperSum and Overlap models by at least $4\\%$ and $15\\%$ of ROUGE-SU4 score, respectively, which confirms that a two-step process extracting consistent topics first and then defining theme-based hyperedges from topic tags outperforms approaches based on sentence clustering, even when these clusters do overlap. Our model also outperforms the Agglomerative model by $10\\%$ of ROUGE-SU4 score, due to its ability to identify noisy terms and to detect the number of topics automatically." ], [ "We compare the performance of our TL-TranSum algorithm \"Detection of hypergraph transversals for text summarization\" with that of five related summarization systems. Topic-sensitive LexRank of BIBREF2 (TS-LexRank) and HITS algorithms of BIBREF1 are early graph-based summarizers. TS-LexRank builds a sentence graph based on term co-occurrences in sentences, and it applies a query-biased PageRank algorithm for sentence scoring. HITS method additionally extracts clusters of sentences and it applies the hubs and authorities algorithm for sentence scoring, with the sentences as authorities and the clusters as hubs. As suggested in BIBREF3 , in order to extract query relevant sentences, only the top $5\\%$ of sentences that are most relevant to the query are considered. 
HyperSum extends early graph-based summarizers by defining a cluster-based hypergraph with the sentences as nodes and hyperedges as sentence clusters, as described in section \"Testing the hypergraph structure\". The sentences are then scored using an iterative label propagation algorithm over the hypergraph, starting with the lexical similarity of each sentence with the query as initial labels. In all three methods, the sentences with the highest scores and pairwise lexical similarity not exceeding a threshold are included in the summary. Finally, we test two methods that also build on the theory of submodular functions. First, the MaxCover approach BIBREF9 seeks a summary by maximizing the number of distinct relevant terms appearing in the summary while not exceeding the target summary length (using equation 18 to compute the term relevance scores). While the objective function of the method is similar to that of the problem of finding a maximal budgeted hypergraph transversal (equation 26) of BIBREF16, it overlooks the semantic similarities between terms which are captured by our SEMCOT algorithm and our hypergraph model. Similarly, the Maximal Relevance Minimal Redundancy (MRMR) approach first computes relevance scores of sentences as in equation 18, then seeks a summary with a maximal total relevance score and a minimal redundancy while not exceeding the target summary length. The problem is solved by an iterative algorithm building on the submodularity and non-decreasing property of the objective function.", "Table 2 displays the ROUGE-2 and ROUGE-SU4 scores with the corresponding $95\\%$ confidence intervals for all six systems, including our TL-TranSum method. We observe that our system outperforms other graph- and hypergraph-based summarizers involving the computation of individual sentence scores: LexRank by $6\\%$, HITS by $13\\%$ and HyperSum by $6\\%$ of ROUGE-SU4 score, which confirms both the relevance of our theme-based hypergraph model and the capacity of our transversal-based summarizer to identify jointly relevant sentences, as opposed to methods based on the computation of individual sentence scores. Moreover, our TL-TranSum method also outperforms other approaches such as MaxCover ($5\\%$) and MRMR ($7\\%$). These methods are also based on a submodular and non-decreasing function expressing the information coverage of the summary, but they are limited to lexical similarities between sentences and fail to detect topics and themes to measure the information coverage of the summary." ], [ "As a final experiment, we compare our TL-TranSum approach to other summarizers presented at DUC contests. Table 3 displays the ROUGE-2 and ROUGE-SU4 scores for the worst summary produced by a human, for the top four systems submitted for the contests, for the baseline proposed by NIST (a summary consisting of the leading sentences of randomly selected documents) and the average score of all methods submitted, respectively for the DUC2005, DUC2006 and DUC2007 contests. Regarding DUC2007, our method outperforms the best system by $2\\%$ and the average ROUGE-SU4 score by $21\\%$. It also performs significantly better than the baseline of NIST. However, it is outperformed by the human summarizer since our system produces extracts, while humans naturally reformulate the original sentences to compress their content and produce more informative summaries.
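The MaxCover-style baselines above, like the budgeted transversal itself, rest on greedily maximizing a submodular coverage function under a length budget. The sketch below is a generic version of that greedy step, choosing sentences by marginal gain in covered term relevance per word until the budget is exhausted; it is not the paper's exact algorithm, and all names are illustrative.

```python
def greedy_budgeted_cover(sentences, term_relevance, budget=250):
    """sentences: list of (length_in_words, set_of_terms); term_relevance: term -> score."""
    covered, chosen, used = set(), [], 0
    remaining = set(range(len(sentences)))
    while remaining:
        def gain(i):
            # Marginal relevance of newly covered terms per word of added length.
            length, terms = sentences[i]
            return sum(term_relevance.get(t, 0.0) for t in terms - covered) / max(length, 1)
        best = max(remaining, key=gain)
        if gain(best) <= 0:
            break                    # nothing left adds new relevant terms
        remaining.discard(best)
        length, terms = sentences[best]
        if used + length > budget:
            continue                 # candidate does not fit; try the next one
        chosen.append(best)
        covered |= terms
        used += length
    return chosen
```

The approximation guarantees discussed in the literature additionally require comparing the greedy solution against the best single feasible sentence, a step omitted here for brevity.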
Tests on the DUC2006 dataset lead to similar conclusions, with our TL-TranSum algorithm outperforming the best other system and the average ROUGE-SU4 score by $2\\%$ and $22\\%$, respectively. On the DUC2005 dataset, however, our TL-TranSum method is outperformed by the best system, which is due to the use of advanced NLP techniques (such as sentence trimming BIBREF37) which tend to increase the ROUGE-SU4 score. Nevertheless, the ROUGE-SU4 score produced by our TL-TranSum algorithm is still $15\\%$ higher than the average score for the DUC2005 contest." ], [ "In this paper, a new hypergraph-based summarization model was proposed, in which the nodes are the sentences of the corpus and the hyperedges are themes grouping sentences covering the same topics. Going beyond existing methods based on simple graphs and pairwise lexical similarities, our hypergraph model captures groups of semantically related sentences. Moreover, two new methods of sentence selection based on the detection of hypergraph transversals were proposed: one to generate summaries of minimal length and achieving a target coverage, and the other to generate a summary achieving a maximal coverage of relevant themes while not exceeding a target length. The approach generates informative summaries by extracting a subset of sentences jointly covering the relevant themes of the corpus. Experiments on a real-world dataset demonstrate the effectiveness of the approach. The hypergraph model itself is shown to produce more accurate summaries than other models based on term or sentence clustering. The overall system also outperforms related graph- or hypergraph-based approaches by at least $10\\%$ of ROUGE-SU4 score.", "As a future research direction, we may analyse the performance of other algorithms for the detection of hypergraph transversals, such as methods based on LP relaxations. We may also further extend our topic model to take the polysemy of terms into account: since each term may carry multiple meanings, a given term could refer to different topics depending on its context. Finally, we intend to adapt our model for solving related problems, such as community question answering." ] ] }
{ "question": [ "How does the model compare with the MMR baseline?" ], "question_id": [ "babe72f0491e65beff0e5889380e8e32d7a81f78" ], "nlp_background": [ "infinity" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "summarization" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ " Moreover, our TL-TranSum method also outperforms other approaches such as MaxCover ( $5\\%$ ) and MRMR ( $7\\%$ )" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Various classes of NP-hard problems involving a submodular and non-decreasing function can be solved approximately by polynomial time algorithms with provable approximation factors. Algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" are our core methods for the detection of approximations of maximal budgeted hypergraph transversals and minimal soft hypergraph transversals, respectively. In each case, a transversal is found and the summary is formed by extracting and aggregating the associated sentences. Algorithm \"Detection of hypergraph transversals for text summarization\" is based on an adaptation of an algorithm presented in BIBREF30 for the maximization of submodular functions under a Knaspack constraint. It is our primary transversal-based summarization model, and we refer to it as the method of Transversal Summarization with Target Length (TL-TranSum algorithm). Algorithm \"Detection of hypergraph transversals for text summarization\" is an application of the algorithm presented in BIBREF20 for solving the submodular set covering problem. We refer to it as Transversal Summarization with Target Coverage (TC-TranSum algorithm). Both algorithms produce transversals by iteratively appending the node inducing the largest increase in the total weight of the covered hyperedges relative to the node weight. While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only. The main difference between algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" is the stopping criterion: in algorithm \"Detection of hypergraph transversals for text summarization\" , the approximate minimal soft transversal is obtained whenever the targeted hyperedge coverage is reached while algorithm \"Detection of hypergraph transversals for text summarization\" appends a given sentence to the approximate maximal budgeted transversal only if its addition does not make the summary length exceed the target length $L$ .", "FLOAT SELECTED: Table 2: Comparison with related graph- and hypergraph-based summarization systems." ], "highlighted_evidence": [ "While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. 
In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only.", "FLOAT SELECTED: Table 2: Comparison with related graph- and hypergraph-based summarization systems." ] } ], "annotation_id": [ "0410b17d6ef580742ecf5b4df095cad1de80f828" ], "worker_id": [ "74eea9f3f4f790836045fcc75d0b3f5156901499" ] } ] }
{ "caption": [ "Figure 1: Algorithm Chart.", "Figure 2: Example of hypergraph and minimal hypergraph transversal.", "Figure 3: ROUGE-2 and ROUGE-SU4 as a function of δ for λ = 0.4 and µ = 1.98.", "Figure 4: ROUGE-2 and ROUGE-SU4 as a function of λ for δ = 0.85 and µ = 1.98.", "Figure 5: Evolution of the ROUGE-SU4 score (left) and the summary length (right) as a function of the coverage parameter γ of TC-TranSum algorithm 4.3.", "Table 1: ROUGE-2 and ROUGE-SU4 scores for our TL-TranSum system compared to three other hypergraph models.", "Table 2: Comparison with related graph- and hypergraph-based summarization systems.", "Table 3: Comparison with DUC2005, DUC2006 and DUC2007 systems" ], "file": [ "6-Figure1-1.png", "11-Figure2-1.png", "18-Figure3-1.png", "19-Figure4-1.png", "20-Figure5-1.png", "20-Table1-1.png", "21-Table2-1.png", "22-Table3-1.png" ] }
2001.07209
Text-based inference of moral sentiment change
We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora. Our framework is based on the premise that language use can inform people's moral perception toward right or wrong, and we build our methodology by exploring moral biases learned from diachronic word embeddings. We demonstrate how a parameter-free model supports inference of historical shifts in moral sentiment toward concepts such as slavery and democracy over centuries at three incremental levels: moral relevance, moral polarity, and fine-grained moral dimensions. We apply this methodology to visualizing moral time courses of individual concepts and analyzing the relations between psycholinguistic variables and rates of moral sentiment change at scale. Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society.
{ "section_name": [ "Moral sentiment change and language", "Emerging NLP research on morality", "A three-tier modelling framework", "A three-tier modelling framework ::: Lexical data for moral sentiment", "A three-tier modelling framework ::: Models", "Historical corpus data", "Model evaluations", "Model evaluations ::: Moral sentiment inference of seed words", "Model evaluations ::: Alignment with human valence ratings", "Applications to diachronic morality", "Applications to diachronic morality ::: Moral change in individual concepts ::: Historical time courses.", "Applications to diachronic morality ::: Moral change in individual concepts ::: Prediction of human judgments.", "Applications to diachronic morality ::: Retrieval of morally changing concepts", "Applications to diachronic morality ::: Broad-scale investigation of moral change", "Discussion and conclusion", "Acknowledgments" ], "paragraphs": [ [ "People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention.", "The topic of moral sentiment has been thus far considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in infancy from the natural language processing (NLP) community (see overview in Section SECREF2).", "We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time.", "Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. 
The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text.", "Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change.", "The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology." ], [ "An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16.", "While there is emerging awareness of ethical issues in NLP BIBREF24, BIBREF25, work exploiting NLP techniques to study principles of moral sentiment change is scarce. Moreover, since morality is variable across cultures and time BIBREF12, BIBREF16, developing systems that capture the diachronic nature of moral sentiment will be a pivotal research direction. Our work leverages and complements existing research that finds implicit human biases from word embeddings BIBREF13, BIBREF14, BIBREF19 by developing a novel perspective on using NLP methodology to discover principles of moral sentiment change in human society." ], [ "Our framework treats the moral sentiment toward a concept at three incremental levels, as illustrated in Figure FIGREF3. First, we consider moral relevance, distinguishing between morally irrelevant and morally relevant concepts. At the second tier, moral polarity, we further split morally relevant concepts into those that are positively or negatively perceived in the moral domain. Finally, a third tier classifies these concepts into fine-grained categories of human morality.", "We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. 
MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories." ], [ "To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.", "To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words." ], [ "We propose and evaluate a set of probabilistic models to classify concepts in the three tiers of morality specified above. Our models exploit the semantic structure of word embeddings BIBREF29 to perform tiered moral classification of query concepts. In each tier, the model receives a query word embedding vector $\\mathbf {q}$ and a set of seed words for each class in that tier, and infers the posterior probabilities over the set of classes $c$ with which the query concept is associated.", "The seed words function as “labelled examples” that guide the moral classification of novel concepts, and are organized per classification tier as follows. In moral relevance classification, sets $\\mathbf {S}_0$ and $\\mathbf {S}_1$ contain the morally irrelevant and morally relevant seed words, respectively; for moral polarity, $\\mathbf {S}_+$ and $\\mathbf {S}_-$ contain the positive and negative seed words; and for fine-grained moral categories, $\\mathbf {S}_1, \\ldots , \\mathbf {S}_{10}$ contain the seed words for the 10 categories of MFT.
Then our general problem is to estimate $p(c\\,|\\,\\mathbf {q})$, where $\\mathbf {q}$ is a query vector and $c$ is a moral category in the desired tier.", "We evaluate the following four models:", "A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;", "A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;", "A $k$-Nearest Neighbors ($k$NN) model exploits local density estimation and classifies concepts according to the majority vote of the $k$ seed words closest to the query vector;", "A Kernel Density Estimation (KDE) model performs density estimation at a broader scale by considering the contribution of each seed word toward the total likelihood of each class, regulated by a bandwidth parameter $h$ that controls the sensitivity of the model to distance in embedding space.", "Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$." ], [ "To apply our models diachronically, we require a word embedding space that captures the meanings of words at different points in time and reflects changes pertaining to a particular word as diachronic shifts in a common embedding space.", "Following BIBREF30, we combine skip-gram word embeddings BIBREF29 trained on longitudinal corpora of English with rotational alignments of embedding spaces to obtain diachronic word embeddings that are aligned through time.", "We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:", "Google N-grams BIBREF31: a corpus of $8.5 \\times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.", "COHA BIBREF32: a smaller corpus of $4.1 \\times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009." ], [ "We evaluated our models in two ways: classification of moral seed words on all three tiers (moral relevance, polarity, and fine-grained categories), and correlation of model predictions with human judgments." ], [ "In this evaluation, we assessed the ability of our models to classify the seed words that compose our moral environment in a leave-one-out classification task. We performed the evaluation for all three classification tiers: 1) moral relevance, where seed words are split into morally relevant and morally irrelevant; 2) moral polarity, where moral seed words are split into positive and negative; 3) fine-grained categories, where moral seed words are split into the 10 MFT categories. In each test, we removed one seed word from the training set at a time to obtain cross-validated model predictions.", "Table TABREF14 shows classification accuracy for all models and corpora on each tier for the 1990–1999 period. We observe that all models perform substantially better than chance, confirming the efficacy of our methodology in capturing moral dimensions of words. 
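As a rough sketch of the parameter-free Centroid model listed above (the model retained for the remaining analyses), the snippet below computes the posterior p(c | q) as a softmax over negative Euclidean distances to each class centroid. It assumes seed words have already been mapped to embedding vectors; the exact normalization in the paper's Table TABREF2 may differ in scaling.

```python
import numpy as np

def centroid_posterior(q, seed_embeddings):
    """q: (d,) query embedding; seed_embeddings: dict class -> (n_c, d) array of seed vectors."""
    classes = list(seed_embeddings)
    # Distance from the query to the mean (centroid) of each class's seed words.
    dists = np.array([np.linalg.norm(q - seed_embeddings[c].mean(axis=0))
                      for c in classes])
    logits = -dists
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(classes, probs))

# Toy usage: a moral-polarity tier with 3-dimensional embeddings.
rng = np.random.default_rng(0)
seeds = {"positive": rng.normal(1.0, 0.1, (5, 3)),
         "negative": rng.normal(-1.0, 0.1, (5, 3))}
print(centroid_posterior(rng.normal(1.0, 0.1, 3), seeds))
```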
We also observe that models using word embeddings trained on Google N-grams perform better than those trained on COHA, which could be expected given the larger corpus size of the former.", "In the remaining analyses, we employ the Centroid model, which offers competitive accuracy and a simple, parameter-free specification." ], [ "We evaluated the approximate agreement between our methodology and human judgments using valence ratings, i.e., the degree of pleasantness or unpleasantness of a stimulus. Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling. However, we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.", "In this analysis, we took the valence ratings for the nearly 14,000 English nouns collected by BIBREF28 and, for each query word $q$, we generated a corresponding prediction of positive moral polarity from our model, $P(c_+\\,|\\,\\mathbf {q})$. Table TABREF16 shows the correlations between human valence ratings and predictions of positive moral polarity generated by models trained on each of our corpora. We observe that the correlations are significant, suggesting the ability of our methodology to capture relevant features of moral sentiment from text.", "In the remaining applications, we use the diachronic embeddings trained on the Google N-grams corpus, which enabled superior model performance throughout our evaluations." ], [ "We applied our framework in three ways: 1) evaluation of selected concepts in historical time courses and prediction of human judgments; 2) automatic detection of moral sentiment change; and 3) broad-scale study of the relations between psycholinguistic variables and historical change of moral sentiment toward concepts." ], [ "We applied our models diachronically to predict time courses of moral relevance, moral polarity, and fine-grained moral categories toward two historically relevant topics: slavery and democracy. By grounding our model in word embeddings for each decade and querying concepts at the three tiers of classification, we obtained the time courses shown in Figure FIGREF21.", "We note that these trajectories illustrate actual historical trends. Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies. On the other hand, predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War. We also observe changes in the dominant fine-grained moral categories, such as the perception of democracy as a fair concept, suggesting potential mechanisms behind the polarity changes and providing further insight into the public sentiment toward these concepts as evidenced by text." ], [ "We explored the predictive potential of our framework by comparing model predictions with human judgments of moral relevance and acceptability. 
We used data from the Pew Research Center's 2013 Global Attitudes survey BIBREF33, in which participants from 40 countries judged 8 topics such as abortion and homosexuality as one of “acceptable\", “unacceptable\", and “not a moral issue\".", "We compared human ratings with model predictions at two tiers: for moral relevance, we paired the proportion of “not a moral issue” human responses with irrelevance predictions $p(c_0\\,|\\,\\mathbf {q})$ for each topic, and for moral acceptability, we paired the proportion of “acceptable” responses with positive predictions $p(c_+\\,|\\,\\mathbf {q})$. We used 1990s word embeddings, and obtained predictions for two-word topics by querying the model with their averaged embeddings.", "Figure FIGREF23 shows plots of relevance and polarity predictions against survey proportions, and we observe a visible correspondence between model predictions and human judgments despite the difficulty of this task and limited number of topics." ], [ "Beyond analyzing selected concepts, we applied our framework predictively on a large repertoire of words to automatically discover the concepts that have exhibited the greatest change in moral sentiment at two tiers, moral relevance and moral polarity.", "We selected the 10,000 nouns with highest total frequency in the 1800–1999 period according to data from BIBREF30, restricted to words labelled as nouns in WordNet BIBREF34 for validation. For each such word $\\mathbf {q}$, we computed diachronic moral relevance scores $R_i = p(c_1\\,|\\,\\mathbf {q}), i=1,\\ldots ,20$ for the 20 decades in our time span. Then, we performed a linear regression of $R$ on $T = 1,\\ldots ,n$ and took the fitted slope as a measure of moral relevance change. We repeated the same procedure for moral polarity. Finally, we removed words with average relevance score below $0.5$ to focus on morally relevant retrievals.", "Table TABREF17 shows the words with steepest predicted change toward moral relevance, along with their predicted fine-grained moral categories in modern times (i.e., 1900–1999). Table TABREF18 shows the words with steepest predicted change toward the positive and negative moral poles. To further investigate the moral sentiment that may have led to such polarity shifts, we also show the predicted fine-grained moral categories of each word at its earliest time of predicted moral relevance and in modern times. Although we do not have access to ground truth for this application, these results offer initial insight into the historical moral landscape of the English language at scale." ], [ "In this application, we investigated the hypothesis that concept concreteness is inversely related to change in moral relevance, i.e., that concepts considered more abstract might become morally relevant at a higher rate than concepts considered more concrete. To test this hypothesis, we performed a multiple linear regression analysis on rate of change toward moral relevance of a large repertoire of words against concept concreteness ratings, word frequency BIBREF35, and word length BIBREF36.", "We obtained norms of concreteness ratings from BIBREF28. We collected the same set of high-frequency nouns as in the previous analysis, along with their fitted slopes of moral relevance change. 
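The retrieval procedure for morally changing concepts reduces to fitting a least-squares line to each word's per-decade relevance (or polarity) scores and ranking words by the fitted slope. A minimal sketch follows, assuming a hypothetical dictionary `relevance_by_word` mapping each noun to its 20 decade-level scores.

```python
import numpy as np

def relevance_slopes(relevance_by_word, min_mean=0.5):
    """Fit R_i ~ slope * T_i + intercept per word over decades T = 1..20."""
    slopes = {}
    for word, scores in relevance_by_word.items():
        scores = np.asarray(scores, dtype=float)
        if scores.mean() < min_mean:
            continue                     # keep only morally relevant retrievals
        T = np.arange(1, len(scores) + 1)
        slope, _intercept = np.polyfit(T, scores, deg=1)
        slopes[word] = slope
    # Steepest predicted change toward moral relevance first.
    return sorted(slopes.items(), key=lambda kv: kv[1], reverse=True)
```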
Since we were interested in moral relevance change within this large set of words, we restricted our analysis to those words whose model predictions indicate change in moral relevance, in either direction, from the 1800s to the 1990s.", "We performed a multiple linear regression under the following model:", "Here $\\rho (w)$ is the slope of moral relevance change for word $w$; $f(w$) is its average frequency; $l(w)$ is its character length; $c(w)$ is its concreteness rating; $\\beta _f$, $\\beta _l$, $\\beta _c$, and $\\beta _0$ are the corresponding factor weights and intercept, respectively; and $\\epsilon \\sim \\mathcal {N}(0, \\sigma )$ is the regression error term.", "Table TABREF27 shows the results of multiple linear regression. We observe that concreteness is a significant negative predictor of change toward moral relevance, suggesting that abstract concepts are more strongly associated with increasing moral relevance over time than concrete concepts. This significance persists under partial correlation test against the control factors ($p < 0.01$).", "We further verified the diachronic component of this effect in a random permutation analysis. We generated 1,000 control time courses by randomly shuffling the 20 decades in our data, and repeated the regression analysis to obtain a control distribution for each regression coefficient. All effects became non-significant under the shuffled condition, suggesting the relevance of concept concreteness for diachronic change in moral sentiment (see Supplementary Material)." ], [ "We presented a text-based framework for exploring the socio-scientific problem of moral sentiment change. Our methodology uses minimal parameters and exploits implicit moral biases learned from diachronic word embeddings to reveal the public's moral perception toward a large concept repertoire over a long historical period.", "Differing from existing work in NLP that treats moral sentiment as a flat classification problem BIBREF19, BIBREF20, our framework probes moral sentiment change at multiple levels and captures moral dynamics concerning relevance, polarity, and fine-grained categories informed by Moral Foundations Theory BIBREF12. We applied our methodology to the automated analyses of moral change both in individual concepts and at a broad scale, thus providing insights into psycholinguistic variables that associate with rates of moral change in the public.", "Our current work focuses on exploring moral sentiment change in English-speaking cultures. Future research should evaluate the appropriateness of the framework to probing moral change from a diverse range of cultures and linguistic backgrounds, and the extent to which moral sentiment change interacts and crisscrosses with linguistic meaning change and lexical coinage. Our work creates opportunities for applying natural language processing toward characterizing moral sentiment change in society." ], [ "We would like to thank Nina Wang, Nicola Lacerata, Dan Jurafsky, Paul Bloom, Dzmitry Bahdanau, and the Computational Linguistics Group at the University of Toronto for helpful discussion. We would also like to thank Ben Prystawski for his feedback on the manuscript. JX is supported by an NSERC USRA Fellowship and YX is funded through a SSHRC Insight Grant, an NSERC Discovery Grant, and a Connaught New Researcher Award." ] ] }
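A sketch of the multiple linear regression from the broad-scale analysis above, using statsmodels and assuming a pandas DataFrame `df` with one row per word and illustrative column names for the fitted relevance slope, frequency, length, and concreteness; the permutation control described in the text is omitted.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_relevance_regression(df: pd.DataFrame):
    # rho(w) = beta_f * f(w) + beta_l * l(w) + beta_c * c(w) + beta_0 + eps, fit by OLS.
    model = smf.ols("relevance_slope ~ frequency + length + concreteness", data=df)
    return model.fit()

# result = fit_relevance_regression(df)
# print(result.summary())   # inspect the sign and p-value of the concreteness coefficient
```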
{ "question": [ "Does the paper discuss previous models which have been applied to the same task?", "Which datasets are used in the paper?", "How does the parameter-free model work?", "How do they quantify moral relevance?", "Which fine-grained moral dimension examples do they showcase?", "Which dataset sources to they use to demonstrate moral sentiment through history?" ], "question_id": [ "31ee92e521be110b6a5a8d08cc9e6f90a3a97aae", "737397f66751624bcf4ef891a10b29cfc46b0520", "87cb19e453cf7e248f24b5f7d1ff9f02d87fc261", "5fb6a21d10adf4e81482bb5c1ec1787dc9de260d", "542a87f856cb2c934072bacaa495f3c2645f93be", "4fcc668eb3a042f60c4ce2e7d008e7923b25b4fc" ], "nlp_background": [ "two", "two", "two", "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "somewhat", "somewhat", "somewhat", "no", "no", "no" ], "search_query": [ "sentiment ", "sentiment ", "sentiment ", "Inference", "Inference", "Inference" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16." ], "highlighted_evidence": [ "An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16." ] } ], "annotation_id": [ "047ca89bb05cf86c1747c79e310917a8225aebf3" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Google N-grams\nCOHA\nMoral Foundations Dictionary (MFD)\n", "evidence": [ "To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. 
We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.", "To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.", "We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:", "Google N-grams BIBREF31: a corpus of $8.5 \\times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.", "COHA BIBREF32: a smaller corpus of $4.1 \\times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009." ], "highlighted_evidence": [ "To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text.", "To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.", "We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:\n\nGoogle N-grams BIBREF31: a corpus of $8.5 \\times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.\n\nCOHA BIBREF32: a smaller corpus of $4.1 \\times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009." ] } ], "annotation_id": [ "f17a2c6afd767ff5278c07164927c3c3a166ee40" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;", "A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Table TABREF2 specifies the formulation of each model. 
Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.", "A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;", "A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;" ], "highlighted_evidence": [ " Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.", "A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;", "A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;" ] } ], "annotation_id": [ "25a58a9ba9472e5de77ec1ddeba0ef18e0238b02" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "By complementing morally relevant seed words with a set of morally irrelevant seed words based on the notion of valence", "evidence": [ "To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words." ], "highlighted_evidence": [ "To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words." ] } ], "annotation_id": [ "e3c7a80666fff31b038cdb13330b9fa7a8b6c8d0" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. 
MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories." ], "highlighted_evidence": [ "We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories." ] } ], "annotation_id": [ "0c7b39838a3715c9f96f44796512eb886463cfe9" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "b1ca28830abd09b4dea845015b4b37b90b141847" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: Illustration of moral sentiment change over the past two centuries. Moral sentiment trajectories of three probe concepts, slavery, democracy, and gay, are shown in moral sentiment embedding space through 2D projection from Fisher’s discriminant analysis with respect to seed words from the classes of moral virtue, moral vice, and moral irrelevance. Parenthesized items represent moral categories predicted to be most strongly associated with the probe concepts. Gray markers represent the fine-grained centroids (or anchors) of these moral classes.", "Figure 2: Illustration of the three-tier framework that supports moral sentiment inference at different levels.", "Table 1: Summary of models for moral sentiment classification. Each model infers moral sentiment of a query word vector q based on moral classes c (at any of the three levels) represented by moral seed words Sc. E [Sc] is the mean vector of Sc; E [Sc, j] ,Var [Sc, j] refer to the mean and variance of Sc along the j-th dimension in embedding space. d is the number of embedding dimensions; and fN , fMN refer to the density functions of univariate and multivariate normal distributions, respectively.", "Table 2: Classification accuracy of moral seed words for moral relevance, moral polarity, and fine-grained moral categories based on 1990–1999 word embeddings for two independent corpora, Google N-grams and COHA.", "Table 3: Pearson correlations between model predicted moral sentiment polarities and human valence ratings.", "Table 4: Top 10 changing words towards moral relevance during 1800–2000, with model-inferred moral category and switching period. *, **, and *** denote p < 0.05, p < 0.001, and p < 0.0001, all Bonferroni-corrected.", "Table 5: Top 10 changing words towards moral positive (upper panel) and negative (lower panel) polarities, with model-inferred most representative moral categories during historical and modern periods and the switching periods. *, **, and *** denote p < 0.05, p < 0.001, and p < 0.0001, all Bonferroni-corrected for multiple tests.", "Figure 3: Moral sentiment time courses of slavery (left) and democracy (right) at each of the three levels, inferred by the Centroid model. Time courses at the moral relevance and polarity levels are in log odds ratios, and those for the fine-grained moral categories are represented by circles with sizes proportional to category probabilities.", "Figure 4: Model predictions against percentage of Pew respondents who selected “Not a moral concern” (left) or “Acceptable” (right), with lines of best fit and Pearson correlation coefficients r shown in the background.", "Table 6: Results from multiple regression that regresses rate of change in moral relevance against the factors of word frequency, length, and concreteness (n = 606)." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "5-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Figure3-1.png", "7-Figure4-1.png", "8-Table6-1.png" ] }
2001.10161
Bringing Stories Alive: Generating Interactive Fiction Worlds
World building forms the foundation of any task that requires narrative intelligence. In this work, we focus on procedurally generating interactive fiction worlds---text-based worlds that players "see" and "talk to" using natural language. Generating these worlds requires referencing everyday and thematic commonsense priors in addition to being semantically consistent, interesting, and coherent throughout. Using existing story plots as inspiration, we present a method that first extracts a partial knowledge graph encoding basic information regarding world structure such as locations and objects. This knowledge graph is then automatically completed utilizing thematic knowledge and used to guide a neural language generation model that fleshes out the rest of the world. We perform human participant-based evaluations, testing our neural model's ability to extract and fill-in a knowledge graph and to generate language conditioned on it against rule-based and human-made baselines. Our code is available at this https URL.
{ "section_name": [ "Introduction", "Related Work", "World Generation", "World Generation ::: Knowledge Graph Construction", "World Generation ::: Knowledge Graph Construction ::: Neural Graph Construction", "World Generation ::: Knowledge Graph Construction ::: Rule-Based Graph Construction", "World Generation ::: Description Generation", "World Generation ::: Description Generation ::: Neural Description Generation", "World Generation ::: Description Generation ::: Rules-Based Description Generation", "Evaluation", "Evaluation ::: Knowledge Graph Construction Evaluation", "Evaluation ::: Full Game Evaluation", "Conclusion" ], "paragraphs": [ [ "Interactive fictions—also called text-adventure games or text-based games—are games in which a player interacts with a virtual world purely through textual natural language—receiving descriptions of what they “see” and writing out how they want to act; an example can be seen in Figure FIGREF2. Interactive fiction games are often structured as puzzles, or quests, set within the confines of a given game world. Interactive fictions have been adopted as a test-bed for real-time game playing agents BIBREF0, BIBREF1, BIBREF2. Unlike other, graphical games, interactive fictions test agents' abilities to infer the state of the world through communication and to indirectly affect change in the world through language. Interactive fictions are typically modeled after real or fantasy worlds; commonsense knowledge is an important factor in successfully playing interactive fictions BIBREF3, BIBREF4.", "In this paper we explore a different challenge for artificial intelligence: automatically generating text-based virtual worlds for interactive fictions. A core component of many narrative-based tasks—everything from storytelling to game generation—is world building. The world of a story or game defines the boundaries of where the narrative is allowed and what the player is allowed to do. There are four core challenges to world generation: (1) Commonsense knowledge: the world must reference priors that the player possesses so that players can make sense of the world and build expectations on how to interact with it. This is especially true in interactive fictions where the world is presented textually, because many details of the world must necessarily be left out (e.g., the pot is on a stove; kitchens are found in houses) that might otherwise be literal in a graphical virtual world. (2) Thematic knowledge: interactive fictions usually involve a theme or genre that comes with its own expectations. For example, light speed travel is plausible in sci-fi worlds but not realistic in the real world. (3) Coherence: the world must not appear to be a random assortment of locations. (4) Natural language: the descriptions of the rooms as well as the permissible actions must be expressed as text, implying that the system has natural language generation capability.", "Because worlds are conveyed entirely through natural language, the potential output space for possible generated worlds is combinatorially large. To constrain this space and to make it possible to evaluate generated worlds, we present an approach which makes use of existing stories, building on the worlds presented in them but leaving enough room for the worlds to be unique.
Specifically, we take a story such as Sherlock Holmes or Rapunzel—a linear reading experience—and extract the description of the world the story is set in to make an interactive world the player can explore.", "Our method first extracts a partial, potentially disconnected knowledge graph from the story, encoding information regarding locations, characters, and objects in the form of $\\langle entity,relation,entity\\rangle $ triples. Relations between these types of entities as well as their properties are captured in this knowledge graph. However, stories often do not explicitly contain all the information required to fully fill out such a graph. A story may mention that there is a sword stuck in a stone but not what you can do with the sword or where it is in relation to everything else. Our method fills in missing relation and affordance information using thematic knowledge gained from training on stories in a similar genre. This knowledge graph is then used to guide the text description generation process for the various locations, characters, and objects. The game is then assembled on the basis of the knowledge graph and the corresponding generated descriptions.", "We have two major contributions. (1) A neural model and a rules-based baseline for each of the tasks described above. The phases are graph extraction and completion, followed by description generation and game formulation. Each of these phases is relatively distinct and utilizes its own models. (2) A human subject study for comparing the neural model and variations on it to the rules-based and human-made approaches. We perform two separate human subject studies—one for the first phase of knowledge graph construction and another for the overall game creation process—testing specifically for coherence, interestingness, and the ability to maintain a theme or genre." ], [ "There has been a slew of recent work in developing agents that can play text games BIBREF0, BIBREF5, BIBREF1, BIBREF6. BIBREF7 in particular use knowledge graphs as state representations for game-playing agents. BIBREF8 propose QAit, a set of question answering tasks framed as text-based or interactive fiction games. QAit focuses on helping agents learn procedural knowledge through interaction with a dynamic environment. These works all focus on agents that learn to play a given set of interactive fiction games as opposed to generating them.", "Scheherazade BIBREF9 is a system that learns a plot graph based on stories written by crowdsourcing the task of writing short stories. The learned plot graph contains details relevant to ensuring story coherence. It includes plot events, temporal precedence, and mutual exclusion relations. Scheherazade-IF BIBREF10 extends the system to generate choose-your-own-adventure style interactive fictions in which the player chooses from prescribed options. BIBREF11 explore a method of creating interactive narratives revolving around locations, wherein sentences are mapped to a real-world GPS location from a corpus of sentences belonging to a certain genre. Narratives are made by chaining together sentences selected based on the player's current real-world location.
In contrast to these models, our method generates a parser-based interactive fiction in which the player types in a textual command, allowing for greater expressiveness.", "BIBREF12 define the problem of procedural content generation in interactive fiction games in terms of the twin considerations of world and quest generation and focus on the latter. They present a system in which quest content is first generated by learning from a corpus and then grounded into a given interactive fiction world. The work in this paper focuses on the world generation problem glossed over in the prior work. Thus these two systems can be seen as complementary.", "Light BIBREF13 is a crowdsourced dataset of grounded text-adventure game dialogues. It contains information regarding locations, characters, and objects set in a fantasy world. The authors demonstrate that the supervised training of transformer-based models lets us generate contextually relevant dialog, actions, and emotes. Most in line with the spirit of this paper, BIBREF14 leverage Light to generate worlds for text-based games. They train a neural network based model using Light to compositionally arrange locations, characters, and objects into an interactive world. Their model is tested using a human subject study against other machine learning based algorithms with respect to the cohesiveness and diversity of generated worlds. Our work, in contrast, focuses on extracting the information necessary for building interactive worlds from existing story plots." ], [ "World generation happens in two phases. In the first phase, a partial knowledge graph is extracted from a story plot and then filled in using thematic commonsense knowledge. In the second phase, the graph is used as the skeleton to generate a full interactive fiction game—generating textual descriptions or “flavortext” for rooms and embedded objects. We present a novel neural approach in addition to a rule-guided baseline for each of these phases in this section." ], [ "The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses a neural question-answering technique to extract relations from a story text. The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4." ], [ "While many neural models already exist that perform similar tasks such as named entity extraction and part of speech tagging, they often come at the cost of large amounts of specialized labeled data suited for that task. We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task-dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.", "The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by asking the QA model questions such as “Who is a character in the story?”.
BIBREF16 have shown that the phrasing of questions given to a QA model is important, and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is also trained to output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking the QA model a question and masking out the most likely answer output at the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.", "The next step is graph construction. Interactive fiction worlds are typically structured as trees, i.e. there are no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model, picking a random starting location $x$ from the set of previously extracted vertices and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that has the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model: the probability that vertices $x,u$ are related is taken to be $\phi (a,u)=\sum _{w \in a \cap u} p(w)$, the sum of the individual token probabilities $p(w)$ of all the tokens $w$ that overlap between the answer $a$ from the QA model and $u$." ], [ "We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines cutting-edge ideas from several existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tool. For a given sentence, OpenIE5 generates multiple triples in the format of $\langle entity, relation, entity\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a particular location.", "As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.", "The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain location information explicit enough for OpenIE5 to mark it in the triples, most of them do not.
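A minimal Python sketch of the iterative QA-driven vertex extraction and the overlap-based relation scoring described above; the qa callable, its return format, and the question templates are illustrative assumptions rather than the exact AskBERT implementation:

def extract_vertices(story, question, qa, max_entities=20):
    """Iteratively ask the same question, masking out each previous answer.
    qa(question, context) is assumed to return (answer_tokens, token_probs),
    with an empty answer playing the role of the <no-answer> token."""
    context, vertices = story, []
    for _ in range(max_entities):
        answer_tokens, _ = qa(question, context)
        if not answer_tokens:                        # <no-answer> is now most likely
            break
        answer = " ".join(answer_tokens)
        vertices.append(answer)
        context = context.replace(answer, "[MASK]")  # mask the extracted span
    return vertices

def relation_score(answer_tokens, token_probs, vertex):
    """Sum the token probabilities of answer tokens that overlap with the vertex."""
    vertex_tokens = set(vertex.split())
    return sum(p for tok, p in zip(answer_tokens, token_probs) if tok in vertex_tokens)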
We therefore make the assumption that the location remains the same for all triples extracted between sentences where locations are explicitly mentioned. For example, if $location A$ appears in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph." ], [ "The second phase involves using the constructed knowledge graph to generate textual descriptions of the entities we have extracted, also known as flavortext. This involves generating descriptions of what a player “sees” when they enter a location and short blurbs for each object and character. These descriptions need to not only be faithful to the information present in the knowledge graph and the overall story plot but also contain flavor and be interesting for the player." ], [ "Here, we approach the problem of description generation by taking inspiration from conditional transformer-based generation methods BIBREF20. Our approach is outlined in Figure FIGREF11 and an example description is shown in Figure FIGREF2. For any given entity in the story, we first locate it in the story plot and then construct a prompt which consists of the entire story up to and including the sentence in which the entity is first mentioned, followed by a question asking to describe that entity. With respect to prompts, we found that more direct methods such as question-answering were more consistent than open-ended sentence completion. For example, “Q: Who is the prince? A:” often produced descriptions that were more faithful to the information already present about the prince in the story than “You see the prince. He is/looks”. For our transformer-based generation, we use a pre-trained 355M GPT-2 model BIBREF21 finetuned on a corpus of plot summaries collected from Wikipedia. The plots used for finetuning are tailored specifically to the genre of the story in order to provide more relevant generation for the target genre. Additional details regarding the datasets used are provided in Section SECREF4. This method strikes a balance between knowledge graph verbalization techniques, which often lack “flavor”, and open-ended generation, which struggles to maintain semantic coherence." ], [ "In the rule-based approach, we utilized the templates from the built-in text game generator of TextWorld BIBREF1 to generate the descriptions for our graphs. TextWorld is an open-source library that provides a way to generate text-game learning environments for training reinforcement learning agents using pre-built grammars.", "Two major templates involved here are the Room Intro Templates and Container Description Templates from TextWorld, responsible for generating descriptions of locations and blurbs for objects/characters respectively. The location and object/character information are taken from the knowledge graph constructed previously.", "Example of a Room Intro Template: “This might come as a shock to you, but you've just $\#entered\#$ a <$location$-$name$>”", "Example of a Container Description Template: “The <$location$-$name$> $\#contains\#$ <$object/person$-$name$>”", "Each token surrounded by $\#$ signs can be expanded using a select set of terminal tokens. For instance, $\#entered\#$ could be filled with any of the following phrases: entered; walked into; fallen into; moved into; stumbled into; come into.
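A small Python sketch of this kind of template expansion; the template strings follow the examples above, while the terminal-token dictionary and helper function are illustrative assumptions rather than TextWorld's actual grammar API:

import random

TERMINALS = {
    "#entered#": ["entered", "walked into", "fallen into",
                  "moved into", "stumbled into", "come into"],
    "#contains#": ["contains", "holds"],   # assumed expansions for illustration
}

ROOM_INTRO = "This might come as a shock to you, but you've just #entered# a {location}"
CONTAINER  = "The {location} #contains# {entity}"

def expand(template, **slots):
    """Fill the name slots from the knowledge graph, then sample terminal tokens."""
    text = template.format(**slots)
    for token, choices in TERMINALS.items():
        text = text.replace(token, random.choice(choices))
    return text

print(expand(ROOM_INTRO, location="castle"))
print(expand(CONTAINER, location="castle", entity="sword"))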
Additional prefixes, suffixes and adjectives were added to increase the relative variety of descriptions. Unlike the neural methods, the rule-based approach is not able to generate detailed and flavorful descriptions of the properties of the locations/objects/characters. By virtue of the templates, however, it is much better at maintaining consistency with the information contained in the knowledge graph." ], [ "We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games—including description generation and game assembly, which can't easily be isolated from graph construction—generated by different methods. This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. Both studies are performed across two genres: mystery and fairy-tales. This is done in part to test the relative effectiveness of our approach across different genres with varying thematic commonsense knowledge. The dataset used was compiled from story summaries that were scraped from Wikipedia via a recursive crawling bot. The bot searched pages both for plot sections and for links to other potential stories. From this process, 695 fairy-tales and 536 mystery stories were compiled from two categories: novels and short stories. We note that the mysteries did not often contain many fantasy elements, i.e. they consisted of mysteries set in our world such as Sherlock Holmes, while the fairy-tales were much more removed from reality. Details regarding how each of the studies was conducted and the corresponding setup are presented below." ], [ "We first select a subset of 10 stories randomly from each genre and then extract a knowledge graph using three different models. Each participant is presented with the three graphs extracted from a single story in each genre and then asked to rank them on the basis of how coherent they are and how well the graphs match the genre. The graphs resemble the one shown in Figure FIGREF4 and are presented to the participant sequentially. The exact order of the graphs and genres was also randomized to mitigate any potential latent correlations. Overall, this study had a total of 130 participants. This ensures that, on average, graphs from every story were seen by 13 participants.", "In addition to the neural AskBERT and rules-based methods, we also test a variation of the neural model which we dub the “random” approach. The method of vertex extraction remains identical to the neural method, but we connect the vertices randomly instead of selecting the most confident relations according to the QA model. We initialize the graph with a starting location entity. Then, we randomly sample from the vertex set and connect each sampled vertex to a randomly sampled location in the graph until every vertex has been connected. This ablation in particular is designed to test the ability of our neural model to predict relations between entities. It lets us observe how accurately linking related vertices affects each of the metrics that we test for.
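A minimal sketch of the “random” graph-assembly ablation described above; the vertex lists and edge labels are placeholders, and the neural model would instead keep the highest-probability relation returned by the QA model:

import random

def random_graph(locations, entities):
    """Attach every remaining vertex to a randomly chosen location already placed."""
    placed = [locations[0]]                      # starting location entity
    graph = []
    for loc in locations[1:]:
        graph.append((random.choice(placed), "next to", loc))
        placed.append(loc)
    for ent in entities:                         # characters and objects
        graph.append((random.choice(placed), "has", ent))
    return graph

print(random_graph(["castle", "forest", "village"], ["sword", "prince"]))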
For a fair comparison between the graphs produced by different approaches, we randomly removed some of the nodes and edges from the initial graphs so that the maximum number of locations per graph and the maximum number of objects/people per location in each story genre are the same.", "The results are shown in Table TABREF20. We show the median rank of each of the models for both questions across the genres. Ranked data is generally closely interrelated, so we perform Friedman's test between the three models to validate that the results are statistically significant. This is presented as the $p$-value in the table (asterisks indicate significance at $p<0.05$). In cases where we make comparisons between specific pairs of models, we additionally perform the Mann-Whitney U test to ensure that the rankings differed significantly.", "In the mystery genre, the rules-based method was often ranked first in terms of genre resemblance, followed by the neural and random models. This particular result was not statistically significant, however, likely indicating that all the models performed approximately equally in this category. The neural approach was deemed to be the most coherent, followed by the rules and random models. For the fairy-tales, the neural model ranked higher on both of the questions asked of the participants. In this genre, the random neural model also performed better than the rules-based approach.", "Tables TABREF18 and TABREF19 show the statistics of the constructed knowledge graphs in terms of vertices and edges. We see that the rules-based graph construction yields a lower number of locations, characters, and relations between entities but far more objects in general. The greater number of objects is likely due to the rules-based approach being unable to correctly identify locations and characters. The gap between the methods is less pronounced in the mystery genre as opposed to the fairy-tales; in fact, the rules-based graphs have more relations than the neural ones. The random and neural models have the same number of entities in all categories by construction, but the random model in general has lower variance on the number of relations found. In this case as well, the variance is lower for mystery as opposed to fairy-tales. When taken in the context of the results in Table TABREF20, this appears to indicate that leveraging thematic commonsense in the form of AskBERT for graph construction directly results in graphs that are more coherent and maintain genre more easily. This is especially true in the case of the fairy-tales, where the thematic and everyday commonsense diverge more than in the case of the mysteries." ], [ "This participant study was designed to test the overall game formulation process encompassing both phases described in Section SECREF3. A single story from each genre was chosen by hand from the 10 stories used for the graph evaluation process. From the knowledge graphs for this story, we generate descriptions using the neural, rules, and random approaches described previously. Additionally, we introduce a human-authored game for each story here to provide an additional benchmark. The selected author was familiar with text-adventure games in general as well as the genres of detective mystery and fairy tale. To ensure a fair comparison, we ensure that the maximum number of locations and the maximum number of characters/objects per location matched the other methods.
After setting general format expectations, the author read the selected stories and constructed knowledge graphs in a corresponding three-step process: identifying the $n$ most important entities in the story, mapping positional relationships between entities, and then synthesizing flavor text for the entities based on their locations, the overall story plot, and background topic knowledge.", "Once the knowledge graph and associated descriptions are generated for a particular story, they are then automatically turned into a fully playable text-game using the text game engine Evennia. Evennia was chosen for its flexibility and customization, as well as a convenient web client for end-user testing. The data structures were translated into builder commands within Evennia that constructed the various layouts, flavor text, and rules of the game world. Users were placed in one “room” out of the different world locations within the game they were playing, and asked to explore the game world that was available to them. Users achieved this by moving between rooms and investigating objects. Each time a new room was entered or an object investigated, the player's total number of explored entities would be displayed as their score.", "Each participant was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criterion for each game is to collect half the total score possible in the game, i.e. to explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized, as in the graph evaluation, to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the game created by the neural model and one created by one of the other approaches, this gave us on average 13 players per baseline approach in the mystery genre and 12 for fairy-tales.", "The summary of the results of the full game study is shown in Table TABREF23. As the comparisons made in this study are all pairwise between our neural model and one of the baselines, they are presented in terms of what percentage of participants prefer the baseline game over the neural game. Once again, as this is highly interrelated ranked data, we perform the Mann-Whitney U test between each of the pairs to ensure that the rankings differed significantly. This is also indicated in the table.", "In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random approaches. The human-made game outperforms them all. A significant exception is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are generally similar, with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre.
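A short sketch of how the ranking-significance tests used in both studies could be run with SciPy; the rank arrays below are hypothetical placeholders, not the study's data:

from scipy.stats import friedmanchisquare, mannwhitneyu

# Hypothetical coherence ranks (1 = best) from five participants.
neural = [1, 1, 2, 1, 1]
rules  = [2, 3, 1, 2, 2]
rand   = [3, 2, 3, 3, 3]

stat, p = friedmanchisquare(neural, rules, rand)   # omnibus test over the three models
print(f"Friedman p = {p:.3f}")

u, p_pair = mannwhitneyu(neural, rules)            # pairwise follow-up comparison
print(f"Mann-Whitney U p = {p_pair:.3f}")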
We see also that the neural game is as coherent as the human-made game.", "As in the previous study, we hypothesize that this is likely due to the rules-based approach being more suited to the mystery genre, which is often more mundane and contains fewer fantastical elements. By extension, we can say that thematic commonsense in fairy-tales has less overlap with everyday commonsense than in mundane mysteries. This has a few implications, one of which is that this theme-specific information is unlikely to have been seen by OpenIE5 before. This is indicated by the relatively improved performance of the rules-based model in this genre in terms of both interestingness and coherence. The genre difference can also be observed in terms of the performance of the random model. This model is also lacking when compared to our neural model across all the questions asked, especially in the fairy-tale setting. This appears to imply that filling in gaps in the knowledge graph using thematically relevant information, such as with AskBERT, results in more interesting and coherent descriptions and games, especially in settings where the thematic commonsense diverges from everyday commonsense." ], [ "Procedural world generation systems are required to be semantically consistent, comply with thematic and everyday commonsense understanding, and maintain overall interestingness. We describe an approach that transforms a linear reading experience in the form of a story plot into an interactive narrative experience. Our method, AskBERT, extracts and fills in a knowledge graph using thematic commonsense and then uses it as a skeleton to flesh out the rest of the world. A key insight from our human participant study is that the ability to construct a thematically consistent knowledge graph is critical to overall perceptions of coherence and interestingness, particularly when the theme diverges from everyday commonsense understanding." ] ] }
{ "question": [ "How well did the system do?", "How is the information extracted?" ], "question_id": [ "c180f44667505ec03214d44f4970c0db487a8bae", "76d62e414a345fe955dc2d99562ef5772130bc7e" ], "nlp_background": [ "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "the neural approach is generally preferred by a greater percentage of participants than the rules or random", "human-made game outperforms them all" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games—including description generation and game assembly, which can't easily be isolated from graph construction—generated by different methods. This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. Both studies are performed across two genres: mystery and fairy-tales. This is done in part to test the relative effectiveness of our approach across different genres with varying thematic commonsense knowledge. The dataset used was compiled via story summaries that were scraped from Wikipedia via a recursive crawling bot. The bot searched pages for both for plot sections as well as links to other potential stories. From the process, 695 fairy-tales and 536 mystery stories were compiled from two categories: novels and short stories. We note that the mysteries did not often contain many fantasy elements, i.e. they consisted of mysteries set in our world such as Sherlock Holmes, while the fairy-tales were much more removed from reality. Details regarding how each of the studies were conducted and the corresponding setup are presented below.", "Each participant was was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criteria for each game is collect half the total score possible in the game, i.e. explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the neural model created game and one from each of the other approaches—this gave us 13 on average for the other approaches in the mystery genre and 12 for fairy-tales.", "FLOAT SELECTED: Table 4: Results of the full game evaluation participant study. *Indicates statistical significance (p < 0.05).", "In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random. The human-made game outperforms them all. 
A significant exception to is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are in general similar with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre. We see also that the neural game is as coherent as the human-made game." ], "highlighted_evidence": [ "We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games—including description generation and game assembly, which can't easily be isolated from graph construction—generated by different methods. This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. Both studies are performed across two genres: mystery and fairy-tales.", "Each participant was was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criteria for each game is collect half the total score possible in the game, i.e. explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the neural model created game and one from each of the other approaches—this gave us 13 on average for the other approaches in the mystery genre and 12 for fairy-tales.", "FLOAT SELECTED: Table 4: Results of the full game evaluation participant study. *Indicates statistical significance (p < 0.05).", "In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random. The human-made game outperforms them all. A significant exception to is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are in general similar with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre. We see also that the neural game is as coherent as the human-made game." ] } ], "annotation_id": [ "e3a0a331d4262d971d51441992bba0ff3dbfcc84" ], "worker_id": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "neural question-answering technique to extract relations from a story text", "OpenIE5, a commonly used rule-based information extraction technique" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. 
The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4.", "While many neural models already exist that perform similar tasks such as named entity extraction and part of speech tagging, they often come at the cost of large amounts of specialized labeled data suited for that task. We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.", "The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by using asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking QA model a question and masking out the most likely answer outputted on the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.", "The next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set by one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model picking a random starting location $x$ from the set of vertices previously extracted.and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that contains the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model. The probability that vertices $x,u$ are related:", "We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines several cutting-edge ideas from several existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tools. 
For a given sentence, OpenIE5 generates multiple triples in the format of $\\langle entity, relation, entity\\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location.", "As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into the OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.", "The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph." ], "highlighted_evidence": [ "The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4.", "We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.", "The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by using asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking QA model a question and masking out the most likely answer outputted on the previous step. 
This process continues until the <$no$-$answer$> token becomes the most likely answer.", "The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by using asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking QA model a question and masking out the most likely answer outputted on the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.\n\nThe next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set by one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model picking a random starting location $x$ from the set of vertices previously extracted.and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that contains the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model.", "We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines several cutting-edge ideas from several existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tools. For a given sentence, OpenIE5 generates multiple triples in the format of $\\langle entity, relation, entity\\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location.\n\nAs in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into the OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.\n\nThe graph is constructed by linking the set of triples on the basis of the location they belong to. 
While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph." ] } ], "annotation_id": [ "04bebc96449f09fef78fb3a5bf8b4e9de9dcd3e4" ], "worker_id": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ] } ] }
{ "caption": [ "Figure 1: Example player interaction in the deep neural generated mystery setting.", "Figure 2: Example knowledge graph constructed by AskBERT.", "Figure 3: Overall AskBERT pipeline for graph construction.", "Figure 4: Overview for neural description generation.", "Table 3: Results of the knowledge graph evaluation study.", "Table 2: Edge and degree statistics: Average edge count , average degree count, and degree standard deviation of the graphs per genre.", "Table 1: Vertex statistics: Average vertex count by type per genre. The random model has the same vertex statistics as the neural model.", "Table 4: Results of the full game evaluation participant study. *Indicates statistical significance (p < 0.05)." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "5-Figure4-1.png", "5-Table3-1.png", "5-Table2-1.png", "5-Table1-1.png", "6-Table4-1.png" ] }
1909.00279
Generating Classical Chinese Poems from Vernacular Chinese
Classical Chinese poetry is a jewel in the treasure house of Chinese culture. Previous poem generation models only allow users to employ keywords to influence the meaning of generated poems, leaving control of the generation to the model. In this paper, we propose a novel task of generating classical Chinese poems from vernacular Chinese, which allows users to have more control over the semantics of generated poems. We adapt the approach of unsupervised machine translation (UMT) to our task. We use segmentation-based padding and reinforcement learning to address under-translation and over-translation respectively. According to experiments, our approach significantly improves perplexity and BLEU compared with typical UMT models. Furthermore, we explored guidelines on how to write the input vernacular to generate better poems. Human evaluation showed our approach can generate high-quality poems which are comparable to amateur poems.
{ "section_name": [ "Introduction", "Related Works", "Model ::: Main Architecture", "Model ::: Addressing Under-Translation and Over-Translation", "Model ::: Addressing Under-Translation and Over-Translation ::: Under-Translation", "Model ::: Addressing Under-Translation and Over-Translation ::: Over-Translation", "Experiment", "Experiment ::: Datasets", "Experiment ::: Evaluation Metrics", "Experiment ::: Baselines", "Experiment ::: Reborn Poems: Generating Poems from Vernacular Translations", "Experiment ::: Interpoetry: Generating Poems from Various Literature Forms", "Experiment ::: Human Discrimination Test", "Discussion", "Conclusion" ], "paragraphs": [ [ "During thousands of years, millions of classical Chinese poems have been written. They contain ancient poets' emotions such as their appreciation for nature, desiring for freedom and concerns for their countries. Among various types of classical poetry, quatrain poems stand out. On the one hand, their aestheticism and terseness exhibit unique elegance. On the other hand, composing such poems is extremely challenging due to their phonological, tonal and structural restrictions.", "Most previous models for generating classical Chinese poems BIBREF0, BIBREF1 are based on limited keywords or characters at fixed positions (e.g., acrostic poems). Since users could only interfere with the semantic of generated poems using a few input words, models control the procedure of poem generation. In this paper, we proposed a novel model for classical Chinese poem generation. As illustrated in Figure FIGREF1, our model generates a classical Chinese poem based on a vernacular Chinese paragraph. Our objective is not only to make the model generate aesthetic and terse poems, but also keep rich semantic of the original vernacular paragraph. Therefore, our model gives users more control power over the semantic of generated poems by carefully writing the vernacular paragraph.", "Although a great number of classical poems and vernacular paragraphs are easily available, there exist only limited human-annotated pairs of poems and their corresponding vernacular translations. Thus, it is unlikely to train such poem generation model using supervised approaches. Inspired by unsupervised machine translation (UMT) BIBREF2, we treated our task as a translation problem, namely translating vernacular paragraphs to classical poems.", "However, our work is not just a straight-forward application of UMT. In a training example for UMT, the length difference of source and target languages are usually not large, but this is not true in our task. Classical poems tend to be more concise and abstract, while vernacular text tends to be detailed and lengthy. Based on our observation on gold-standard annotations, vernacular paragraphs usually contain more than twice as many Chinese characters as their corresponding classical poems. Therefore, such discrepancy leads to two main problems during our preliminary experiments: (1) Under-translation: when summarizing vernacular paragraphs to poems, some vernacular sentences are not translated and ignored by our model. Take the last two vernacular sentences in Figure FIGREF1 as examples, they are not covered in the generated poem. (2) Over-translation: when expanding poems to vernacular paragraphs, certain words are unnecessarily translated for multiple times. 
For example, the last sentence in the generated poem of Figure FIGREF1, “as green as sapphire”, is back-translated as “as green as as as sapphire”.", "Inspired by the phrase segmentation schema of classical poems BIBREF3, we propose phrase-segmentation-based padding to handle under-translation. By padding poems based on the phrase segmentation custom of classical poems, our model better aligns poems with their corresponding vernacular paragraphs and thereby lowers the risk of under-translation. Inspired by Paulus2018ADR, we design a reinforcement learning policy that penalizes the model if it generates vernacular paragraphs with too many repeated words. Experiments show our method can effectively decrease the occurrence of over-translation.", "The contributions of our work are threefold:", "(1) We propose a novel task of unsupervised Chinese poem generation from vernacular text.", "(2) We propose using phrase-segmentation-based padding and reinforcement learning to address two important problems in this task, namely under-translation and over-translation.", "(3) Through extensive experiments, we demonstrate the effectiveness of our models and explore how to write the input vernacular to inspire better poems. Human evaluation shows our models are able to generate high-quality poems, which are comparable to amateur poems." ], [ "Classical Chinese Poem Generation Most previous works in classical Chinese poem generation focus on improving the semantic coherence of generated poems. Based on LSTM, Zhang and Lapata Zhang2014ChinesePG proposed generating poem lines incrementally by taking into account the history of what has been generated so far. Yan Yan2016iPA proposed a polishing generation schema, in which each poem line is generated incrementally and iteratively refined line by line. Wang et al. Wang2016ChinesePG and Yi et al. Yi2018ChinesePG proposed models to keep the generated poems coherent and semantically consistent with the user's intent. There is also research that focuses on other aspects of poem generation. Yang et al. Yang2018StylisticCP explored increasing the diversity of generated poems using an unsupervised approach. Xu et al. Xu2018HowII explored generating Chinese poems from images. While most previous works generate poems based on topic words, our work targets a novel task: generating poems from vernacular Chinese paragraphs.", "Unsupervised Machine Translation Compared with supervised machine translation approaches BIBREF4, BIBREF5, unsupervised machine translation BIBREF6, BIBREF2 does not rely on human-labeled parallel corpora for training. This technique has been shown to greatly improve the performance of low-resource language translation systems (e.g. English-Urdu translation). The unsupervised machine translation framework has also been applied to various other tasks, e.g. image captioning BIBREF7, text style transfer BIBREF8, speech-to-text translation BIBREF9 and clinical text simplification BIBREF10. The UMT framework makes it possible to apply neural models to tasks where limited human-labeled data is available. However, in previous tasks that adopt the UMT framework, the abstraction levels of the source and target languages are the same. This is not the case for our task.", "Under-Translation & Over-Translation Both are troublesome problems for neural sequence-to-sequence models. Most previous related research adopts the coverage mechanism BIBREF11, BIBREF12, BIBREF13.
However, as far as we know, there has been no successful attempt at applying the coverage mechanism to transformer-based models BIBREF14." ], [ "We cast our poem generation task as an unsupervised machine translation problem. As illustrated in Figure FIGREF1, based on the recently proposed UMT framework BIBREF2, our model is composed of the following components:", "Encoder $\textbf {E}_s$ and decoder $\textbf {D}_s$ for vernacular paragraph processing", "Encoder $\textbf {E}_t$ and decoder $\textbf {D}_t$ for classical poem processing", "where $\textbf {E}_s$ (or $\textbf {E}_t$) takes in a vernacular paragraph (or a classical poem) and converts it into a hidden representation, and $\textbf {D}_s$ (or $\textbf {D}_t$) takes in the hidden representation and converts it into a vernacular paragraph (or a poem). Our model relies on a vernacular text corpus $\textbf {\emph {S}}$ and a poem corpus $\textbf {\emph {T}}$. We denote $S$ and $T$ as instances in $\textbf {\emph {S}}$ and $\textbf {\emph {T}}$ respectively.", "The training of our model relies on three procedures, namely parameter initialization, language modeling and back-translation. We give a detailed introduction to each procedure below.", "Parameter initialization As both vernacular text and classical poems use Chinese characters, we initialize the character embeddings of both languages in one common space; the same character in the two languages shares the same embedding. This initialization helps associate characters with their plausible translations in the other language.", "Language modeling This procedure helps the model generate text that conforms to a certain language. A well-trained language model is able to detect and correct minor lexical and syntactic errors. We train the language models for both vernacular text and classical poems by minimizing the denoising loss $\mathcal {L}^{lm} = \mathbb {E}_{S\sim \textbf {\emph {S}}}\big [-\log P_{s\rightarrow s}(S\mid S_N)\big ] + \mathbb {E}_{T\sim \textbf {\emph {T}}}\big [-\log P_{t\rightarrow t}(T\mid T_N)\big ]$, where $S_N$ (or $T_N$) is generated by adding noise (dropping, swapping or blanking a few words) to $S$ (or $T$).", "Back-translation Based on a vernacular paragraph $S$, we generate a poem $T_S$ using $\textbf {E}_s$ and $\textbf {D}_t$; we then translate $T_S$ back into a vernacular paragraph $S_{T_S} = \textbf {D}_s(\textbf {E}_t(T_S))$. Here, $S$ can be used as the gold standard for the back-translated paragraph $S_{T_S}$. In this way, we turn the unsupervised translation into a supervised task by maximizing the similarity between $S$ and $S_{T_S}$. The same also applies to using a poem $T$ as the gold standard for its corresponding back-translation $T_{S_T}$. We define the back-translation loss $\mathcal {L}^{bt} = \mathbb {E}_{S\sim \textbf {\emph {S}}}\big [-\log P_{t\rightarrow s}(S\mid T_S)\big ] + \mathbb {E}_{T\sim \textbf {\emph {T}}}\big [-\log P_{s\rightarrow t}(T\mid S_T)\big ]$.", "Note that $\mathcal {L}^{bt}$ does not back-propagate through the generation of $T_S$ and $S_T$, as we observe no improvement in doing so. When training the model, we minimize the composite loss $\mathcal {L} = \alpha _1 \mathcal {L}^{lm} + \alpha _2 \mathcal {L}^{bt}$, where $\alpha _1$ and $\alpha _2$ are scaling factors." ], [ "During our early experiments, we realized that the naive UMT framework is not readily applicable to our task. Classical Chinese poems are noted for their terseness and abstractness. They usually focus on depicting broad poetic images rather than details. We collected a dataset of classical Chinese poems and their corresponding vernacular translations; the average length of the poems is $32.0$ characters, while that of the vernacular translations is $73.3$. The huge gap in sequence length between the source and target languages induces over-translation and under-translation when training UMT models. In the following sections, we explain the two problems and introduce our improvements."
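A minimal character-level Python sketch of the noise function behind $\mathcal {L}^{lm}$ and of the composite objective; the noise probabilities, swap count and scaling factors are illustrative assumptions, not the paper's exact settings:

import random

def add_noise(chars, p_drop=0.1, p_blank=0.1, n_swap=3):
    """Drop, blank, or locally swap a few characters to build S_N (or T_N)."""
    noisy = [c for c in chars if random.random() > p_drop]                  # drop
    noisy = ["<blank>" if random.random() < p_blank else c for c in noisy]  # blank
    for _ in range(n_swap):                                                 # swap neighbours
        if len(noisy) > 1:
            i = random.randrange(len(noisy) - 1)
            noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]
    return noisy

def composite_loss(l_lm, l_bt, alpha1=1.0, alpha2=1.0):
    """L = alpha1 * L^lm + alpha2 * L^bt, with the alphas as hyperparameters."""
    return alpha1 * l_lm + alpha2 * l_bt

print(add_noise(list("白日依山尽黄河入海流")))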
], [ "By nature, classical poems are more concise and abstract while vernaculars are more detailed and lengthy, to express the same meaning, a vernacular paragraph usually contains more characters than a classical poem. As a result, when summarizing a vernacular paragraph $S$ to a poem $T_S$, $T_S$ may not cover all information in $S$ due to its length limit. In real practice, we notice the generated poems usually only cover the information in the front part of the vernacular paragraph, while the latter part is unmentioned.", "To alleviate under-translation, we propose phrase segmentation-based padding. Specifically, we first segment each line in a classical poem into several sub-sequences, we then join these sub-sequences with the special padding tokens <p>. During training, the padded lines are used instead of the original poem lines. As illustrated in Figure FIGREF10, padding would create better alignments between a vernacular paragraph and a prolonged poem, making it more likely for the latter part of the vernacular paragraph to be covered in the poem. As we mentioned before, the length of the vernacular translation is about twice the length of its corresponding classical poem, so we pad each segmented line to twice its original length.", "According to Ye jia:1984, to present a stronger sense of rhythm, each type of poem has its unique phrase segmentation schema, for example, most seven-character quatrain poems adopt the 2-2-3 schema, i.e. each quatrain line contains 3 phrases, the first, second and third phrase contains 2, 2, 3 characters respectively. Inspired by this law, we segment lines in a poem according to the corresponding phrase segmentation schema. In this way, we could avoid characters within the scope of a phrase to be cut apart, thus best preserve the semantic of each phrase.BIBREF15" ], [ "In NMT, when decoding is complete, the decoder would generate an <EOS>token, indicating it has reached the end of the output sequence. However, when expending a poem $T$ into a vernacular Chinese paragraph $S_T$, due to the conciseness nature of poems, after finishing translating every source character in $T$, the output sequence $S_T$ may still be much shorter than the expected length of a poem‘s vernacular translation. As a result, the decoder would believe it has not finished decoding. Instead of generating the <EOS>token, the decoder would continue to generate new output characters from previously translated source characters. This would cause the decoder to repetitively output a piece of text many times.", "To remedy this issue, in addition to minimizing the original loss function $\\mathcal {L}$, we propose to minimize a specific discrete metric, which is made possible with reinforcement learning.", "We define repetition ratio $RR(S)$ of a paragraph $S$ as:", "where $vocab(S)$ refers to the number of distinctive characters in $S$, $len(S)$ refers the number of all characters in $S$. Obviously, if a generated sequence contains many repeated characters, it would have high repetition ratio. Following the self-critical policy gradient training BIBREF16, we define the following loss function:", "where $\\tau $ is a manually set threshold. Intuitively, minimizing $\\mathcal {L}^{rl}$ is equivalent to maximizing the conditional likelihood of the sequence $S$ given $S_{T_S}$ if its repetition ratio is lower than the threshold $\\tau $. Following BIBREF17, we revise the composite loss as:", "where $\\alpha _1, \\alpha _2, \\alpha _3$ are scaling factors." 
], [ "The objectives of our experiment are to explore the following questions: (1) How much do our models improve the generated poems? (Section SECREF23) (2) What are characteristics of the input vernacular paragraph that lead to a good generated poem? (Section SECREF26) (3) What are weaknesses of generated poems compared to human poems? (Section SECREF27) To this end, we built a dataset as described in Section SECREF18. Evaluation metrics and baselines are described in Section SECREF21 and SECREF22. For the implementation details of building the dataset and models, please refer to supplementary materials." ], [ "Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set.", "Test Set From online resources, we collected 487 seven-character quatrain poems from Tang Poems and Song Poems, as well as their corresponding high quality vernacular translations. These poems could be used as gold standards for poems generated from their corresponding vernacular translations. Table TABREF11 shows the statistics of our training, validation and test set." ], [ "Perplexity Perplexity reflects the probability a model generates a certain poem. Intuitively, a better model would yield higher probability (lower perplexity) on the gold poem.", "BLEU As a standard evaluation metric for machine translation, BLEU BIBREF18 measures the intersection of n-grams between the generated poem and the gold poem. A better generated poem usually achieves higher BLEU score, as it shares more n-gram with the gold poem.", "Human evaluation While perplexity and BLEU are objective metrics that could be applied to large-volume test set, evaluating Chinese poems is after all a subjective task. We invited 30 human evaluators to join our human evaluation. The human evaluators were divided into two groups. The expert group contains 15 people who hold a bachelor degree in Chinese literature, and the amateur group contains 15 people who holds a bachelor degree in other fields. All 30 human evaluators are native Chinese speakers.", "We ask evaluators to grade each generated poem from four perspectives: 1) Fluency: Is the generated poem grammatically and rhythmically well formed, 2) Semantic coherence: Is the generated poem itself semantic coherent and meaningful, 3) Semantic preservability: Does the generated poem preserve the semantic of the modern Chinese translation, 4) Poeticness: Does the generated poem display the characteristic of a poem and does the poem build good poetic image. The grading scale for each perspective is from 1 to 5." ], [ "We compare the performance of the following models: (1) LSTM BIBREF19; (2)Naive transformer BIBREF14; (3)Transformer + Anti OT (RL loss); (4)Transformer + Anti UT (phrase segmentation-based padding); (5)Transformer + Anti OT&UT." ], [ "As illustrated in Table TABREF12 (ID 1). Given the vernacular translation of each gold poem in test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. 
We report mean perplexity and BLEU scores in Table TABREF19 (where +Anti OT refers to adding the reinforcement loss to mitigate over-translation and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), and human evaluation results in Table TABREF20.", "According to the experimental results, perplexity, BLEU scores and total scores in human evaluation are consistent with each other. We observe that all BLEU scores are fairly low; we believe this is reasonable, as there could be multiple ways to compose a poem given a vernacular paragraph. Among transformer-based models, both +Anti OT and +Anti UT outperform the naive transformer, while +Anti OT&UT shows the best performance; this demonstrates that alleviating under-translation and over-translation both help generate better poems. Specifically, +Anti UT shows a bigger improvement than +Anti OT. According to human evaluation, among the four perspectives, our +Anti OT&UT brought the largest score improvement in semantic preservability; this shows that our improvement on semantic preservability was the most obvious to human evaluators. All transformer-based models outperform LSTM. Note that the average length of the vernacular translation is over 70 characters; compared with transformer-based models, the LSTM may only keep the information at the beginning and end of the vernacular. We anticipated some score inconsistency between the expert group and the amateur group. However, after analyzing the human evaluation results, we did not observe a big divergence between the two groups." ], [ "Chinese literature is known not only for classical poems but also for various other literary forms. Song lyric (宋词), or ci, also gained tremendous popularity in its heyday, standing out in classical Chinese literature. Modern prose, modern poems and pop song lyrics have won extensive praise among Chinese people in modern times. The goal of this experiment is to transfer texts of other literary forms into quatrain poems. We expect the generated poems not only to keep the semantics of the original text, but also to demonstrate terseness, rhythm and other characteristics of ancient poems. Specifically, we chose 20 famous fragments from four types of Chinese literature (5 fragments for each of modern prose, modern poems, pop song lyrics and Song lyrics). As no ground truth is available, we resorted to human evaluation with the same grading standard as in Section SECREF23.", "Comparing the scores of different literature forms, we observe that Song lyric achieves higher scores than the other three forms, which are all modern literature. This is not surprising, as both Song lyrics and quatrain poems are written in classical Chinese, while the other three literature forms are all in vernacular.", "Comparing the scores within the same literature form, we observe that the scores of poems generated from different paragraphs tend to vary. After carefully studying the generated poems as well as their scores, we have the following observations:", "1) In classical Chinese poems, poetic images (意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications. For example, autumn is usually used to imply sadness and loneliness. However, with the passage of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves a higher score.
As illustrated in Table TABREF12, both paragraphs 2 and 3 are taken from pop song lyrics; paragraph 2 uses many poetic images from classical literature (e.g., pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g., sparrows on the utility pole). Obviously, compared with poem 2, the sentences in poem 3 seem more confusing, as the poetic images of modern times may not fit well into the language model of classical poems.", "2) We also observed that poems generated from descriptive paragraphs achieve higher scores than those generated from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations for the above phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanations. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph." ], [ "We manually select 25 generated poems from vernacular Chinese translations and pair each one with its corresponding human-written poem. We then present the 25 pairs to human evaluators and ask them to identify which poem in each pair is written by a human poet.", "As demonstrated in Table TABREF29, although the general meanings of human poems and generated poems seem to be the same, the wordings they employ are quite different. This explains the low BLEU scores in Section 4.3. According to the test results in Table TABREF30, human evaluators only achieved 65.8% mean accuracy. This indicates that the best generated poems are somewhat comparable to poems written by amateur poets.", "We interviewed evaluators who achieved higher than 80% accuracy about their differentiation strategies. Most interviewed evaluators state that the sentences in a human-written poem are usually well organized to highlight a theme or to build a poetic image, while the correlation between sentences in a generated poem does not seem as strong. As demonstrated in Table TABREF29, the last two sentences in both human poems (marked as red) echo each other well, while the sentences in machine-generated poems seem more independent. This hints at a weakness of generated poems: while neural models may generate poems that resemble human poems lexically and syntactically, it is still hard for them to compete with human beings in building up good structures." ], [ "Addressing Under-Translation In this part, we wish to explore the effect of different phrase segmentation schemas on our phrase segmentation-based padding. According to Ye jia:1984, most seven-character quatrain poems adopt the 2-2-3 segmentation schema. As shown in the examples in Figure FIGREF31, we compare our phrase segmentation-based padding (2-2-3 schema) to two less common schemas (i.e., the 2-3-2 and 3-2-2 schemas). We report our experimental results in Table TABREF32.", "The results show that our 2-2-3 segmentation schema greatly outperforms the 2-3-2 and 3-2-2 schemas in both perplexity and BLEU scores. Note that the BLEU scores of the 2-3-2 and 3-2-2 schemas remain almost the same as our naive baseline (without padding). 
Based on these observations, we draw the following conclusions: 1) Although padding better aligns the vernacular paragraph to the poem, padding alone may not improve the quality of the generated poem. 2) The padding tokens should be placed according to the phrase segmentation schema of the poem, as this preserves the semantics within the scope of each phrase.", "Addressing Over-Translation To explore the effect of our reinforcement learning policy on alleviating over-translation, we calculate the repetition ratio of vernacular paragraphs generated from classical poems in our validation set. We found that the naive transformer achieves a repetition ratio of $40.8\\%$, while our +Anti OT achieves $34.9\\%$. Given that the repetition ratio of vernacular paragraphs (written by human beings) in our validation set is $30.1\\%$, the experimental results demonstrate that our RL loss effectively alleviates over-translation, which in turn leads to better generated poems." ], [ "In this paper, we proposed a novel task of generating classical Chinese poems from vernacular paragraphs. We adapted the unsupervised machine translation model to our task and proposed two novel approaches to address the under-translation and over-translation problems. Experiments show that our task can give users more controllability in generating poems. In addition, our approaches are very effective in solving the problems that arise when the UMT model is directly used for this task. In the future, we plan to explore: (1) applying the UMT model to tasks where the abstraction levels of the source and target languages are different (e.g., unsupervised automatic summarization); (2) improving the quality of generated poems via better structure organization approaches." ] ] }
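The experiments above report repetition ratios (40.8% for the naive transformer, 34.9% with +Anti OT, 30.1% for human-written vernacular) but do not spell out how the ratio is computed. The sketch below shows one plausible way to measure it over character n-grams of generated vernacular paragraphs; the n-gram order, the per-paragraph averaging, and both function names are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def repetition_ratio(text, n=2):
    """Fraction of character n-grams that repeat an n-gram already seen
    in the same paragraph (assumed definition; the paper only reports
    the resulting percentages)."""
    ngrams = [text[i:i + n] for i in range(len(text) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values())  # occurrences beyond the first
    return repeated / len(ngrams)

def corpus_repetition_ratio(paragraphs, n=2):
    """Average the per-paragraph ratio over the validation-set outputs."""
    ratios = [repetition_ratio(p, n) for p in paragraphs if len(p) >= n]
    return sum(ratios) / len(ratios) if ratios else 0.0
```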
{ "question": [ "What are some guidelines in writing input vernacular so model can generate ", "How much is proposed model better in perplexity and BLEU score than typical UMT models?", "What dataset is used for training?" ], "question_id": [ "6b9310b577c6232e3614a1612cbbbb17067b3886", "d484a71e23d128f146182dccc30001df35cdf93f", "5787ac3e80840fe4cf7bfae7e8983fa6644d6220" ], "nlp_background": [ "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ " if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score", "poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs" ], "yes_no": null, "free_form_answer": "", "evidence": [ "1) In classical Chinese poems, poetic images UTF8gbsn(意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications. For example, autumn is usually used to imply sadness and loneliness. However, with the change of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score. As illustrated in Table TABREF12, both paragraph 2 and 3 are generated from pop song lyrics, paragraph 2 uses many poetic images from classical literature (e.g. pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g. sparrows on the utility pole). Obviously, compared with poem 2, sentences in poem 3 seems more confusing, as the poetic images in modern times may not fit well into the language model of classical poems.", "2) We also observed that poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations to the above phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanation. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph." ], "highlighted_evidence": [ "According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score.", "We also observed that poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs." 
] } ], "annotation_id": [ "04c432ed960ff69bb335b3eac687be8fe4ecf97a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Perplexity of the best model is 65.58 compared to best baseline 105.79.\nBleu of the best model is 6.57 compared to best baseline 5.50.", "evidence": [ "As illustrated in Table TABREF12 (ID 1). Given the vernacular translation of each gold poem in test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (Where +Anti OT refers to adding the reinforcement loss to mitigate over-fitting and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), human evaluation results in Table TABREF20.", "FLOAT SELECTED: Table 3: Perplexity and BLEU scores of generating poems from vernacular translations. Since perplexity and BLEU scores on the test set fluctuates from epoch to epoch, we report the mean perplexity and BLEU scores over 5 consecutive epochs after convergence." ], "highlighted_evidence": [ "We report mean perplexity and BLEU scores in Table TABREF19 (Where +Anti OT refers to adding the reinforcement loss to mitigate over-fitting and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), human evaluation results in Table TABREF20.", "FLOAT SELECTED: Table 3: Perplexity and BLEU scores of generating poems from vernacular translations. Since perplexity and BLEU scores on the test set fluctuates from epoch to epoch, we report the mean perplexity and BLEU scores over 5 consecutive epochs after convergence." ] } ], "annotation_id": [ "e025375e4b5390c1b05ad8d0b226d6f05b5faa4c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "We collected a corpus of poems and a corpus of vernacular literature from online resources" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set." ], "highlighted_evidence": [ "We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set." ] } ], "annotation_id": [ "2a6d7e0c7dfd73525cb559488b4c967b42f06831" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: An example of the training procedures of our model. Here we depict two procedures, namely back translation and language modeling. Back translation has two paths, namely ES → DT → ET → DS and DT → ES → DS → ET . Language modeling also has two paths, namely ET → DT and ES → DS . Figure 1 shows only the former one for each training procedure.", "Figure 2: A real example to show the effectiveness of our phrase-segmentation-based padding. Without padding, the vernacular paragraph could not be aligned well with the poem. Therefore, the text in South Yangtze ends but the grass and trees have not withered in red is not covered in the poem. By contrast, they are covered well after using our padding method.", "Table 1: Statistics of our dataset", "Table 2: A few poems generated by our model from their corresponding vernacular paragraphs.", "Table 3: Perplexity and BLEU scores of generating poems from vernacular translations. Since perplexity and BLEU scores on the test set fluctuates from epoch to epoch, we report the mean perplexity and BLEU scores over 5 consecutive epochs after convergence.", "Table 4: Human evaluation results of generating poems from vernacular translations. We report the mean scores for each evaluation metric and total scores of four metrics.", "Table 5: Human evaluation results for generating poems from various literature forms. We show the results obtained from our best model (Transformer+Anti OT&UT).", "Table 6: Examples of generated poems and their corresponding gold poems used in human discrimination test.", "Table 7: The performance of human discrimination test.", "Table 8: Perplexity and BLEU scores of different padding schemas." ], "file": [ "2-Figure1-1.png", "4-Figure2-1.png", "5-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png", "8-Table5-1.png", "8-Table6-1.png", "9-Table7-1.png", "9-Table8-1.png" ] }
1909.06762
Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever
Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system. Previous sequence-to-sequence (Seq2Seq) dialogue generation work treats the KB query as an attention over the entire KB, without the guarantee that the generated entities are consistent with each other. In this paper, we propose a novel framework which queries the KB in two steps to improve the consistency of generated entities. In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce a KB retrieval component which explicitly returns the most relevant KB row given a dialogue history. The retrieval result is further used to filter the irrelevant entities in a Seq2Seq response generation model to improve the consistency among the output entities. In the second step, we further perform the attention mechanism to address the most correlated KB column. Two methods are proposed to make the training feasible without labeled retrieval data, which include distant supervision and Gumbel-Softmax technique. Experiments on two publicly available task oriented dialog datasets show the effectiveness of our model by outperforming the baseline systems and producing entity-consistent responses.
{ "section_name": [ "Introduction", "Definition", "Definition ::: Dialogue History", "Definition ::: Knowledge Base", "Definition ::: Seq2Seq Dialogue Generation", "Our Framework", "Our Framework ::: Encoder", "Our Framework ::: Vanilla Attention-based Decoder", "Our Framework ::: Entity-Consistency Augmented Decoder", "Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection", "Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Dialogue History Representation:", "Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: KB Row Representation:", "Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Row Selection ::: Memory Network-Based Retriever:", "Our Framework ::: Entity-Consistency Augmented Decoder ::: KB Column Selection", "Our Framework ::: Entity-Consistency Augmented Decoder ::: Decoder with Retrieved Entity", "Training the KB-Retriever", "Training the KB-Retriever ::: Training with Distant Supervision", "Training the KB-Retriever ::: Training with Gumbel-Softmax", "Training the KB-Retriever ::: Experimental Settings", "Training the KB-Retriever ::: Baseline Models", "Results", "Results ::: The proportion of responses that can be supported by a single KB row", "Results ::: Generation Consistency", "Results ::: Correlation between the number of KB rows and generation consistency", "Results ::: Visualization", "Results ::: Human Evaluation", "Related Work", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Task-oriented dialogue system, which helps users to achieve specific goals with natural language, is attracting more and more research attention. With the success of the sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, several works tried to model the task-oriented dialogue as the Seq2Seq generation of response from the dialogue history BIBREF5, BIBREF6, BIBREF7. This kind of modeling scheme frees the task-oriented dialogue system from the manually designed pipeline modules and heavy annotation labor for these modules.", "Different from typical text generation, the successful conversations for task-oriented dialogue system heavily depend on accurate knowledge base (KB) queries. Taking the dialogue in Figure FIGREF1 as an example, to answer the driver's query on the gas station, the dialogue system is required to retrieve the entities like “200 Alester Ave” and “Valero”. For the task-oriented system based on Seq2Seq generation, there is a trend in recent study towards modeling the KB query as an attention network over the entire KB entity representations, hoping to learn a model to pay more attention to the relevant entities BIBREF6, BIBREF7, BIBREF8, BIBREF9. Though achieving good end-to-end dialogue generation with over-the-entire-KB attention mechanism, these methods do not guarantee the generation consistency regarding KB entities and sometimes yield responses with conflict entities, like “Valero is located at 899 Ames Ct” for the gas station query (as shown in Figure FIGREF1). In fact, the correct address for Valero is 200 Alester Ave. A consistent response is relatively easy to achieve for the conventional pipeline systems because they query the KB by issuing API calls BIBREF10, BIBREF11, BIBREF12, and the returned entities, which typically come from a single KB row, are consistently related to the object (like the “gas station”) that serves the user's request. This indicates that a response can usually be supported by a single KB row. 
It is promising to incorporate such an observation into the Seq2Seq dialogue generation model, since it encourages KB-relevant generation and prevents the model from producing responses with conflicting entities.", "To achieve entity-consistent generation in the Seq2Seq task-oriented dialogue system, we propose a novel framework which queries the KB in two steps. In the first step, we introduce a retrieval module, the KB-retriever, to explicitly query the KB. Inspired by the observation that a single KB row usually supports a response, given the dialogue history and a set of KB rows, the KB-retriever uses a memory network BIBREF13 to select the most relevant row. The retrieval result is then fed into a Seq2Seq dialogue generation model to filter the irrelevant KB entities and improve the consistency among the generated entities. In the second step, we further apply an attention mechanism to select the most relevant KB column. Finally, we adopt the copy mechanism to incorporate the retrieved KB entity.", "Since dialogue datasets are not typically annotated with retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distantly supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selecting process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. The retrievers trained with distant supervision and with the Gumbel-Softmax technique both outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% of the responses in the dataset can be supported by a single KB row and that better retrieval results lead to better task-oriented dialogue generation performance." ], [ "In this section, we will describe the input and output of the end-to-end task-oriented dialogue system, and the definition of Seq2Seq task-oriented dialogue generation." ], [ "Given a dialogue between a user ($u$) and a system ($s$), we follow eric:2017:SIGDial and represent the $k$-turned dialogue utterances as $\\lbrace (u_{1}, s_{1} ), (u_{2} , s_{2} ), ... , (u_{k}, s_{k})\\rbrace $. At the $i^{\\text{th}}$ turn of the dialogue, we aggregate the dialogue context, which consists of the tokens of $(u_{1}, s_{1}, ..., s_{i-1}, u_{i})$, and use $\\mathbf {x} = (x_{1}, x_{2}, ..., x_{m})$ to denote the whole dialogue history word by word, where $m$ is the number of tokens in the dialogue history." ], [ "In this paper, we assume access to a relational-database-like KB $B$, which consists of $|\\mathcal {R}|$ rows and $|\\mathcal {C}|$ columns. The value of the entity in the $j^{\\text{th}}$ row and the $i^{\\text{th}}$ column is denoted as $v_{j, i}$." ], [ "We define Seq2Seq task-oriented dialogue generation as finding the most likely response $\\mathbf {y}$ according to the input dialogue history $\\mathbf {x}$ and KB $B$. Formally, the probability of a response is defined as", "where $y_t$ represents an output token." ], [ "In this section, we describe our framework for end-to-end task-oriented dialogues. The architecture of our framework is demonstrated in Figure FIGREF3, and it consists of two major components: a memory network-based retriever and the Seq2Seq dialogue generation with the KB-retriever. 
Our framework first uses the KB-retriever to select the most relevant KB row and then filters the irrelevant entities in the Seq2Seq response generation model to improve the consistency among the output entities. During decoding, we further apply an attention mechanism to choose the most probable KB column. We will present the details of our framework in the following sections." ], [ "In our encoder, we adopt a bidirectional LSTM BIBREF15 to encode the dialogue history $\\mathbf {x}$, which captures temporal relationships within the sequence. The encoder first maps the tokens in $\\mathbf {x}$ to vectors with the embedding function $\\phi ^{\\text{emb}}$, and then the BiLSTM reads these vectors forward and backward to produce context-sensitive hidden states $(\\mathbf {h}_{1}, \\mathbf {h}_2, ..., \\mathbf {h}_{m})$ by repeatedly applying the recurrence $\\mathbf {h}_{i}=\\text{BiLSTM}\\left( \\phi ^{\\text{emb}}\\left( x_{i}\\right) , \\mathbf {h}_{i-1}\\right)$." ], [ "Here, we follow eric:2017:SIGDial and adopt an attention-based decoder to generate the response word by word. An LSTM is also used to represent the partially generated output sequence $(y_{1}, y_2, ...,y_{t-1})$ as $(\\tilde{\\mathbf {h}}_{1}, \\tilde{\\mathbf {h}}_2, ...,\\tilde{\\mathbf {h}}_t)$. For the generation of the next token $y_t$, their model first calculates an attentive representation $\\tilde{\\mathbf {h}}^{^{\\prime }}_t$ of the dialogue history as", "Then, the concatenation of the hidden representation of the partially outputted sequence $\\tilde{\\mathbf {h}}_t$ and the attentive dialogue history representation $\\tilde{\\mathbf {h}}^{^{\\prime }}_t$ is projected to the vocabulary space $\\mathcal {V}$ by $U$ as", "to calculate the score (logit) for the next token generation. The probability of the next token $y_t$ is finally calculated as" ], [ "As shown in Section SECREF7, the generation of tokens is based only on the dialogue history attention, which leaves the model ignorant of the KB entities. In this section, we present how to query the KB explicitly in two steps to improve entity consistency: we first adopt the KB-retriever to select the most relevant KB row, and the generation of KB entities from the entity-augmented decoder is constrained to the entities within the most probable row, which improves the entity generation consistency. Next, we perform column attention to select the most probable KB column. Finally, we show how to use the copy mechanism to incorporate the retrieved entity while decoding." ], [ "In our framework, the KB-retriever takes the dialogue history and KB rows as inputs and selects the most relevant row. This selection process resembles the task of selecting one word from the inputs to answer questions BIBREF13, and we use a memory network to model this process. In the following sections, we will first describe how to represent the inputs, and then we will describe our memory network-based retriever." ], [ "We encode the dialogue history by adopting the neural bag-of-words (BoW) representation following the original paper BIBREF13. Each token in the dialogue history is mapped into a vector by another embedding function $\\phi ^{\\text{emb}^{\\prime }}(x)$, and the dialogue history representation $\\mathbf {q}$ is computed as the sum of these vectors: $\\mathbf {q} = \\sum ^{m}_{i=1} \\phi ^{\\text{emb}^{\\prime }} (x_{i}) $." ], [ "In this section, we describe how to encode the KB row. 
Each KB cell is represented by the embedding of its cell value $v$, i.e., $\\mathbf {c}_{j, k} = \\phi ^{\\text{value}}(v_{j, k})$, and the neural BoW is also used to represent a KB row $\\mathbf {r}_{j}$ as $\\mathbf {r}_{j} = \\sum _{k=1}^{|\\mathcal {C}|} \\mathbf {c}_{j,k}$." ], [ "We model the KB retrieval process as selecting the row that most likely supports the response generation. Memory networks BIBREF13 have been shown to be effective in modeling this kind of selection. For an $n$-hop memory network, the model keeps a set of input matrices $\\lbrace R^{1}, R^{2}, ..., R^{n+1}\\rbrace $, where each $R^{i}$ is a stack of $|\\mathcal {R}|$ inputs $(\\mathbf {r}^{i}_1, \\mathbf {r}^{i}_2, ..., \\mathbf {r}^{i}_{|\\mathcal {R}|})$. The model also keeps the query $\\mathbf {q}^{1}$ as the input. A single-hop memory network computes the probability $\\mathbf {a}_j$ of selecting the $j^{\\text{th}}$ input as", "For the multi-hop case, layers of single-hop memory networks are stacked and the query of the $(i+1)^{\\text{th}}$ layer network is computed as", "and the output of the last layer is used as the output of the whole network. For more details about memory networks, please refer to the original paper BIBREF13.", "After getting $\\mathbf {a}$, we represent the retrieval result as a 0-1 matrix $T \\in \\lbrace 0, 1\\rbrace ^{|\\mathcal {R}|\\times \\mathcal {|C|}}$, where each element in $T$ is calculated as", "In the retrieval result, $T_{j, k}$ indicates whether the entity in the $j^{\\text{th}}$ row and the $k^{\\text{th}}$ column is relevant to the final generation of the response. In this paper, we further flatten $T$ to a 0-1 vector $\\mathbf {t} \\in \\lbrace 0, 1\\rbrace ^{|\\mathcal {E}|}$ (where $|\\mathcal {E}|$ equals $|\\mathcal {R}|\\times \\mathcal {|C|}$) as our row retrieval result." ], [ "After getting the retrieved row result, which indicates the KB row most relevant to the generation, we further perform column attention at decoding time to select the most probable KB column. For our KB column selection, following eric:2017:SIGDial, we use the decoder hidden states $(\\tilde{\\mathbf {h}}_{1}, \\tilde{\\mathbf {h}}_2, ...,\\tilde{\\mathbf {h}}_t)$ to compute an attention score with the embedding of each column attribute name. The attention score $\\mathbf {c}\\in R^{|\\mathcal {E}|}$ then becomes the logits for the column to be selected, which can be calculated as", "where $\\mathbf {c}_j$ is the attention score of the $j^{\\text{th}}$ KB column and $\\mathbf {k}_j$ is represented by the word embedding of the KB column name. $W^{^{\\prime }}_{1}$, $W^{^{\\prime }}_{2}$ and $\\mathbf {t}^{T}$ are trainable parameters of the model." ], [ "After the row selection and column selection, we can define the final retrieved KB entity score as the element-wise product of the row retrieval result and the column selection score, which can be calculated as", "where $v^{t}$ denotes the final retrieved KB entity score. Finally, we follow eric:2017:SIGDial and use the copy mechanism to incorporate the retrieved entity, which can be defined as", "where the dimensionality of $\\mathbf {o}_t$ is $|\\mathcal {V}| + |\\mathcal {E}|$. In $\\mathbf {v}^t$, the first $|\\mathcal {V}|$ dimensions are zero and the remaining $|\\mathcal {E}|$ dimensions are the retrieved entity scores." ], [ "As mentioned in Section SECREF9, we adopt a memory network to model our KB-retriever. 
However, in Seq2Seq dialogue generation, the training data does not include annotated KB row retrieval results, which makes supervised training of the KB-retriever impossible. To tackle this problem, we propose two training methods for our KB-row-retriever. 1) In the first method, inspired by the recent success of distant supervision in information extraction BIBREF16, BIBREF17, BIBREF18, BIBREF19, we take advantage of the similarity between the surface strings of KB entries and the reference response, and design a set of heuristics to extract training data for the KB-retriever. 2) In the second method, instead of training the KB-retriever as an independent component, we train it along with the training of the Seq2Seq dialogue generation. To make the retrieval process in Equation DISPLAY_FORM13 differentiable, we use Gumbel-Softmax BIBREF14 as an approximation of the $\\operatornamewithlimits{argmax}$ during training." ], [ "Although it is difficult to obtain annotated retrieval data for the KB-retriever, we can “guess” the most relevant KB row from the reference response, and then obtain weakly labeled data for the retriever. Intuitively, the utterances in the same dialogue usually belong to one topic, and the KB row that contains the largest number of entities mentioned in the whole dialogue should support the current utterance. In our training with distant supervision, we further simplify this assumption and assume that one dialogue, which usually belongs to one topic, can be supported by the most relevant KB row. This means that for a $k$-turned dialogue, we construct $k$ pairs of training instances for the retriever, and all the inputs $(u_{1}, s_{1}, ..., s_{i-1}, u_{i} \\mid i \\le k)$ are associated with the same weakly labeled KB retrieval result $T^*$.", "In this paper, we compute each row's similarity to the whole dialogue and choose the most similar row as $T^*$. We define the similarity of each row as the number of matched spans with the surface forms of the entities in the row. Taking the dialogue in Figure FIGREF1 as an example, the similarity of the 4$^\\text{th}$ row equals 4, with “200 Alester Ave”, “gas station”, “Valero”, and “road block nearby” matching the dialogue context, while the similarity of the 7$^\\text{th}$ row equals 1, with only “road block nearby” matching.", "In our model with the distantly supervised retriever, the retrieval result serves as the input for the Seq2Seq generation. When training the Seq2Seq generation, we use the weakly labeled retrieval result $T^{*}$ as the input." ], [ "As an alternative to treating the row retrieval result as an input to the generation model and training the KB-row-retriever independently, we can train it along with the Seq2Seq dialogue generation in an end-to-end fashion. The major difficulty of such a training scheme is that the discrete retrieval result is not differentiable, so the training signal from the generation model cannot be passed to the parameters of the retriever. The Gumbel-Softmax technique BIBREF14 has been shown to be an effective approximation to discrete variables and has proved to work for sentence representation. In this paper, we adopt the Gumbel-Softmax technique to train the KB-retriever. We use", "as the approximation of $T$, where $\\mathbf {g}_{j}$ are i.i.d. samples drawn from $\\text{Gumbel}(0,1)$ and $\\tau $ is a constant that controls the smoothness of the distribution. 
$T^{\\text{approx}}_{j}$ replaces $T^{\\text{}}_{j}$ in Equation DISPLAY_FORM13 and goes through the same flattening and expanding process as $\\mathbf {V}$ to get $\\mathbf {v}^{\\mathbf {t}^{\\text{approx}^{\\prime }}}$, and the training signal from the Seq2Seq generation is passed via the logit", "To make training with Gumbel-Softmax more stable, we first initialize the parameters by pre-training the KB-retriever with distant supervision and then fine-tune our framework." ], [ "We choose the InCar Assistant dataset BIBREF6, which includes three distinct domains: navigation, weather and calendar. For the weather domain, we follow wen2018sequence and separate the highest temperature, lowest temperature and weather attribute into three different columns. For the calendar domain, there are some dialogues without a KB or with an incomplete KB. In these cases, we pad a special token “-” into the incomplete KBs. Our framework is trained separately on these three domains, using the same train/validation/test splits as eric:2017:SIGDial. To justify the generalization of the proposed model, we also use the public CamRest dataset BIBREF11 and partition it into training, validation and test sets in the ratio 3:1:1. In particular, we hired human experts to format the CamRest dataset by equipping every dialogue with its corresponding KB.", "All hyper-parameters are selected according to the validation set. We use a three-hop memory network to model our KB-retriever. The dimensionality of the embeddings is selected from $\\lbrace 100, 200\\rbrace $ and the number of LSTM hidden units is selected from $\\lbrace 50, 100, 150, 200, 350\\rbrace $. The dropout we use in our framework is selected from $\\lbrace 0.25, 0.5, 0.75\\rbrace $ and the batch size we adopt is selected from $\\lbrace 1,2\\rbrace $. L2 regularization with a weight of $5\\times 10^{-6}$ is used on our model to reduce overfitting. For training the retriever with distant supervision, we adopt the weight tying trick BIBREF20. We use Adam BIBREF21 to optimize the parameters in our model and adopt the suggested hyper-parameters for optimization.", "We adopt both automatic and human evaluations in our experiments." ], [ "We compare our model with several baselines, including:", "Attn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.", "Ptr-UNK BIBREF23: Ptr-UNK is a model which augments a sequence-to-sequence architecture with an attention-based copy mechanism over the encoder context.", "KV Net BIBREF6: This model adopts an augmented decoder which decodes over the concatenation of the vocabulary and the KB entities, allowing the model to generate entities.", "Mem2Seq BIBREF7: Mem2Seq is a model that takes the dialogue history and KB entities as input and uses a pointer gate to control whether to generate a vocabulary word or to select an input as the output.", "DSR BIBREF9: DSR leverages a dialogue state representation to retrieve the KB implicitly and applies a copy mechanism to retrieve entities from the knowledge base while decoding.", "On the InCar dataset, for Attn seq2seq, Ptr-UNK and Mem2Seq, we adopt the results reported by madotto2018mem2seq. On the CamRest dataset, for Mem2Seq, we run their open-sourced code to obtain the results, while for DSR, we run their code on the same dataset to obtain the results." ], [ "Following prior work BIBREF6, BIBREF7, BIBREF9, we adopt BLEU and Micro Entity F1 to evaluate our model's performance. 
The experimental results are illustrated in Table TABREF30.", "In the first block of Table TABREF30, we show the Human, rule-based and KV Net (with *) results, which are reported by eric:2017:SIGDial. We argue that their results are not directly comparable because their work uses the entities in their canonicalized forms, which are not calculated based on real entity values. It is worth noticing that our framework with both methods still outperforms KV Net on the InCar dataset on the overall BLEU and Entity F1 metrics, which demonstrates the effectiveness of our framework.", "In the second block of Table TABREF30, we can see that our framework trained with either distant supervision or Gumbel-Softmax beats all existing models on the two datasets. Our model outperforms each baseline on both the BLEU and F1 metrics. On the InCar dataset, our model with Gumbel-Softmax has the highest BLEU compared with the baselines, which shows that our framework can generate more fluent responses. In particular, our framework achieves a 2.5% improvement on the navigation domain, a 1.8% improvement on the weather domain and a 3.5% improvement on the calendar domain on the F1 metric. This indicates the effectiveness of our KB-retriever module: our framework can retrieve more correct entities from the KB. On the CamRest dataset, the same trend of improvement is observed, which further shows the effectiveness of our framework.", "Besides, we observe that the model trained with Gumbel-Softmax outperforms the one trained with the distant supervision method. We attribute this to the fact that the KB-retriever and the Seq2Seq module are fine-tuned in an end-to-end fashion, which can refine the KB-retriever and further promote the dialogue generation." ], [ "In this section, we verify our assumption by examining the proportion of responses that can be supported by a single row.", "We define a response as being supported by the most relevant KB row if all the entities in the response are included in that row. We study the proportion of these responses over the test set. The number is 95% for the navigation domain, 90% for the CamRest dataset and 80% for the weather domain. This confirms our assumption that most responses can be supported by the relevant KB row. Correctly retrieving the supporting row should be beneficial.", "We further study the weather domain to examine the remaining 20% of exceptions. Instead of being supported by multiple rows, most of these exceptions cannot be supported by any KB row. For example, there is one case whose reference response is “It's not rainy today”, while the related KB entity is sunny. These cases provide challenges beyond the scope of this paper. If we consider this kind of case as being supported by a single row, the proportion in the weather domain is 99%." ], [ "In this paper, we expect consistent generation from our model. To verify this, we compute the consistency recall of the utterances that have multiple entities. An utterance is considered consistent if it has multiple entities and these entities belong to the same row, which we annotated with distant supervision.", "The consistency results are shown in Table TABREF37. From this table, we can see that incorporating the retriever in the dialogue generation improves consistency." ], [ "To further explore the correlation between the number of KB rows and generation consistency, we conduct experiments in the distantly supervised setting.", "We choose KBs with different numbers of rows, on a scale from 1 to 5, for the generation. 
From Figure FIGREF32, as the number of KB rows increases, we can see a decrease in generation consistency. This indicates that irrelevant information would harm the dialogue generation consistency." ], [ "To gain more insight into how our retriever module influences the whole KB score distribution, we visualized the KB entity probability at the decoding position where we generate the entity 200_Alester_Ave. From the example (Fig FIGREF38), we can see that the $4^\\text{th}$ row and the $1^\\text{st}$ column have the highest probabilities for generating 200_Alester_Ave, which verifies the effectiveness of first selecting the most relevant KB row and then selecting the most relevant KB column." ], [ "We provide human evaluation of our framework and the compared models. These responses are based on distinct dialogue histories. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. In each judgment, the expert is presented with the dialogue history, an output of a system with the name anonymized, and the gold response.", "The evaluation results are illustrated in Table TABREF37. According to Table TABREF37, our framework outperforms the other baseline models on all metrics. The most significant improvement is in correctness, indicating that our model can retrieve accurate entities from the KB and generate the informative content that users want to know." ], [ "Sequence-to-sequence (Seq2Seq) models in text generation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have gained popularity and have been applied to open-domain dialogue BIBREF24, BIBREF25 in an end-to-end training fashion. Recently, Seq2Seq models have also been used for learning task-oriented dialogues, and how to query the structured KB remains a challenge.", "Properly querying the KB has long been a challenge in task-oriented dialogue systems. In pipeline systems, the KB query is strongly correlated with the design of language understanding, state tracking, and policy management. Typically, after obtaining the dialogue state, the policy management module issues an API call accordingly to query the KB. With the development of neural networks in natural language processing, efforts have been made to replace the discrete and pre-defined dialogue state with a distributed representation BIBREF10, BIBREF11, BIBREF12, BIBREF26. In our framework, the retrieval result can be treated as a numeric representation of the API call return.", "Instead of interacting with the KB via API calls, more and more recent works try to incorporate the KB query as a part of the model. The most popular way of modeling the KB query is to treat it as an attention network over the entire set of KB entities BIBREF6, BIBREF27, BIBREF8, BIBREF28, BIBREF29, where the return can be a fuzzy summation of the entity representations. madotto2018mem2seq's practice of modeling the KB query with a memory network can also be considered as learning an attentive preference over these entities. wen2018sequence propose an implicit dialogue state representation to query the KB and achieve promising performance. Different from their models, we propose the KB-retriever to explicitly query the KB, and the query result is used to filter the irrelevant entities in the dialogue generation to improve the consistency among the output entities." ], [ "In this paper, we propose a novel framework to improve entity consistency by querying the KB in two steps. 
In the first step, inspired by the observation that a response can usually be supported by a single KB row, we introduce the KB-retriever to return the most relevant KB row, which is used to filter the irrelevant KB entities and encourage consistent generation. In the second step, we further apply an attention mechanism to select the most relevant KB column. Experimental results show the effectiveness of our method. Extensive analysis further confirms the observation and reveals the correlation between the success of the KB query and the success of task-oriented dialogue generation." ], [ "We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011 and 61772153." ] ] }
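The retriever described above scores each KB row against a bag-of-words encoding of the dialogue history with a memory network, and the Gumbel-Softmax relaxation makes the discrete row choice differentiable for end-to-end training. Below is a minimal single-hop PyTorch sketch of that idea; the class and argument names are illustrative, the actual model stacks three hops, and using `torch.nn.functional.gumbel_softmax` as the relaxation is an assumption rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHopKBRetriever(nn.Module):
    """Scores each KB row against a bag-of-words dialogue query.
    A sketch of the memory-network retriever described in the paper."""

    def __init__(self, vocab_size, dim):
        super().__init__()
        self.query_emb = nn.Embedding(vocab_size, dim)  # phi^{emb'} for the dialogue history
        self.value_emb = nn.Embedding(vocab_size, dim)  # phi^{value} for KB cell values

    def forward(self, history_ids, kb_cell_ids, tau=1.0, hard=True):
        # history_ids: (hist_len,) token ids of the whole dialogue history
        # kb_cell_ids: (num_rows, num_cols) ids of KB cell values
        q = self.query_emb(history_ids).sum(dim=0)        # (dim,)  BoW query
        rows = self.value_emb(kb_cell_ids).sum(dim=1)     # (num_rows, dim)  BoW row vectors
        logits = rows @ q                                 # (num_rows,)  row scores
        # Differentiable approximation of the argmax row selection.
        row_probs = F.gumbel_softmax(logits.unsqueeze(0), tau=tau, hard=hard).squeeze(0)
        return logits, row_probs                          # row_probs ~ one-hot over rows
```

In practice the (near) one-hot `row_probs` would play the role of the retrieval result T, masking the entity scores before the copy mechanism, so generation gradients can reach the retriever parameters.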
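The distant-supervision heuristic labels each dialogue with the KB row whose entity values match the most spans in the dialogue text (4 matches for the 4th row versus 1 for the 7th row in the Figure FIGREF1 example). A small sketch of that weak-labeling step follows; lowercase substring matching and the function name are assumptions, since the text only says the similarity is the number of matched entity surface-form spans.

```python
def weak_row_label(dialogue_text, kb_rows):
    """Pick the KB row whose entity values match the most spans in the
    dialogue; its index serves as the weak retrieval label T*."""
    text = dialogue_text.lower()

    def row_score(row):
        # row: mapping from column name to cell value, e.g.
        # {"poi": "Valero", "address": "200 Alester Ave", "type": "gas station"}
        return sum(1 for value in row.values()
                   if value and str(value).lower() in text)

    scores = [row_score(row) for row in kb_rows]
    return max(range(len(kb_rows)), key=scores.__getitem__)
```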
{ "question": [ "What were the evaluation metrics?", "What were the baseline systems?", "Which dialog datasets did they experiment with?", "What KB is used?" ], "question_id": [ "ee31c8a94e07b3207ca28caef3fbaf9a38d94964", "66d743b735ba75589486e6af073e955b6bb9d2a4", "b9f852256113ef468d60e95912800fab604966f6", "88f8ab2a417eae497338514142ac12c3cec20876" ], "nlp_background": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "search_query": [ "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "BLEU", "Micro Entity F1", "quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Follow the prior works BIBREF6, BIBREF7, BIBREF9, we adopt the BLEU and the Micro Entity F1 to evaluate our model performance. The experimental results are illustrated in Table TABREF30.", "We provide human evaluation on our framework and the compared models. These responses are based on distinct dialogue history. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. In each judgment, the expert is presented with the dialogue history, an output of a system with the name anonymized, and the gold response." ], "highlighted_evidence": [ "Follow the prior works BIBREF6, BIBREF7, BIBREF9, we adopt the BLEU and the Micro Entity F1 to evaluate our model performance. ", "We provide human evaluation on our framework and the compared models. ", "We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5." ] } ], "annotation_id": [ "04edffd0e0e45be486c34361a9d8bf98eab34704" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Attn seq2seq", "Ptr-UNK", "KV Net", "Mem2Seq", "DSR" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compare our model with several baselines including:", "Attn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.", "Ptr-UNK BIBREF23: Ptr-UNK is the model which augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.", "KV Net BIBREF6: The model adopted and argumented decoder which decodes over the concatenation of vocabulary and KB entities, which allows the model to generate entities.", "Mem2Seq BIBREF7: Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.", "DSR BIBREF9: DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding." 
], "highlighted_evidence": [ "We compare our model with several baselines including:\n\nAttn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.\n\nPtr-UNK BIBREF23: Ptr-UNK is the model which augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.\n\nKV Net BIBREF6: The model adopted and argumented decoder which decodes over the concatenation of vocabulary and KB entities, which allows the model to generate entities.\n\nMem2Seq BIBREF7: Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.\n\nDSR BIBREF9: DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding." ] } ], "annotation_id": [ "1991049e4206374336627642891278443381b4f8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Camrest", "InCar Assistant" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Since dialogue dataset is not typically annotated with the retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distant supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selecting process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. Both the retrievers trained with distant-supervision and Gumbel-Softmax technique outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% responses in the dataset can be supported by a single KB row and better retrieval results lead to better task-oriented dialogue generation performance." ], "highlighted_evidence": [ "Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever." ] } ], "annotation_id": [ "856a4d7636c1b71c6d3f20ce88255ad419f3a7e8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "79a38cdbf1c7c9dd2c947e00a41850ab61d1d04f" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: An example of a task-oriented dialogue that incorporates a knowledge base (KB). The fourth row in KB supports the second turn of the dialogue. A dialogue system will produce a response with conflict entities if it includes the POI in the fourth row and the address in the fifth row, like “Valero is located at 899 Ames Ct”.", "Figure 2: The workflow of our Seq2Seq task-oriented dialogue generation model with KB-retriever. For simplification, we draw the single-hop memory network instead of the multiple-hop one we use in our model.", "Table 1: Comparison of our model with baselines", "Figure 3: Correlation between the number of KB rows and generation consistency on navigation domain.", "Table 2: The generation consistency and Human Evaluation on navigation domain. Cons. represents Consistency. Cor. represents Correctness. Flu. represents Fluency and Hum. represents Humanlikeness.", "Figure 4: KB score distribution. The distribution is the timestep when generate entity 200 Alester Ave for response “ Valero is located at 200 Alester Ave”" ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "7-Table1-1.png", "7-Figure3-1.png", "8-Table2-1.png", "8-Figure4-1.png" ] }
1812.07023
From FiLM to Video: Multi-turn Question Answering with Multi-modal Context
Understanding audio-visual content and the ability to have an informative conversation about it have both been challenging areas for intelligent systems. The Audio Visual Scene-aware Dialog (AVSD) challenge, organized as a track of the Dialog System Technology Challenge 7 (DSTC7), proposes a combined task, where a system has to answer questions pertaining to a video given a dialogue with previous question-answer pairs and the video itself. We propose for this task a hierarchical encoder-decoder model which computes a multi-modal embedding of the dialogue context. It first embeds the dialogue history using two LSTMs. We extract video and audio frames at regular intervals and compute semantic features using pre-trained I3D and VGGish models, respectively. Before summarizing both modalities into fixed-length vectors using LSTMs, we use FiLM blocks to condition them on the embeddings of the current question, which allows us to reduce the dimensionality considerably. Finally, we use an LSTM decoder that we train with scheduled sampling and evaluate using beam search. Compared to the modality-fusing baseline model released by the AVSD challenge organizers, our model achieves a relative improvements of more than 16%, scoring 0.36 BLEU-4 and more than 33%, scoring 0.997 CIDEr.
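The abstract describes conditioning per-segment I3D video features (and VGGish audio features) on the current question's embedding with FiLM blocks before summarizing them with an LSTM. Below is a minimal PyTorch sketch of feature-wise linear modulation applied to a sequence of segment features; the layer sizes, names, and the single-block form are illustrative assumptions, not the authors' exact time-extended architecture.

```python
import torch
import torch.nn as nn

class FiLMBlock(nn.Module):
    """Feature-wise Linear Modulation: scales and shifts each feature
    channel of the video/audio segment features using the question embedding."""

    def __init__(self, feature_dim, question_dim):
        super().__init__()
        self.gamma = nn.Linear(question_dim, feature_dim)  # per-channel scale
        self.beta = nn.Linear(question_dim, feature_dim)   # per-channel shift

    def forward(self, segment_feats, question_emb):
        # segment_feats: (num_segments, feature_dim), e.g. 30 I3D segment features
        # question_emb:  (question_dim,) embedding of the current question
        gamma = self.gamma(question_emb)                   # (feature_dim,)
        beta = self.beta(question_emb)                     # (feature_dim,)
        return gamma * segment_feats + beta                # broadcast over segments

# Usage sketch with placeholder sizes: condition 30 segment features of size 1024
# on a 256-dimensional question embedding, then feed the result to an LSTM summarizer.
film = FiLMBlock(feature_dim=1024, question_dim=256)
video = torch.randn(30, 1024)
question = torch.randn(256)
conditioned = film(video, question)   # (30, 1024)
```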
{ "section_name": [ "Introduction", "Related Work", "The avsd dataset and challenge", "Models", "Utterance-level Encoder", "Description Encoder", "Video Encoder with Time-Extended FiLM", "Audio Encoder", "Fusing Modalities for Dialogue Context", "Decoders", "Loss Function", "Experiments", "Conclusions" ], "paragraphs": [ [ "Deep neural networks have been successfully applied to several computer vision tasks such as image classification BIBREF0 , object detection BIBREF1 , video action classification BIBREF2 , etc. They have also been successfully applied to natural language processing tasks such as machine translation BIBREF3 , machine reading comprehension BIBREF4 , etc. There has also been an explosion of interest in tasks which combine multiple modalities such as audio, vision, and language together. Some popular multi-modal tasks combining these three modalities, and their differences are highlighted in Table TABREF1 .", "Given an image and a question related to the image, the vqa challenge BIBREF5 tasked users with selecting an answer to the question. BIBREF6 identified several sources of bias in the vqa dataset, which led to deep neural models answering several questions superficially. They found that in several instances, deep architectures exploited the statistics of the dataset to select answers ignoring the provided image. This prompted the release of vqa 2.0 BIBREF7 which attempts to balance the original dataset. In it, each question is paired to two similar images which have different answers. Due to the complexity of vqa, understanding the failures of deep neural architectures for this task has been a challenge. It is not easy to interpret whether the system failed in understanding the question or in understanding the image or in reasoning over it. The CLEVR dataset BIBREF8 was hence proposed as a useful benchmark to evaluate such systems on the task of visual reasoning. Extending question answering over images to videos, BIBREF9 have proposed MovieQA, where the task is to select the correct answer to a provided question given the movie clip on which it is based.", "Intelligent systems that can interact with human users for a useful purpose are highly valuable. To this end, there has been a recent push towards moving from single-turn qa to multi-turn dialogue, which is a natural and intuitive setting for humans. Among multi-modal dialogue tasks, visdial BIBREF10 provides an image and dialogue where each turn is a qa pair. The task is to train a model to answer these questions within the dialogue. The avsd challenge extends the visdial task from images to the audio-visual domain.", "We present our modelname model for the avsd task. modelname combines a hred for encoding and generating qa-dialogue with a novel FiLM-based audio-visual feature extractor for videos and an auxiliary multi-task learning-based decoder for decoding a summary of the video. It outperforms the baseline results for the avsd dataset BIBREF11 and was ranked 2nd overall among the dstc7 avsd challenge participants.", "In Section SECREF2 , we discuss existing literature on end-to-end dialogue systems with a special focus on multi-modal dialogue systems. Section SECREF3 describes the avsd dataset. In Section SECREF4 , we present the architecture of our modelname model. We describe our evaluation and experimental setup in Section SECREF5 and then conclude in Section SECREF6 ." 
], [ "With the availability of large conversational corpora from sources like Reddit and Twitter, there has been a lot of recent work on end-to-end modelling of dialogue for open domains. BIBREF12 treated dialogue as a machine translation problem where they translate from the stimulus to the response. They observed this to be more challenging than machine translation tasks due to the larger diversity of possible responses. Among approaches that just use the previous utterance to generate the current response, BIBREF13 proposed a response generation model based on the encoder-decoder framework. BIBREF14 also proposed an encoder-decoder based neural network architecture that uses the previous two utterances to generate the current response. Among discriminative methods (i.e. methods that produce a score for utterances from a set and then rank them), BIBREF15 proposed a neural architecture to select the best next response from a list of responses by measuring their similarity to the dialogue context. BIBREF16 extended prior work on encoder-decoder-based models to multi-turn conversations. They trained a hierarchical model called hred for generating dialogue utterances where a recurrent neural network encoder encodes each utterance. A higher-level recurrent neural network maintains the dialogue state by further encoding the individual utterance encodings. This dialogue state is then decoded by another recurrent decoder to generate the response at that point in time. In follow-up work, BIBREF17 used a latent stochastic variable to condition the generation process, which aided their model in producing longer coherent outputs that better retain the context.", "Datasets and tasks BIBREF10 , BIBREF18 , BIBREF19 have also been released recently to study visual-input based conversations. BIBREF10 train several generative and discriminative deep neural models for the visdial task. They observe that on this task, discriminative models outperform generative models and that models making better use of the dialogue history do better than models that do not use dialogue history at all. Unexpectedly, the performance between models that use the image features and models that do not use these features is not significantly different. As we discussed in Section SECREF1 , this is similar to the issues vqa models faced initially due to the imbalanced nature of the dataset, which leads us to believe that language is a strong prior on the visdial dataset too. BIBREF20 train two separate agents to play a cooperative game where one agent has to answer the other agent's questions, which in turn has to predict the fc7 features of the image obtained from VGGNet. Both agents are based on hred models and they show that agents fine-tuned with rl outperform agents trained solely with supervised learning. BIBREF18 train both generative and discriminative deep neural models on the igc dataset, where the task is to generate questions and answers to carry on a meaningful conversation. BIBREF19 train hred-based models on the GuessWhat?! dataset, in which agents have to play a guessing game where one player has to find an object in the picture which the other player knows about and can answer questions about it.", "Moving from image-based dialogue to video-based dialogue adds further complexity and challenges. Limited availability of such data is one of the challenges. Apart from the avsd dataset, there does not exist a video dialogue dataset to the best of our knowledge, and the avsd data itself is fairly limited in size. Extracting relevant features from videos also involves the inherent complexity of extracting features from individual frames and additionally requires understanding their temporal interaction. The temporal nature of videos also makes it important to be able to focus on a varying-length subset of video frames, as the action which is being asked about might be happening within them. There is also the need to encode the additional modality of audio, which would be required for answering questions that rely on the audio track. With the limited size of publicly available datasets based on the visual modality, learning useful features from high-dimensional visual data has been a challenge even for the visdial dataset, and we anticipate this to be an even more significant challenge on the avsd dataset as it involves videos.", "On the avsd task, BIBREF11 train an attention-based audio-visual scene-aware dialogue model which we use as the baseline model for this paper. They divide each video into multiple equal-duration segments and, from each of them, extract video features using an I3D BIBREF21 model, and audio features using a VGGish BIBREF22 model. The I3D model was pre-trained on the Kinetics BIBREF23 dataset and the VGGish model was pre-trained on Audio Set BIBREF24 . The baseline encodes the current utterance's question with an lstm BIBREF25 and uses the encoding to attend to the audio and video features from all the video segments and to fuse them together. The dialogue history is modelled with a hierarchical recurrent lstm encoder where the input to the lower-level encoder is a concatenation of question-answer pairs. The fused feature representation is concatenated with the question encoding and the dialogue history encoding, and the resulting vector is used to decode the current answer using an lstm decoder. Similar to the visdial models, the performance difference between the best model that uses text and the best model that uses both text and video features is small. This indicates that language is a stronger prior here and the baseline model is unable to make good use of the highly relevant video.", "Automated evaluation of both task-oriented and non-task-oriented dialogue systems has been a challenge BIBREF26 , BIBREF27 too. Most such dialogue systems are evaluated using per-turn evaluation metrics, since there is no suitable per-dialogue metric as conversations do not need to happen in a deterministic ordering of turns. These per-turn evaluation metrics are mostly word-overlap-based metrics such as BLEU, METEOR, ROUGE, and CIDEr, borrowed from the machine translation literature. Due to the diverse nature of possible responses, word-overlap metrics are not highly suitable for evaluating these tasks. Human evaluation of generated responses is considered the most reliable metric for such tasks, but it is cost prohibitive, and hence the dialogue system literature continues to rely widely on word-overlap-based metrics." ], [ "The avsd dataset BIBREF28 consists of dialogues collected via amt. Each dialogue is associated with a video from the Charades BIBREF29 dataset and has conversations between two amt workers related to the video. The Charades dataset has multi-action short videos and it provides text descriptions for these videos, which the avsd challenge also distributes as the caption. The avsd dataset has been collected using a similar methodology to the visdial dataset. In avsd, each dialogue turn consists of a question and answer pair.
One of the amt workers assumes the role of questioner while the other amt worker assumes the role of answerer. The questioner sees three static frames from the video and has to ask questions. The answerer sees the video and answers the questions asked by the questioner. After 10 such qa turns, the questioner wraps up by writing a summary of the video based on the conversation.", "Dataset statistics such as the number of dialogues, turns, and words for the avsd dataset are presented in Table TABREF5 . For the initially released prototype dataset, the training set of the avsd dataset corresponds to videos taken from the training set of the Charades dataset while the validation and test sets of the avsd dataset correspond to videos taken from the validation set of the Charades dataset. For the official dataset, training, validation and test sets are drawn from the corresponding Charades sets.", "The Charades dataset also provides additional annotations for the videos such as action, scene, and object annotations, which are considered to be external data sources by the avsd challenge, for which there is a special sub-task in the challenge. The action annotations also include the start and end time of the action in the video." ], [ "Our modelname model is based on the hred framework for modelling dialogue systems. In our model, an utterance-level recurrent lstm encoder encodes utterances and a dialogue-level recurrent lstm encoder encodes the final hidden states of the utterance-level encoders, thus maintaining the dialogue state and dialogue coherence. We use the final hidden states of the utterance-level encoders in the attention mechanism that is applied to the outputs of the description, video, and audio encoders. The attended features from these encoders are fused with the dialogue-level encoder's hidden states. An utterance-level decoder decodes the response for each such dialogue state following a question. We also add an auxiliary decoding module which is similar to the response decoder except that it tries to generate the caption and/or the summary of the video. We present our model in Figure FIGREF2 and describe the individual components in detail below." ], [ "The utterance-level encoder is a recurrent neural network consisting of a single layer of lstm cells. The input to the lstm are word embeddings for each word in the utterance. The utterance is concatenated with a special symbol <eos> marking the end of the sequence. We initialize our word embeddings using 300-dimensional GloVe BIBREF30 and then fine-tune them during training. For words not present in the GloVe vocabulary, we initialize their word embeddings from a random uniform distribution." ], [ "Similar to the utterance-level encoder, the description encoder is also a single-layer lstm recurrent neural network. Its word embeddings are also initialized with GloVe and then fine-tuned during training. For the description, we use the caption and/or the summary for the video provided with the dataset. The description encoder also has access to the last hidden state of the utterance-level encoder, which it uses to generate an attention map over the hidden states of its lstm. The final output of this module is the attention-weighted sum of the lstm hidden states." ], [ "For the video encoder, we use an I3D model pre-trained on the Kinetics dataset BIBREF23 and extract the output of its Mixed_7c layer for INLINEFORM0 (30 for our models) equi-distant segments of the video. 
Over these features, we add INLINEFORM1 (2 for our models) FiLM BIBREF31 blocks, which have been highly successful in visual reasoning problems. Each FiLM block applies a conditional (on the utterance encoding) feature-wise affine transformation on the features input to it, ultimately leading to the extraction of more relevant features. The FiLM blocks are followed by fully connected layers which are further encoded by a single-layer recurrent lstm network. The last hidden state of the utterance-level encoder then generates an attention map over the hidden states of its lstm, which is multiplied by the hidden states to provide the output of this module. We also experimented with using convolutional Mixed_5c features to capture spatial information, but on the limited avsd dataset they did not yield any improvement. When not using the FiLM blocks, we use the final-layer I3D features (provided by the avsd organizers) and encode them with the lstm directly, followed by the attention step. We present the video encoder in Figure FIGREF3 ." ], [ "The audio encoder is structurally similar to the video encoder. We use the VGGish features provided by the avsd challenge organizers. Also similar to the video encoder, when not using the FiLM blocks, we use the VGGish features and encode them with the lstm directly, followed by the attention step. The audio encoder is depicted in Figure FIGREF4 ." ], [ "The outputs of the encoders for past utterances, descriptions, video, and audio together form the dialogue context INLINEFORM0 which is the input of the decoder. We first combine past utterances using a dialogue-level encoder, which is a single-layer lstm recurrent neural network. The inputs to this encoder are the final hidden states of the utterance-level lstm. To combine the hidden states of these diverse modalities, we found concatenation to perform better on the validation set than averaging or the Hadamard product." ], [ "The answer decoder consists of a single-layer recurrent lstm network and generates the answer to the last question utterance. At each time-step, it is provided with the dialogue-level state and produces a softmax over a vector corresponding to vocabulary words, and it stops when 30 words have been produced or an end-of-sentence token is encountered.", "The auxiliary decoder is functionally similar to the answer decoder. The decoded sentence is the caption and/or description of the video. We use the Video Encoder state instead of the Dialogue-level Encoder state as input, since with this module we want to learn a better video representation capable of decoding the description." ], [ "For a given context embedding $C_t$ at dialogue turn $t$, we minimize the negative log-likelihood of the answer words over the vocabulary of size $V$, normalized by the number of words $M$ in the ground truth response $r$: $L(C_t, r) = -\frac{1}{M}\sum_{m=1}^{M}\sum_{i}^{V} [r_{t,m}=i]\,\log p_{t,m,i}$, where the probabilities $p_{t,m,i}$ are given by the decoder LSTM output. The word fed to the decoder at step $m$ is $r^*_{t,m-1} = \begin{cases} r_{t,m-1} & \text{if } s > 0.2,\; s \sim U(0,1) \\ v_{t,m-1} & \text{otherwise,} \end{cases}$ as given by scheduled sampling BIBREF32 , where $v_{t,m-1}$ denotes the word generated by the decoder at the previous step and $r^*_{t,0}$ is a symbol denoting the start of a sequence. We optimize the model using the AMSGrad algorithm BIBREF33 and use a per-condition random search to determine hyperparameters. We train the model using the BLEU-4 score on the validation set as our stopping criterion."
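To make the objective above concrete, the following is a minimal sketch (not the authors' code) of a length-normalized negative log-likelihood combined with the fixed-threshold scheduled-sampling rule described in this section. The decoder cell, embedding, output projection, tensor shapes, and function name are illustrative assumptions; only the 0.2 threshold and the per-word normalization come from the text.

```python
import torch
import torch.nn.functional as F

def response_nll_with_scheduled_sampling(cell, embed, out_proj, h0, c0,
                                          target, sos_id, threshold=0.2):
    """cell: torch.nn.LSTMCell; target: (M,) ground-truth word ids of one response."""
    h, c = h0, c0                                    # initialized from the fused dialogue context
    prev_word = torch.tensor([sos_id])               # start-of-sequence symbol
    log_probs = []
    for m in range(target.size(0)):
        h, c = cell(embed(prev_word), (h, c))
        log_p = F.log_softmax(out_proj(h), dim=-1)   # (1, V)
        log_probs.append(log_p.squeeze(0))
        # Scheduled sampling with a fixed threshold: feed the ground-truth
        # word when s > 0.2 (s ~ U(0, 1)), otherwise feed the predicted word.
        s = torch.rand(())
        prev_word = target[m:m + 1] if s > threshold else log_p.argmax(dim=-1)
    log_probs = torch.stack(log_probs)               # (M, V)
    # Negative log-likelihood of the ground-truth words, normalized by M.
    return -log_probs.gather(1, target.unsqueeze(1)).mean()
```

Training would then minimize this quantity with the AMSGrad variant of Adam (e.g., torch.optim.Adam(params, amsgrad=True)), in line with the optimizer named above.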
], [ "The avsd challenge tasks we address here are:", "We train our modelname model for Task 1.a and Task 2.a of the challenge and we present the results in Table TABREF9 . Our model outperforms the baseline model released by BIBREF11 on all of these tasks. The scores for the winning team have been released to challenge participants and are also included. Their approach, however, is not public as of yet. We observe the following for our models:", "Since the official test set has not been released publicly, results reported on the official test set have been provided by the challenge organizers. For the prototype test set and for the ablation study presented in Table TABREF24 , we use the same code for evaluation metrics as used by BIBREF11 for fairness and comparability. We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:", "Our primary architectural differences over the baseline model are: not concatenating the question, answer pairs before encoding them, the auxiliary decoder module, and using the Time-Extended FiLM module for feature extraction. These, combined with using scheduled sampling and running hyperparameter optimization over the validation set to select hyperparameters, give us the observed performance boost.", "We observe that our models generate fairly relevant responses to questions in the dialogues, and models with audio-visual inputs respond to audio-visual questions (e.g. “is there any voices or music ?”) correctly more often.", "We conduct an ablation study on the effectiveness of different components (eg., text, video and audio) and present it in Table TABREF24 . Our experiments show that:" ], [ "We presented modelname, a state-of-the-art dialogue model for conversations about videos. We evaluated the model on the official AVSD test set, where it achieves a relative improvement of more than 16% over the baseline model on BLEU-4 and more than 33% on CIDEr. The challenging aspect of multi-modal dialogue is fusing modalities with varying information density. On AVSD, it is easiest to learn from the input text, while video features remain largely opaque to the decoder. modelname uses a generalization of FiLM to video that conditions video feature extraction on a question. However, similar to related work, absolute improvements of incorporating video features into dialogue are consistent but small. Thus, while our results indicate the suitability of our FiLM generalization, they also highlight that applications at the intersection between language and video are currently constrained by the quality of video features, and emphasizes the need for larger datasets." ] ] }
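As an aside on the FiLM-based conditioning referred to above (and described in the video and audio encoder sections), the core operation is a feature-wise affine transformation whose parameters are predicted from the question encoding. The sketch below illustrates only that operation, not the paper's Time-Extended FiLM block (which also includes fully connected layers, an lstm, and attention); all dimensions, names, and the single-linear-layer design are made-up assumptions.

```python
# Minimal sketch of a FiLM-style conditioning layer (not the authors' exact block):
# the question encoding predicts a per-channel affine transformation (gamma, beta)
# that modulates the per-segment video or audio features.
import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    def __init__(self, cond_dim, feat_dim):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * feat_dim)

    def forward(self, features, question_enc):
        # features: (segments, feat_dim), e.g. I3D/VGGish features per segment
        # question_enc: (cond_dim,), last hidden state of the utterance encoder
        gamma, beta = self.to_gamma_beta(question_enc).chunk(2, dim=-1)
        return gamma * features + beta      # feature-wise affine modulation

# Example with invented sizes: 30 video segments, 1024-d features, 512-d question encoding.
film = FiLMLayer(cond_dim=512, feat_dim=1024)
modulated = film(torch.randn(30, 1024), torch.randn(512))   # -> (30, 1024)
```

Conditioning the affine parameters on the question lets the same video features be re-weighted differently for different questions, which is the motivation given above for generalizing FiLM to video.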
{ "question": [ "At which interval do they extract video and audio frames?", "Do they use pretrained word vectors for dialogue context embedding?", "Do they train a different training method except from scheduled sampling?" ], "question_id": [ "05e3b831e4c02bbd64a6e35f6c52f0922a41539a", "bd74452f8ea0d1d82bbd6911fbacea1bf6e08cab", "6472f9d0a385be81e0970be91795b1b97aa5a9cf" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "ee2861105f2d63096676c4b63554fe0593a9c6a0" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [], "highlighted_evidence": [ "The utterance is concatenated with a special symbol marking the end of the sequence. We initialize our word embeddings using 300-dimensional GloVe BIBREF30 and then fine-tune them during training." ] } ], "annotation_id": [ "04f7cd52b0492dc423550fd5e96c757cec3066cc" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Answer with content missing: (list missing) \nScheduled sampling: In our experiments, we found that models trained with scheduled sampling performed better (about 0.004 BLEU-4 on validation set) than the ones trained using teacher-forcing for the AVSD dataset. Hence, we use scheduled sampling for all the results we report in this paper.\n\nYes.", "evidence": [ "Since the official test set has not been released publicly, results reported on the official test set have been provided by the challenge organizers. For the prototype test set and for the ablation study presented in Table TABREF24 , we use the same code for evaluation metrics as used by BIBREF11 for fairness and comparability. We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:" ], "highlighted_evidence": [ "We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:" ] } ], "annotation_id": [ "88bf278c9f23fbbb3cee3410c62d8760350ddb7d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Tasks with audio, visual and text modalities", "Figure 1: FA-HRED uses the last question’s encoding to attend to video description, audio, and video features. These features along with the dialogue state enable the model to generate the answer to the current question. The ground truth answer is encoded into the dialogue history for the next turn.", "Figure 2: Video Encoder Module: FiLM for video features. Question encoding of the current question is used here.", "Figure 3: Audio Encoder Module: FiLM for audio features. Question encoding of the current question is used here.", "Table 2: AVSD: Dataset Statistics. Top: official dataset. Bottom half: prototype dataset released earlier.", "Table 3: Scores achieved by our model on different tasks of the AVSD challenge test set. Task 1 model configurations use both video and text features while Task 2 model configurations only use text features. First section: train on official, test on official. Second section: train on prototype, test on official. Third section: train on prototype, test on prototype.", "Table 4: Model ablation Study comparing BLEU-4 on the validation set: The best model makes use of all modalities and the video summary. Applying FiLM to audio and video features consistently outperforms unconditioned feature extraction. Video features (I3D) are more important than audio (VGGish). Combining all multi-modal components (e.g., text, audio and video) helps improve performance only when using FiLM blocks." ], "file": [ "1-Table1-1.png", "3-Figure1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "4-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png" ] }
1610.04377
Civique: Using Social Media to Detect Urban Emergencies
We present the Civique system for emergency detection in urban areas by monitoring micro-blog posts like tweets. The system detects emergency-related events and classifies them into appropriate categories like "fire", "accident", "earthquake", etc. We demonstrate our ideas by classifying Twitter posts in real time, visualizing the ongoing event on a map interface, and alerting users with options to contact relevant authorities, both online and offline. We evaluate our classifiers for both steps, i.e., emergency detection and categorization, and obtain F-scores exceeding 70% and 90%, respectively. We demonstrate Civique using a web interface and an Android application, in real time, and show its use for both tweet detection and visualization.
{ "section_name": [ "Introduction", "Motivation and Challenges", "Our Approach", "Pre-Processing Modules", "Emergency Classification", "Type Classification", "Location Visualizer", "Evaluation", "Dataset Creation", "Classifier Evaluation", "Demostration Description", "Conclusions" ], "paragraphs": [ [ "With the surge in the use of social media, micro-blogging sites like Twitter, Facebook, and Foursquare have become household words. Growing ubiquity of mobile phones in highly populated developing nations has spurred an exponential rise in social media usage. The heavy volume of social media posts tagged with users' location information on micro-blogging website Twitter presents a unique opportunity to scan these posts. These Short texts (e.g. \"tweets\") on social media contain information about various events happening around the globe, as people post about events and incidents alike. Conventional web outlets provide emergency phone numbers (i.e. 100, 911), etc., and are fast and accurate. Our system, on the other hand, connects its users through a relatively newer platform i.e. social media, and provides an alternative to these conventional methods. In case of their failure or when such means are busy/occupied, an alternative could prove to be life saving.", "These real life events are reported on Twitter with different perspectives, opinions, and sentiment. Every day, people discuss events thousands of times across social media sites. We would like to detect such events in case of an emergency. Some previous studies BIBREF0 investigate the use of features such as keywords in the tweet, number of words, and context to devise a classifier for event detection. BIBREF1 discusses various techniques researchers have used previously to detect events from Twitter. BIBREF2 describe a system to automatically detect events about known entities from Twitter. This work is highly specific to detection of events only related to known entities. BIBREF3 discuss a system that returns a ranked list of relevant events given a user query.", "Several research efforts have focused on identifying events in real time( BIBREF4 BIBREF5 BIBREF6 BIBREF0 ). These include systems to detect emergent topics from Twitter in real time ( BIBREF4 BIBREF7 ), an online clustering technique for identifying tweets in real time BIBREF5 , a system to detect localized events and also track evolution of such events over a period of time BIBREF6 . Our focus is on detecting urban emergencies as events from Twitter messages. We classify events ranging from natural disasters to fire break outs, and accidents. Our system detects whether a tweet, which contains a keyword from a pre-decided list, is related to an actual emergency or not. It also classifies the event into its appropriate category, and visualizes the possible location of the emergency event on the map. We also support notifications to our users, containing the contacts of specifically concerned authorities, as per the category of their tweet.", "The rest of the paper is as follows: Section SECREF2 provides the motivation for our work, and the challenges in building such a system. Section SECREF3 describes the step by step details of our work, and its results. We evaluate our system and present the results in Section SECREF4 . Section SECREF5 showcases our demonstrations in detail, and Section SECREF6 concludes the paper by briefly describing the overall contribution, implementation and demonstration." 
], [ "In 2015, INLINEFORM0 of all unnatural deaths in India were caused by accidents, and INLINEFORM1 by accidental fires. Moreover, the Indian subcontinent suffered seven earthquakes in 2015, with the recent Nepal earthquake alone killing more than 9000 people and injuring INLINEFORM2 . We believe we can harness the current social media activity on the web to minimize losses by quickly connecting affected people and the concerned authorities. Our work is motivated by the following factors, (a) Social media is very accessible in the current scenario. (The “Digital India” initiative by the Government of India promotes internet activity, and thus a pro-active social media.) (b) As per the Internet trends reported in 2014, about 117 million Indians are connected to the Internet through mobile devices. (c) A system such as ours can point out or visualize the affected areas precisely and help inform the authorities in a timely fashion. (d) Such a system can be used on a global scale to reduce the effect of natural calamities and prevent loss of life.", "There are several challenges in building such an application: (a) Such a system expects a tweet to be location tagged. Otherwise, event detection techniques to extract the spatio-temporal data from the tweet can be vague, and lead to false alarms. (b) Such a system should also be able to verify the user's credibility as pranksters may raise false alarms. (c) Tweets are usually written in a very informal language, which requires a sophisticated language processing component to sanitize the tweet input before event detection. (d) A channel with the concerned authorities should be established for them to take serious action, on alarms raised by such a system. (e) An urban emergency such as a natural disaster could affect communications severely, in case of an earthquake or a cyclone, communications channels like Internet connectivity may get disrupted easily. In such cases, our system may not be of help, as it requires the user to be connected to the internet. We address the above challenges and present our approach in the next section." ], [ "We propose a software architecture for Emergency detection and visualization as shown in figure FIGREF9 . We collect data using Twitter API, and perform language pre-processing before applying a classification model. Tweets are labelled manually with <emergency>and <non-emergency>labels, and later classified manually to provide labels according to the type of emergency they indicate. We use the manually labeled data for training our classifiers.", "We use traditional classification techniques such as Support Vector Machines(SVM), and Naive Bayes(NB) for training, and perform 10-fold cross validation to obtain f-scores. Later, in real time, our system uses the Twitter streaming APIs to get data, pre-processes it using the same modules, and detects emergencies using the classifiers built above. The tweets related to emergencies are displayed on the web interface along with the location and information for the concerned authorities. The pre-processing of Twitter data obtained is needed as it usually contains ad-hoc abbreviations, phonetic substitutions, URLs, hashtags, and a lot of misspelled words. We use the following language processing modules for such corrections." ], [ "We implement a cleaning module to automate the cleaning of tweets obtained from the Twitter API. We remove URLs, special symbols like @ along with the user mentions, Hashtags and any associated text. 
We also replace special symbols by blank spaces, and inculcate the module as shown in figure FIGREF9 .", "An example of such a sample tweet cleaning is shown in table TABREF10 .", "While tweeting, users often express their emotions by stressing over a few characters in the word. For example, usage of words like hellpppp, fiiiiiireeee, ruuuuunnnnn, druuuuuunnnkkk, soooooooo actually corresponds to help, fire, run, drunk, so etc. We use the compression module implemented by BIBREF8 for converting terms like “pleeeeeeeaaaaaassseeee” to “please”.", "It is unlikely for an English word to contain the same character consecutively for three or more times. We, hence, compress all the repeated windows of character length greater than two, to two characters. For example “pleeeeeaaaassee” is converted to “pleeaassee”. Each window now contains two characters of the same alphabet in cases of repetition. Let n be the number of windows, obtained from the previous step. We, then, apply brute force search over INLINEFORM0 possibilities to select a valid dictionary word.", "Table TABREF13 contains sanitized sample output from our compression module for further processing.", "Text Normalization is the process of translating ad-hoc abbreviations, typographical errors, phonetic substitution and ungrammatical structures used in text messaging (Tweets and SMS) to plain English. Use of such language (often referred as Chatting Language) induces noise which poses additional processing challenges.", "We use the normalization module implemented by BIBREF8 for text normalization. Training process requires a Language Model of the target language and a parallel corpora containing aligned un-normalized and normalized word pairs. Our language model consists of 15000 English words taken from various sources on the web.", "Parallel corpora was collected from the following sources:", "Stanford Normalization Corpora which consists of 9122 pairs of un-normalized and normalized words / phrases.", "The above corpora, however, lacked acronyms and short hand texts like 2mrw, l8r, b4, hlp, flor which are frequently used in chatting. We collected 215 pairs un-normalized to normalized word/phrase mappings via crowd-sourcing.", "Table TABREF16 contains input and normalized output from our module.", "Users often make spelling mistakes while tweeting. A spell checker makes sure that a valid English word is sent to the classification system. We take this problem into account by introducing a spell checker as a pre-processing module by using the JAVA API of Jazzy spell checker for handling spelling mistakes.", "An example of correction provided by the Spell Checker module is given below:-", "Input: building INLINEFORM0 flor, help", "Output: building INLINEFORM0 floor, help", "Please note that, our current system performs compression, normalization and spell-checking if the language used is English. The classifier training and detection process are described below." ], [ "The first classifier model acts as a filter for the second stage of classification. We use both SVM and NB to compare the results and choose SVM later for stage one classification model, owing to a better F-score. The training is performed on tweets labeled with classes <emergency>, and <non-emergency> based on unigrams as features. We create word vectors of strings in the tweet using a filter available in the WEKA API BIBREF9 , and perform cross validation using standard classification techniques." 
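As an illustration of the compression module described earlier in this section, the rough sketch below squeezes character runs of length three or more down to two and then brute-forces the 2^n choices of keeping one or two characters per repeated window, accepting the first candidate found in a dictionary. The regular expressions, the toy dictionary, and the fallback behaviour are assumptions; only the two-character squeezing and the 2^n search come from the text.

```python
import itertools
import re

DICTIONARY = {"please", "help", "fire", "run", "so", "drunk"}  # stand-in for the 15K-word lexicon

def compress(word):
    # Step 1: squeeze runs of 3+ identical characters to 2 ("pleeeeeaaaassee" -> "pleeaassee").
    squeezed = re.sub(r"(.)\1{2,}", r"\1\1", word)
    # Step 2: split into repeated windows ("ee") and single characters.
    parts = [m.group(0) for m in re.finditer(r"(.)\1?", squeezed)]
    windows = [i for i, p in enumerate(parts) if len(p) == 2]
    # Step 3: try all 2^n ways of keeping one or two characters in each window.
    for choice in itertools.product((1, 2), repeat=len(windows)):
        candidate_parts = list(parts)
        for idx, keep in zip(windows, choice):
            candidate_parts[idx] = parts[idx][:keep]
        candidate = "".join(candidate_parts)
        if candidate in DICTIONARY:
            return candidate
    return squeezed  # fall back to the squeezed form if no dictionary word is found

print(compress("pleeeeeeaaaaaassseeee"))  # -> "please"
```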
], [ "We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately, depending on the type of emergencies they indicate. This multi-class classifier is trained on data manually labeled with classes. We tokenize the training data using “NgramTokenizer” and then apply a filter to create word vectors of strings before training. We use “trigrams” as features to build a model which, later, classifies tweets into appropriate categories, in real time. We then perform cross validation using standard techniques to calculate the results, which are shown under the label “Stage 2”, in table TABREF20 ." ], [ "We use the Google Maps Geocoding API to display the possible location of the tweet origin based on longitude and latitude. Our visualizer presents the user with a map and pinpoints the location with custom icons for earthquake, cyclone, fire accident etc. Since we currently collect tweets with a location filter for the city of \"Mumbai\", we display its map location on the interface. The possible occurrences of such incidents are displayed on the map as soon as our system is able to detect them.", "We also display the same on an Android device using the WebView functionality available to developers, thus solving the issue of portability. Our system displays visualization of the various emergencies detected on both web browsers and mobile devices." ], [ "We evaluate our system using automated and manual evaluation techniques. We perform 10-fold cross validation to obtain the F-scores for our classification systems. We use the following technique for dataset creation. We test the system in real-time environments, and tweet about fires at random locations in our city, using test accounts. Our system was able to detect such tweets and display them with their locations shown on the map." ], [ "We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident” etc. Later, we manually label tweets with <emergency> and <non-emergency> labels for classification as stage one. Our dataset contains 1313 tweets with the positive label <emergency> and 1887 tweets with the negative label <non-emergency>. We create another dataset with the positively labeled tweets and provide them with category labels like “fire”, “accident”, “earthquake” etc.", "" ], [ "The results of 10-fold cross-validation performed for stage one are shown in table TABREF20 , under the label “Stage 1”. In table TABREF20 , for “Stage 1” of classification, the F-score obtained using the SVM classifier is INLINEFORM0 , as shown in row 2, column 2. We also provide the system with sample tweets in real time and assess its ability to detect the emergency, and classify it accordingly. The classification training for Stage 1 was performed using two traditional classification techniques, SVM and NB. SVM outperformed NB by around INLINEFORM1 and became the choice of classification technique for stage one.", "Some false positives obtained during manual evaluation are “I am sooooo so drunk right nowwwwwwww” and “fire in my office , the boss is angry”. These occurrences show the need for more labeled gold data for our classifiers, and some other features, like Part-of-Speech tags, Named Entity recognition, Bigrams, Trigrams etc., to perform better.", "The results of 10-fold cross-validation performed for the stage two classification model are also shown in table TABREF20 , under the label “Stage 2”. The training for stage two was also performed using both SVM and NB, but NB outperformed SVM by around INLINEFORM0 to become the choice for the stage two classification model.", "We also perform attribute evaluation for the classification model, and create a word cloud based on the output values, shown in figure FIGREF24 . It shows that our classifier model is trained on appropriate words, which are very close to the emergency situations, viz. “fire”, “earthquake”, “accident”, “break” (a unigram representation here, but it possibly occurs in a bigram phrase with “fire”) etc. In figure FIGREF24 , the word cloud represents the word “respond” as the most frequently occurring word, as people need urgent help and a quick response from the assistance teams." ], [ "Users interact with Civique through its Web-based user interface and Android-based application interface. The features underlying Civique are demonstrated through the following two showcases:", "Showcase 1: Tweet Detection and Classification", "This showcase aims at detecting related tweets, and classifying them into appropriate categories. For this, we have created a list of filter words, which are used to filter tweets from the Twitter streaming API. This set of words helps us filter the tweets related to any incident. We will tweet, and users are able to see how our system captures such tweets and classifies them. Users should be able to see the tweet emerge as an incident on the web interface, as shown in figure FIGREF26 , and on the Android application, as shown in figure FIGREF27 . Figure FIGREF27 demonstrates how a notification is generated when our system detects an emergency tweet. When a user clicks the emerged spot, the system should be able to display the sanitized version / extracted spatio-temporal data from the tweet. We test the system in a real-time environment, and validate our experiments. We also report the false positives generated during the process in section SECREF25 above.", "Showcase 2: User Notification and Contact Info.", "Civique includes a set of local contacts for civic authorities who are to be / who can be contacted in case of various emergencies. Users can see how Civique detects an emergency and classifies it. They can also watch how the system generates a notification on the web interface and the Android interface, requesting them to contact the authorities for emergencies. Users can change their preferences on the mobile device anytime and can also opt not to receive notifications. Users should be able to contact the authorities online using the application, but in case the online contact is not responsive, or in case of a sudden loss of connectivity, we provide the user with the offline contact information of the concerned civic authorities along with the notifications." ], [ "Civique is a system which detects urban emergencies like earthquakes, cyclones, fire breakouts, accidents, etc. and visualizes them on both a browsable web interface and an Android application. We collect data from the popular micro-blogging site Twitter and use language processing modules to sanitize the input. We use this data as input to train a two-step classification system, which indicates whether a tweet is related to an emergency or not, and if it is, then what category of emergency it belongs to.
We display such positively classified tweets along with their type and location on a Google map, and notify our users to inform the concerned authorities, and possibly evacuate the area, if their location matches the affected area. We believe such a system can help the disaster management machinery and government bodies like the fire department, police department, etc., to act swiftly, thus minimizing the loss of life.", "Twitter users use slang, profanity, misspellings and neologisms. We use standard cleaning methods, and combine NLP with Machine Learning (ML) to further our cause of tweet classification. At the current stage, we also have an Android application ready for our system, which shows the improvised, mobile-viewable web interface.", "In the future, we aim to develop detection of emergency categories on the fly; obscure emergencies like “airplane hijacking” should also be detected by our system. We plan to analyze the temporal sequence of the tweet set from a single location to determine whether multiple problems at the same location are the result of a single event, or relate to multiple events." ] ] }
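To summarize the two-step classification pipeline described in this paper, here is a hedged sketch using scikit-learn as a stand-in for the WEKA-based implementation: stage one is a binary emergency filter over unigram counts with a linear SVM, and stage two is a multi-class Naive Bayes model over n-gram counts (up to trigrams, a simplification of the trigram features in the text). The toy training examples and labels are made up.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Stage 1: emergency vs. non-emergency, unigram features + SVM.
stage1 = make_pipeline(CountVectorizer(ngram_range=(1, 1)), LinearSVC())
stage1.fit(["fire in the building please help", "enjoying dinner with friends"],
           ["emergency", "non-emergency"])

# Stage 2: category of the emergency, n-gram features (up to trigrams) + multi-class Naive Bayes.
stage2 = make_pipeline(CountVectorizer(ngram_range=(1, 3)), MultinomialNB())
stage2.fit(["fire in the building please help", "huge earthquake tremors felt here"],
           ["fire", "earthquake"])

def classify(tweet):
    # Only tweets flagged as emergencies by stage 1 are categorized by stage 2.
    if stage1.predict([tweet])[0] == "emergency":
        return stage2.predict([tweet])[0]
    return "non-emergency"

print(classify("help there is a fire near the station"))
```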
{ "question": [ "Is the web interface publicly accessible?", "Is the Android application publicly available?", "What classifier is used for emergency categorization?", "What classifier is used for emergency detection?", "Do the tweets come from any individual?", "How many categories are there?", "What was the baseline?", "Are the tweets specific to a region?" ], "question_id": [ "2173809eb117570d289cefada6971e946b902bd6", "293e9a0b30670f4f0a380e574a416665a8c55bc2", "17de58c17580350c9da9c2f3612784b432154d11", "ff27d6e6eb77e55b4d39d343870118d1a6debd5e", "29772ba04886bee2d26b7320e1c6d9b156078891", "94dc437463f7a7d68b8f6b57f1e3606eacc4490a", "9d9d84822a9c42eb0257feb7c18086d390dae3be", "d27e3a099954e917b6491e81b2e907478d7f1233" ], "nlp_background": [ "", "", "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "", "", "" ], "search_query": [ "", "", "", "", "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "aa7c4541890cc2730d2bfda8e30dd452dd843d67" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "d00389c4bdc595c2037cbb572cce4d394fd63f7f" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "multi-class Naive Bayes" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately, depending on the type of emergencies they indicate. This multi-class classifier is trained on data manually labeled with classes. We tokenize the training data using “NgramTokenizer” and then, apply a filter to create word vectors of strings before training. We use “trigrams” as features to build a model which, later, classifies tweets into appropriate categories, in real time. We then perform cross validation using standard techniques to calculate the results, which are shown under the label “Stage 2”, in table TABREF20 ." ], "highlighted_evidence": [ "We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately, depending on the type of emergencies they indicate." ] } ], "annotation_id": [ "f8e2eeaa8cfe709dd639cdf4b4f3ca79e16859c1" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "SVM" ], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [ "The first classifier model acts as a filter for the second stage of classification. We use both SVM and NB to compare the results and choose SVM later for stage one classification model, owing to a better F-score. The training is performed on tweets labeled with classes , and based on unigrams as features. 
We create word vectors of strings in the tweet using a filter available in the WEKA API BIBREF9 , and perform cross validation using standard classification techniques." ] } ], "annotation_id": [ "051cfb7e9a7a2d7b9c376a8643e8391bf8d66a7d" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident” etc. Later, we manually label tweets with <emergency>and <non-emergency>labels for classification as stage one. Our dataset contains 1313 tweet with positive label <emergency>and 1887 tweets with a negative label <non-emergency>. We create another dataset with the positively labeled tweets and provide them with category labels like “fire”, “accident”, “earthquake” etc." ], "highlighted_evidence": [ "We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident” etc. " ] } ], "annotation_id": [ "a804c55ae0c0cbce8c379772c86692cafadabf59" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "9bc42b21ef3520dd8565a2320d83529b21664c8c" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "fef4a71e45541caceeecb44c8de4ef4d8595fd10" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [ "We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident” etc. Later, we manually label tweets with and labels for classification as stage one. Our dataset contains 1313 tweet with positive label and 1887 tweets with a negative label . We create another dataset with the positively labeled tweets and provide them with category labels like “fire”, “accident”, “earthquake” etc." ] } ], "annotation_id": [ "d01159da96747c4eba3815ffe050a3634ecbab62" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] } ] }
{ "caption": [ "Fig. 1. System Architecture", "Table 2. Sample output of Compression module", "Table 4. Classification Results", "Fig. 2. Word Cloud of top attributes", "Fig. 3. Screenshot: Web Interface", "Fig. 4. Screenshot: Mobile Interface Fig. 5. Screenshot: Generated Notification" ], "file": [ "3-Figure1-1.png", "5-Table2-1.png", "6-Table4-1.png", "7-Figure2-1.png", "8-Figure3-1.png", "9-Figure4-1.png" ] }
1906.06448
Can neural networks understand monotonicity reasoning?
Monotonicity reasoning is one of the important reasoning skills for any intelligent natural language inference (NLI) model in that it requires the ability to capture the interaction between lexical and syntactic structures. Since no test set has been developed for monotonicity reasoning with wide coverage, it is still unclear whether neural models can perform monotonicity reasoning in a proper way. To investigate this issue, we introduce the Monotonicity Entailment Dataset (MED). Performance by state-of-the-art NLI models on the new test set is substantially worse, under 55%, especially on downward reasoning. In addition, analysis using a monotonicity-driven data augmentation method showed that these models might be limited in their generalization ability in upward and downward reasoning.
{ "section_name": [ "Introduction", "Monotonicity", "Human-oriented dataset", "Linguistics-oriented dataset", "Statistics", "Baselines", "Data augmentation for analysis", "Discussion", "Conclusion", "Acknowledgement" ], "paragraphs": [ [ "Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 .", "Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider examples in ( \"Introduction\" ) and ( \"Introduction\" ).", "All [ workers $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [joined for a French dinner $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] All workers joined for a dinner All new workers joined for a French dinner Not all [new workers $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] joined for a dinner Not all workers joined for a dinner ", "A context is upward entailing (shown by [... $\\leavevmode {\\color {red!80!black}\\uparrow }$ ]) that allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where French dinner is replaced by a more general concept dinner. On the other hand, a downward entailing context (shown by [... $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]) allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where workers is replaced by a more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in ( \"Introduction\" )), as witness the fact that ( \"Introduction\" ) entails ( \"Introduction\" ). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.", "For previous datasets containing monotonicity inference problems, FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manually-curated datasets for testing a wide range of linguistic phenomena. However, monotonicity problems are limited to very small sizes (FraCaS: 37/346 examples and GLUE: 93/1650 examples). The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning.", "To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section \"Dataset\" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning.", "We measure the performance of state-of-the-art NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning (Section \"Results and Discussion\" ). 
The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences.", "In addition, we analyzed the performance of models trained with an automatically created monotonicity dataset, HELP BIBREF11 . The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set, while they perform worse in the opposite direction. This indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set, and models might lack the ability to capture the structural relations between monotonicity operators and their arguments." ], [ "As an example of a monotonicity inference, consider the example with the determiner every in ( \"Monotonicity\" ); here the premise $P$ entails the hypothesis $H$ .", " $P$ : Every [ $_{\\scriptsize \\mathsf {NP}}$ person $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [ $_{\\scriptsize \\mathsf {VP}}$ bought a movie ticket $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] $H$ : Every young person bought a ticket ", "Every is downward entailing in the first argument ( $\\mathsf {NP}$ ) and upward entailing in the second argument ( $\\mathsf {VP}$ ), and thus the term person can be more specific by adding modifiers (person $\\sqsupseteq $ young person), replacing it with its hyponym (person $\\sqsupseteq $ spectator), or adding conjunction (person $\\sqsupseteq $ person and alien). On the other hand, the term buy a ticket can be more general by removing modifiers (bought a movie ticket $\\sqsubseteq $ bought a ticket), replacing it with its hypernym (bought a movie ticket $\\sqsubseteq $ bought a show ticket), or adding disjunction (bought a movie ticket $\\sqsubseteq $ bought or sold a movie ticket). Table 1 shows determiners modeled as binary operators and their polarities with respect to the first and second arguments.", "There are various types of downward operators, not limited to determiners (see Table 2 ). As shown in ( \"Monotonicity\" ), if a propositional object is embedded in a downward monotonic context (e.g., when), the polarity of words over its scope can be reversed.", " $P$ : When [every [ $_{\\scriptsize \\mathsf {NP}}$ young person $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] [ $_{\\scriptsize \\mathsf {VP}}$ bought a ticket $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]], [that shop was open] $H$ : When [every [ $_{\\scriptsize \\mathsf {NP}}$ person] [ $_{\\scriptsize \\mathsf {VP}}$ bought a movie ticket]], [that shop was open] ", "Thus, the polarity ( $\\leavevmode {\\color {red!80!black}\\uparrow }$ and $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ), where the replacement with more general (specific) phrases licenses entailment, needs to be determined by the interaction of monotonicity properties and syntactic structures; polarity of each constituent is calculated based on a monotonicity operator of functional expressions (e.g., every, when) and their function-term relations." ], [ "To create monotonicity inference problems, we should satisfy three requirements: (a) detect the monotonicity operators and their arguments; (b) based on the syntactic structure, induce the polarity of the argument positions; and (c) replace the phrase in the argument position with a more general or specific phrase in natural and various ways (e.g., by using lexical knowledge or logical connectives). 
For (a) and (b), we first conduct polarity computation on a syntactic structure for each sentence, and then select premises involving upward/downward expressions.", "For (c), we use crowdsourcing to narrow or broaden the arguments. The motivation for using crowdsourcing is to collect naturally alike monotonicity inference problems that include various expressions. One problem here is that it is unclear how to instruct workers to create monotonicity inference problems without knowledge of natural language syntax and semantics. We must make tasks simple for workers to comprehend and provide sound judgements. Moreover, recent studies BIBREF12 , BIBREF3 , BIBREF13 point out that previous crowdsourced datasets, such as SNLI BIBREF14 and MultiNLI BIBREF10 , include hidden biases. As these previous datasets are motivated by approximated entailments, workers are asked to freely write hypotheses given a premise, which does not strictly restrict them to creating logically complex inferences.", "Taking these concerns into consideration, we designed two-step tasks to be performed via crowdsourcing for creating a monotonicity test set; (i) a hypothesis creation task and (ii) a validation task. The task (i) is to create a hypothesis by making some polarized part of an original sentence more specific. Instead of writing a complete sentence from scratch, workers are asked to rewrite only a relatively short sentence. By restricting workers to rewrite only a polarized part, we can effectively collect monotonicity inference examples. The task (ii) is to annotate an entailment label for the premise-hypothesis pair generated in (i). Figure 1 summarizes the overview of our human-oriented dataset creation. We used the crowdsourcing platform Figure Eight for both tasks.", "As a resource, we use declarative sentences with more than five tokens from the Parallel Meaning Bank (PMB) BIBREF15 . The PMB contains syntactically correct sentences annotated with its syntactic category in Combinatory Categorial Grammar (CCG; BIBREF16 , BIBREF16 ) format, which is suitable for our purpose. To get a whole CCG derivation tree, we parse each sentence by the state-of-the-art CCG parser, depccg BIBREF17 . Then, we add a polarity to every constituent of the CCG tree by the polarity computation system ccg2mono BIBREF18 and make the polarized part a blank field.", "We ran a trial rephrasing task on 500 examples and detected 17 expressions that were too general and thus difficult to rephrase them in a natural way (e.g., every one, no time). We removed examples involving such expressions. To collect more downward inference examples, we select examples involving determiners in Table 1 and downward operators in Table 2 . As a result, we selected 1,485 examples involving expressions having arguments with upward monotonicity and 1,982 examples involving expressions having arguments with downward monotonicity.", "We present crowdworkers with a sentence whose polarized part is underlined, and ask them to replace the underlined part with more specific phrases in three different ways. In the instructions, we showed examples rephrased in various ways: by adding modifiers, by adding conjunction phrases, and by replacing a word with its hyponyms.", "Workers were paid US$0.05 for each set of substitutions, and each set was assigned to three workers. To remove low-quality examples, we set the minimum time it should take to complete each set to 200 seconds. The entry in our task was restricted to workers from native speaking English countries. 
128 workers contributed to the task, and we created 15,339 hypotheses (7,179 upward examples and 8,160 downward examples).", "The gold label of each premise-hypothesis pair created in the previous task is automatically determined by monotonicity calculus. That is, a downward inference pair is labeled as entailment, while an upward inference pair is labeled as non-entailment.", "However, workers sometimes provided some ungrammatical or unnatural sentences such as the case where a rephrased phrase does not satisfy the selectional restrictions (e.g., original: Tom doesn't live in Boston, rephrased: Tom doesn't live in yes), making it difficult to judge their entailment relations. Thus, we performed an annotation task to ensure accurate labeling of gold labels. We asked workers about the entailment relation of each premise-hypothesis pair as well as how natural it is.", "Worker comprehension of an entailment relation directly affects the quality of inference problems. To avoid worker misunderstandings, we showed workers the following definitions of labels and five examples for each label:", "entailment: the case where the hypothesis is true under any situation that the premise describes.", "non-entailment: the case where the hypothesis is not always true under a situation that the premise describes.", "unnatural: the case where either the premise and/or the hypothesis is ungrammatical or does not make sense.", "Workers were paid US$0.04 for each question, and each question was assigned to three workers. To collect high-quality annotation results, we imposed ten test questions on each worker, and removed workers who gave more than three wrong answers. We also set the minimum time it should take to complete each question to 200 seconds. 1,237 workers contributed to this task, and we annotated gold labels of 15,339 premise-hypothesis pairs.", "Table 3 shows the numbers of cases where answers matched gold labels automatically determined by monotonicity calculus. This table shows that there exist inference pairs whose labels are difficult even for humans to determine; there are 3,354 premise-hypothesis pairs whose gold labels as annotated by polarity computations match with those answered by all workers. We selected these naturalistic monotonicity inference pairs for the candidates of the final test set.", "To make the distribution of gold labels symmetric, we checked these pairs to determine if we can swap the premise and the hypothesis, reverse their gold labels, and create another monotonicity inference pair. In some cases, shown below, the gold label cannot be reversed if we swap the premise and the hypothesis.", "In ( UID15 ), child and kid are not hyponyms but synonyms, and the premise $P$ and the hypothesis $H$ are paraphrases.", " $P$ : Tom is no longer a child $H$ : Tom is no longer a kid ", "These cases are not strict downward inference problems, in the sense that a phrase is not replaced by its hyponym/hypernym.", "Consider the example ( UID16 ).", " $P$ : The moon has no atmosphere $H$ : The moon has no atmosphere, and the gravity force is too low ", "The hypothesis $H$ was created by asking workers to make atmosphere in the premise $P$ more specific. However, the additional phrase and the gravity force is too low does not form constituents with atmosphere. Thus, such examples are not strict downward monotone inferences.", "In such cases as (a) and (b), we do not swap the premise and the hypothesis. In the end, we collected 4,068 examples from crowdsourced datasets." 
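The automatic labeling rule used for the crowdsourced pairs above (hypotheses are always made more specific, so the gold label follows directly from the polarity of the rewritten position) can be written down in a few lines. The function name is illustrative; the downward/upward rule and the non-entailment label for non-monotone positions reflect the description in this paper.

```python
# Tiny sketch (not the authors' code) of the gold-label assignment for pairs
# where the polarized constituent of the premise is replaced by a MORE SPECIFIC phrase.
def gold_label(polarity: str) -> str:
    """polarity: 'upward', 'downward', or 'non-monotone' for the rewritten constituent."""
    if polarity == "downward":
        return "entailment"       # a downward position licenses narrowing the phrase
    if polarity == "upward":
        return "non-entailment"   # narrowing in an upward position is not licensed
    return "non-entailment"       # non-monotone positions are labeled non-entailment

print(gold_label("downward"))  # e.g. "Every [person -> young person] bought a ticket" is entailed
```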
], [ "We also collect monotonicity inference problems from previous manually curated datasets and linguistics publications. The motivation is that previous linguistics publications related to monotonicity reasoning are expected to contain well-designed inference problems, which might be challenging problems for NLI models.", "We collected 1,184 examples from 11 linguistics publications BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . Regarding previous manually-curated datasets, we collected 93 examples for monotonicity reasoning from the GLUE diagnostic dataset, and 37 single-premise problems from FraCaS.", "Both the GLUE diagnostic dataset and FraCaS categorize problems by their types of monotonicity reasoning, but we found that each dataset has different classification criteria. Thus, following GLUE, we reclassified problems into three types of monotone reasoning (upward, downward, and non-monotone) by checking if they include (i) the target monotonicity operator in both the premise and the hypothesis and (ii) the phrase replacement in its argument position. In the GLUE diagnostic dataset, there are several problems whose gold labels are contradiction. We regard them as non-entailment in that the premise does not semantically entail the hypothesis." ], [ "We merged the human-oriented dataset created via crowdsourcing and the linguistics-oriented dataset created from linguistics publications to create the current version of the monotonicity entailment dataset (MED). Table 4 shows some examples from the MED dataset. We can see that our dataset contains various phrase replacements (e.g., conjunction, relative clauses, and comparatives). Table 5 reports the statistics of the MED dataset, including 5,382 premise-hypothesis pairs (1,820 upward examples, 3,270 downward examples, and 292 non-monotone examples). Regarding non-monotone problems, gold labels are always non-entailment, whether a hypothesis is more specific or general than its premise, and thus almost all non-monotone problems are labeled as non-entailment. The size of the word vocabulary in the MED dataset is 4,023, and overlap ratios of vocabulary with previous standard NLI datasets is 95% with MultiNLI and 90% with SNLI.", "We assigned a set of annotation tags for linguistic phenomena to each example in the test set. These tags allow us to analyze how well models perform on each linguistic phenomenon related to monotonicity reasoning. 
We defined 6 tags (see Table 4 for examples):", "lexical knowledge (2,073 examples): inference problems that require lexical relations (i.e., hypernyms, hyponyms, or synonyms)", "reverse (240 examples): inference problems where a propositional object is embedded in a downward environment more than once", "conjunction (283 examples): inference problems that include the phrase replacement by adding conjunction (and) to the hypothesis", "disjunction (254 examples): inference problems that include the phrase replacement by adding disjunction (or) to the hypothesis", "conditionals (149 examples): inference problems that include conditionals (e.g., if, when, unless) in the hypothesis ", "negative polarity items (NPIs) (338 examples): inference problems that include NPIs (e.g., any, ever, at all, anything, anyone, anymore, anyhow, anywhere) in the hypothesis" ], [ "To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment.", "Table 6 shows that the accuracies of all models were better on upward inferences, in accordance with the reported results of the GLUE leaderboard. The overall accuracy of each model was low. In particular, all models underperformed the majority baseline on downward inferences, despite some models having rich lexical knowledge from a knowledge base (KIM) or pretraining (BERT). This indicates that downward inferences are difficult to perform even with the expansion of lexical knowledge. In addition, it is interesting to see that if a model performed better on upward inferences, it performed worse on downward inferences. We will investigate these results in detail below." ], [ "To explore whether the performance of models on monotonicity reasoning depends on the training set or the model themselves, we conducted further analysis performed by data augmentation with the automatically generated monotonicity dataset HELP BIBREF11 . HELP contains 36K monotonicity inference examples (7,784 upward examples, 21,192 downward examples, and 1,105 non-monotone examples). The size of the HELP word vocabulary is 15K, and the overlap ratio of vocabulary between HELP and MED is 15.2%.", "We trained BERT on MultiNLI only and on MultiNLI augmented with HELP, and compared their performance. Following BIBREF3 , we also checked the performance of a hypothesis-only model trained with each training set to test whether our test set contains undesired biases.", "Table 7 shows that the performance of BERT with the hypothesis-only training set dropped around 10-40% as compared with the one with the premise-hypothesis training set, even if we use the data augmentation technique. This indicates that the MED test set does not allow models to predict from hypotheses alone. 
Data augmentation by HELP improved the overall accuracy to 71.6%, but there is still room for improvement. In addition, while adding HELP increased the accuracy on downward inferences, it slightly decreased accuracy on upward inferences. The size of downward examples in HELP is much larger than that of upward examples. This might improve accuracy on downward inferences, but might decrease accuracy on upward inferences.", "To investigate the relationship between accuracy on upward inferences and downward inferences, we checked the performance throughout training BERT with only upward and downward inference examples in HELP (Figure 2 (i), (ii)). These two figures show that, as the size of the upward training set increased, BERT performed better on upward inferences but worse on downward inferences, and vice versa.", "Figure 2 (iii) shows performance on a different ratio of upward and downward inference training sets. When downward inference examples constitute more than half of the training set, accuracies on upward and downward inferences were reversed. As the ratio of downward inferences increased, BERT performed much worse on upward inferences. This indicates that a training set in one direction (upward or downward entailing) of monotonicity might be harmful to models when learning the opposite direction of monotonicity.", "Previous work using HELP BIBREF11 reported that the BERT trained with MultiNLI and HELP containing both upward and downward inferences improved accuracy on both directions of monotonicity. MultiNLI rarely comes from downward inferences (see Section \"Discussion\" ), and its size is large enough to be immune to the side-effects of downward inference examples in HELP. This indicates that MultiNLI might act as a buffer against side-effects of the monotonicity-driven data augmentation technique.", "Table 8 shows the evaluation results by genre. This result shows that inference problems collected from linguistics publications are more challenging than crowdsourced inference problems, even if we add HELP to training sets. As shown in Figure 2 , the change in performance on problems from linguistics publications is milder than that on problems from crowdsourcing. This result also indicates the difficulty of problems from linguistics publications. Regarding non-monotone problems collected via crowdsourcing, there are very few non-monotone problems, so accuracy is 100%. Adding non-monotone problems to our test set is left for future work.", "Table 9 shows the evaluation results by type of linguistic phenomenon. While accuracy on problems involving NPIs and conditionals was improved on both upward and downward inferences, accuracy on problems involving conjunction and disjunction was improved on only one direction. In addition, it is interesting to see that the change in accuracy on conjunction was opposite to that on disjunction. Downward inference examples involving disjunction are similar to upward inference ones; that is, inferences from a sentence to a shorter sentence are valid (e.g., Not many campers have had a sunburn or caught a cold $\\Rightarrow $ Not many campers have caught a cold). Thus, these results were also caused by addition of downward inference examples. Also, accuracy on problems annotated with reverse tags was apparently better without HELP because all examples are upward inferences embedded in a downward environment twice.", "Table 9 also shows that accuracy on conditionals was better on upward inferences than that on downward inferences. 
This indicates that BERT might fail to capture the monotonicity property that conditionals create a downward entailing context in their scope while they create an upward entailing context out of their scope.", "Regarding lexical knowledge, the data augmentation technique improved performance much more on downward inferences that do not require lexical knowledge. However, among the 394 problems for which all models provided wrong answers, 244 problems are non-lexical inference problems. This indicates that some non-lexical inference problems are more difficult than lexical inference problems, though accuracy on non-lexical inference problems was better than that on lexical inference problems." ], [ "One of our findings is that there is a type of downward inference for which every model fails to provide correct answers. One such example is concerned with the contrast between few and a few. Among the 394 problems for which all models provided wrong answers, 148 were downward inference problems involving the downward monotonicity operator few, as in the following example:", " $P$ : Few of the books had typical or marginal readers $H$ : Few of the books had some typical readers We transformed these downward inference problems to upward inference problems in two ways: (i) by replacing the downward operator few with the upward operator a few, and (ii) by removing the downward operator few. We tested BERT using these transformed test sets. The results showed that BERT predicted the same answers for the transformed test sets. This suggests that BERT does not understand the difference between the downward operator few and the upward operator a few.", "The results of the crowdsourcing tasks in Section 3.1.3 showed that some downward inferences can naturally be performed in human reasoning. However, we also found that the MultiNLI training set BIBREF10 , which is one of the datasets created from naturally occurring texts, contains only 77 downward inference problems, including the following one.", " $P$ : No racin' on the Range $H$ : No horse racing is allowed on the Range ", "One possible reason why there are few downward inferences is that certain pragmatic factors can prevent people from drawing a downward inference. For instance, in the case of the inference problem in ( \"Discussion\" ), unless the added disjunct in $H$ , i.e., a small cat with green eyes, is salient in the context, it would be difficult to draw the conclusion $H$ from the premise $P$ .", " $P$ : I saw a dog $H$ : I saw a dog or a small cat with green eyes ", "Such pragmatic factors would be one of the reasons why it is difficult to obtain downward inferences in naturally occurring texts." ], [ "We introduced a large monotonicity entailment dataset, called MED. To illustrate the usefulness of MED, we tested state-of-the-art NLI models, and found that all of them performed substantially worse on the new test set. In addition, the accuracy on downward inferences was inversely proportional to that on upward inferences.", "An experiment with the data augmentation technique showed that accuracy on upward and downward inferences depends on the proportion of upward and downward inferences in the training set. This indicates that current neural models might have limitations on their generalization ability in monotonicity reasoning. We hope that the MED will be valuable for future research on more advanced models that are capable of monotonicity reasoning in a proper way." 
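The probing transformations mentioned in the Discussion above (replacing the downward operator few with the upward operator a few, or deleting it) are straightforward to reproduce on the text side. The token-level sketch below is illustrative only: it assumes whitespace-tokenised test items and ignores re-capitalisation.

```python
# Sketch of the two probe transformations applied to "few" test items:
#   (i)  few -> a few   (downward operator replaced by an upward one)
#   (ii) few -> deleted (a following "of" is also dropped for readability)
def to_a_few(tokens):
    out = []
    for i, tok in enumerate(tokens):
        if tok.lower() == "few" and (i == 0 or tokens[i - 1].lower() != "a"):
            out.extend(["a", "few"])
        else:
            out.append(tok)
    return out


def drop_few(tokens):
    out, skip_of = [], False
    for tok in tokens:
        if skip_of and tok.lower() == "of":
            skip_of = False
            continue
        skip_of = False
        if tok.lower() == "few":
            if out and out[-1].lower() == "a":
                out.pop()           # also drop a preceding "a" from "a few"
            skip_of = True
            continue
        out.append(tok)
    return out


print(" ".join(to_a_few("Few of the books had typical readers".split())))
# -> a few of the books had typical readers
print(" ".join(drop_few("Few of the books had typical readers".split())))
# -> the books had typical readers
```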
], [ "This work was partially supported by JST AIP-PRISM Grant Number JPMJCR18Y1, Japan, and JSPS KAKENHI Grant Number JP18H03284, Japan. We thank our three anonymous reviewers for helpful suggestions. We are also grateful to Koki Washio, Masashi Yoshikawa, and Thomas McLachlan for helpful discussion." ] ] }
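As a companion to the evaluation protocol described in the Baselines section above — three-way predictions collapsed to two classes, accuracy reported per monotonicity direction — the following sketch shows one possible implementation. It assumes each test record is a small dict carrying the gold label, the model prediction, and a direction tag; none of this is tied to the authors' code.

```python
from collections import defaultdict


def normalise(label: str) -> str:
    # Three-way predictions are collapsed to two classes:
    # "contradiction" is counted as non-entailment.
    return "non-entailment" if label == "contradiction" else label


def accuracy_by_direction(records):
    """records: iterable of dicts with 'gold', 'pred' and 'direction' keys,
    where 'direction' is 'upward', 'downward' or 'non-monotone'."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        for key in (r["direction"], "all"):
            total[key] += 1
            correct[key] += int(normalise(r["pred"]) == r["gold"])
    return {key: correct[key] / total[key] for key in total}


# Example: downward accuracy 1/1, upward accuracy 1/2, overall 2/3.
records = [
    {"gold": "entailment", "pred": "entailment", "direction": "downward"},
    {"gold": "non-entailment", "pred": "contradiction", "direction": "upward"},
    {"gold": "non-entailment", "pred": "entailment", "direction": "upward"},
]
print(accuracy_by_direction(records))
```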
{ "question": [ "Do they release MED?", "What NLI models do they analyze?", "How do they define upward and downward reasoning?", "What is monotonicity reasoning?" ], "question_id": [ "c0a11ba0f6bbb4c69b5a0d4ae9d18e86a4a8f354", "dfc393ba10ec4af5a17e5957fcbafdffdb1a6443", "311a7fa62721e82265f4e0689b4adc05f6b74215", "82bcacad668351c0f81bd841becb2dbf115f000e" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section \"Dataset\" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning." ], "highlighted_evidence": [ "To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section \"Dataset\" )." ] } ], "annotation_id": [ "9ae76059d33b24d99445adb910a6ebc0ebc8a559" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BiMPM", "ESIM", "Decomposable Attention Model", "KIM", "BERT" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment." ], "highlighted_evidence": [ "To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI." 
] } ], "annotation_id": [ "faa8cc896618919e0565306b4eaf03e0dc18eaa0" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Upward reasoning is defined as going from one specific concept to a more general one. Downward reasoning is defined as the opposite, going from a general concept to one that is more specific.", "evidence": [ "A context is upward entailing (shown by [... $\\leavevmode {\\color {red!80!black}\\uparrow }$ ]) that allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where French dinner is replaced by a more general concept dinner. On the other hand, a downward entailing context (shown by [... $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]) allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where workers is replaced by a more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in ( \"Introduction\" )), as witness the fact that ( \"Introduction\" ) entails ( \"Introduction\" ). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.", "All [ workers $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [joined for a French dinner $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] All workers joined for a dinner All new workers joined for a French dinner Not all [new workers $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] joined for a dinner Not all workers joined for a dinner" ], "highlighted_evidence": [ "A context is upward entailing (shown by [... $\\leavevmode {\\color {red!80!black}\\uparrow }$ ]) that allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where French dinner is replaced by a more general concept dinner. ", "On the other hand, a downward entailing context (shown by [... $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]) allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where workers is replaced by a more specific concept new workers.", "All [ workers $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [joined for a French dinner $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] All workers joined for a dinner All new workers joined for a French dinner Not all [new workers $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] joined for a dinner Not all workers joined for a dinner" ] } ], "annotation_id": [ "10cc4f11be85ffb0eaabd7017d5df80c4c9b309f" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider examples in ( \"Introduction\" ) and ( \"Introduction\" )." ], "highlighted_evidence": [ "Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures." 
] } ], "annotation_id": [ "0558e97a25b01a79de670fda145e072bdecc0aed" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
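Relatedly, the ratio experiment behind Figure 2 (iii) of the MED paper above amounts to sampling fixed-size training sets with a varying share of downward-inference examples from HELP. A possible sketch, assuming the upward and downward HELP examples are available as plain Python lists:

```python
import random


def mixed_training_set(upward, downward, total_size=5000, downward_ratio=0.5, seed=0):
    """Sample `total_size` training examples with the requested share of
    downward-inference examples (both pools must contain enough examples)."""
    rng = random.Random(seed)
    n_down = round(total_size * downward_ratio)
    n_up = total_size - n_down
    batch = rng.sample(downward, n_down) + rng.sample(upward, n_up)
    rng.shuffle(batch)
    return batch


# e.g. sweep the ratio as in Figure 2 (iii), with hypothetical example lists:
# for ratio in (0.0, 0.25, 0.5, 0.75, 1.0):
#     train = mixed_training_set(up_examples, down_examples, downward_ratio=ratio)
```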
{ "caption": [ "Table 1: Determiners and their polarities.", "Table 2: Examples of downward operators.", "Figure 1: Overview of our human-oriented dataset creation. E: entailment, NE: non-entailment.", "Table 3: Numbers of cases where answers matched automatically determined gold labels.", "Table 4: Examples in the MED dataset. Crowd: problems collected through crowdsourcing, Paper: problems collected from linguistics publications, up: upward monotone, down: downward monotone, non: non-monotone, cond: condisionals, rev: reverse, conj: conjunction, disj: disjunction, lex: lexical knowledge, E: entailment, NE: non-entailment.", "Table 5: Statistics for the MED dataset.", "Table 6: Accuracies (%) for different models and training datasets.", "Table 7: Evaluation results on types of monotonicity reasoning. –Hyp: Hypothesis-only model.", "Figure 2: Accuracy throughout training BERT (i) with only upward examples and (ii) with only downward examples. We checked the accuracy at sizes [50, 100, 200, 500, 1000, 2000, 5000] for each direction. (iii) Performance on different ratios of upward/downward training sets. The total size of the training sets was 5,000 examples.", "Table 8: Evaluation results by genre. Paper: problems collected from linguistics publications, Crowd: problems via crowdsourcing.", "Table 9: Evaluation results by linguistic phenomenon type. (non-)Lexical: problems that (do not) require lexical relations. Numbers in parentheses are numbers of problems." ], "file": [ "2-Table1-1.png", "2-Table2-1.png", "3-Figure1-1.png", "4-Table3-1.png", "5-Table4-1.png", "5-Table5-1.png", "6-Table6-1.png", "6-Table7-1.png", "7-Figure2-1.png", "7-Table8-1.png", "8-Table9-1.png" ] }
1912.00819
Enriching Existing Conversational Emotion Datasets with Dialogue Acts using Neural Annotators.
The recognition of emotion and dialogue acts enrich conversational analysis and help to build natural dialogue systems. Emotion makes us understand feelings and dialogue acts reflect the intentions and performative functions in the utterances. However, most of the textual and multi-modal conversational emotion datasets contain only emotion labels but not dialogue acts. To address this problem, we propose to use a pool of various recurrent neural models trained on a dialogue act corpus, with or without context. These neural models annotate the emotion corpus with dialogue act labels and an ensemble annotator extracts the final dialogue act label. We annotated two popular multi-modal emotion datasets: IEMOCAP and MELD. We analysed the co-occurrence of emotion and dialogue act labels and discovered specific relations. For example, Accept/Agree dialogue acts often occur with the Joy emotion, Apology with Sadness, and Thanking with Joy. We make the Emotional Dialogue Act (EDA) corpus publicly available to the research community for further study and analysis.
{ "section_name": [ "Introduction", "Annotation of Emotional Dialogue Acts ::: Data for Conversational Emotion Analysis", "Annotation of Emotional Dialogue Acts ::: Dialogue Act Tagset and SwDA Corpus", "Annotation of Emotional Dialogue Acts ::: Neural Model Annotators", "Annotation of Emotional Dialogue Acts ::: Ensemble of Neural Annotators", "Annotation of Emotional Dialogue Acts ::: Reliability of Neural Annotators", "EDAs Analysis", "Conclusion and Future Work", "Acknowledgements" ], "paragraphs": [ [ "With the growing demand for human-computer/robot interaction systems, detecting the emotional state of the user can heavily benefit a conversational agent to respond at an appropriate emotional level. Emotion recognition in conversations has proven important for potential applications such as response recommendation or generation, emotion-based text-to-speech, personalisation, etc. Human emotional states can be expressed verbally and non-verbally BIBREF0, BIBREF1, however, while building an interactive dialogue system, the interface needs dialogue acts. A typical dialogue system consists of a language understanding module which requires to determine the meaning of and intention in the human input utterances BIBREF2, BIBREF3. Also, in discourse or conversational analysis, dialogue acts are the main linguistic features to consider BIBREF4. A dialogue act provides an intention and performative function in an utterance of the dialogue. For example, it can infer a user's intention by distinguishing Question, Answer, Request, Agree/Reject, etc. and performative functions such as Acknowledgement, Conversational-opening or -closing, Thanking, etc. The dialogue act information together with emotional states can be very useful for a spoken dialogue system to produce natural interaction BIBREF5.", "The research in emotion recognition is growing very rapidly and many datasets are available, such as text-based, speech- or vision-level, and multimodal emotion data. Emotion expression recognition is a challenging task and hence multimodality is crucial BIBREF0. However, few conversational multi-modal emotion recognition datasets are available, for example, IEMOCAP BIBREF6, SEMAINE BIBREF7, MELD BIBREF8. They are multi-modal dyadic conversational datasets containing audio-visual and conversational transcripts. Every utterance in these datasets is labeled with an emotion label.", "In this work, we apply an automated neural ensemble annotation process for dialogue act labeling. Several neural models are trained with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10 and used for inferring dialogue acts on the emotion datasets. We ensemble five model output labels by checking majority occurrences (most of the model labels are the same) and ranking confidence values of the models. We have annotated two potential multi-modal conversation datasets for emotion recognition: IEMOCAP (Interactive Emotional dyadic MOtion CAPture database) BIBREF6 and MELD (Multimodal EmotionLines Dataset) BIBREF8. Figure FIGREF2, shows an example of dialogue acts with emotion and sentiment labels from the MELD dataset. We confirmed the reliability of annotations with inter-annotator metrics. We analysed the co-occurrences of the dialogue act and emotion labels and discovered a key relationship between them; certain dialogue acts of the utterances show significant and useful association with respective emotional states. 
For example, Accept/Agree dialogue act often occurs with the Joy emotion while Reject with Anger, Acknowledgements with Surprise, Thanking with Joy, and Apology with Sadness, etc. The detailed analysis of the emotional dialogue acts (EDAs) and annotated datasets are being made available at the SECURE EU Project website." ], [ "There are two emotion taxonomies: (1) discrete emotion categories (DEC) and (2) fine-grained dimensional basis of emotion states (DBE). The DECs are Joy, Sadness, Fear, Surprise, Disgust, Anger and Neutral, as identified by Ekman et al. ekman1987universalemos. The DBE of the emotion is usually elicited from two or three dimensions BIBREF1, BIBREF11, BIBREF12. A two-dimensional model is commonly used with Valence and Arousal (also called activation), and in the three-dimensional model, the third dimension is Dominance. IEMOCAP is annotated with all DECs and two additional emotion classes, Frustration and Excited. IEMOCAP is also annotated with the three DBE dimensions, that is, Valence, Arousal and Dominance BIBREF6. MELD BIBREF8, which is an evolved version of the Emotionlines dataset developed by BIBREF13, is annotated with exactly 7 DECs and sentiments (positive, negative and neutral)." ], [ "There have been many taxonomies for dialogue acts: speech acts BIBREF14 refer to the utterance, not only to present information but to the action that is performed. Speech acts were later modified into five classes (Assertive, Directive, Commissive, Expressive, Declarative) BIBREF15. There are many such standard taxonomies and schemes to annotate conversational data, and most of them follow the discourse compositionality. These schemes have proven their importance for discourse or conversational analysis BIBREF16. During the increased development of dialogue systems and discourse analysis, the standard taxonomy was introduced in recent decades, called Dialogue Act Markup in Several Layers (DAMSL) tag set. According to DAMSL, each DA has a forward-looking function (such as Statement, Info-request, Thanking) and a backwards-looking function (such as Accept, Reject, Answer) BIBREF17.", "The DAMSL annotation includes not only utterance-level but also segmented-utterance labelling. However, in the emotion datasets, the utterances are not segmented; as we can see in Figure FIGREF2, the first and fourth utterances are not segmented into two separate parts. The fourth utterance, for example, could be segmented to have two dialogue act labels: a statement (sd) and a question (qy). Such segmentation provides very fine-grained DA classes and follows the concept of discourse compositionality. DAMSL distinguishes wh-question (qw), yes-no question (qy), open-ended (qo), and or-question (qr) classes, not just because these questions are syntactically distinct, but also because they have different forward functions BIBREF18. For example, a yes-no question is more likely to get a “yes\" answer than a wh-question (qw). This also gives an intuition that the answers follow the syntactic formulation of the question, providing context. For example, qy is used for a question that, from a discourse perspective, expects a Yes (ny) or No (nn) answer.", "We have investigated the annotation method and trained our neural models with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10. The SwDA Corpus is annotated with the DAMSL tag set and has been used for reporting and benchmarking state-of-the-art results in dialogue act recognition tasks BIBREF19, BIBREF20, BIBREF21, which makes it ideal for our use case. 
The Switchboard DAMSL Coders Manual can be followed for knowing more about the dialogue act labels." ], [ "We adopted the neural architectures based on Bothe et al. bothe2018discourse where two variants are: non-context model (classifying at utterance level) and context model (recognizing the dialogue act of the current utterance given a few preceding utterances). From conversational analysis using dialogue acts in Bothe et al. bothe2018interspeech, we learned that the preceding two utterances contribute significantly to recognizing the dialogue act of the current utterance. Hence, we adapt this setting for the context model and create a pool of annotators using recurrent neural networks (RNNs). RNNs can model the contextual information in the sequence of words of an utterance and in the sequence of utterances of a dialogue. Each word in an utterance is represented with a word embedding vector of dimension 1024. We use the word embedding vectors from pre-trained ELMo (Embeddings from Language Models) embeddings BIBREF22. We have a pool of five neural annotators as shown in Figure FIGREF6. Our online tool called Discourse-Wizard is available to practice automated dialogue act labeling. In this tool we use the same neural architectures but model-trained embeddings (while, in this work we use pre-trained ELMo embeddings, as they are better performant but computationally and size-wise expensive to be hosted in the online tool). The annotators are:", "Utt-level 1 Dialogue Act Neural Annotator (DANA) is an utterance-level classifier that uses word embeddings ($w$) as an input to an RNN layer, attention mechanism and computes the probability of dialogue acts ($da$) using the softmax function (see in Figure FIGREF10, dotted line utt-l1). This model achieved 75.13% accuracy on the SwDA corpus test set.", "Context 1 DANA is a context model that uses 2 preceding utterances while recognizing the dialogue act of the current utterance (see context model with con1 line in Figure FIGREF10). It uses a hierarchical RNN with the first RNN layer to encode the utterance from word embeddings ($w$) and the second RNN layer is provided with three utterances ($u$) (current and two preceding) composed from the first layer followed by the attention mechanism ($a$), where $\\sum _{n=0}^{n} a_{t-n} = 1$. Finally, the softmax function is used to compute the probability distribution. This model achieved 77.55% accuracy on the SwDA corpus test set.", "Utt-level 2 DANA is another utterance-level classifier which takes an average of the word embeddings in the input utterance and uses a feedforward neural network hidden layer (see utt-l2 line in Figure FIGREF10, where $mean$ passed to $softmax$ directly). Similar to the previous model, it computes the probability of dialogue acts using the softmax function. This model achieved 72.59% accuracy on the test set of the SwDA corpus.", "Context 2 DANA is another context model that uses three utterances similar to the Context 1 DANA model, but the utterances are composed as the mean of the word embeddings over each utterance, similar to the Utt-level 2 model ($mean$ passed to context model in Figure FIGREF10 with con2 line). Hence, the Context 2 DANA model is composed of one RNN layer with three input vectors, finally topped with the softmax function for computing the probability distribution of the dialogue acts. 
This model achieved 75.97% accuracy on the test set of the SwDA corpus.", "Context 3 DANA is a context model that uses three utterances, similar to the previous models, but the utterance representations combine the features of both the Context 1 and Context 2 models (con1 and con2 together in Figure FIGREF10). Hence, the Context 3 DANA model combines features of almost all the previous four models to recognize the dialogue acts. This model achieved 75.91% accuracy on the SwDA corpus test set." ], [ "First preference is given to the labels that are perfectly matching in all the neural annotators. In Table TABREF11, we can see that both datasets have about 40% of exactly matching labels over all models (AM). Then priority is given to the context-based models to check if the label in all context models is matching perfectly. In case two out of three context models are correct, then it is being checked if that label is also produced by at least one of the non-context models. Then, we allow labels to rely on these at least two context models. As a result, about 47% of the labels are taken based on the context models (CM). When we see that none of the context models is producing the same results, then we rank the labels with their respective confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order according to confidence values. Then we check if the first three (case when one context model and both non-context models produce the same label) or at least two labels are matching, then we allow to pick that one. There are about 3% in IEMOCAP and 5% in MELD (BM).", "Finally, when none of the above conditions are fulfilled, we leave out the label with an unknown category. This unknown category of the dialogue act is labeled with `xx' in the final annotations, and they are about 7% in IEMOCAP and 11% in MELD (NM). The statistics of the EDAs are reported in Table TABREF13 for both datasets. Total utterances in MELD include training, validation and test datasets." ], [ "The pool of neural annotators provides a fair range of annotations, and we checked the reliability with the following metrics BIBREF23. Krippendorff's Alpha ($\\alpha $) is a reliability coefficient developed to measure the agreement among observers, annotators, and raters, and is often used in emotion annotation BIBREF24. We apply it to the five neural annotators at the nominal level of measurement of dialogue act categories. $\\alpha $ is computed as follows:", "$\\alpha = 1 - \\frac{D_{o}}{D_{e}}$ , where $D_{o}$ is the observed disagreement and $D_{e}$ is the disagreement that is expected by chance. $\\alpha =1$ means all annotators produce the same label, while $\\alpha =0$ would mean that the observed agreement is no better than chance. As we can see in Table TABREF20, both datasets, IEMOCAP and MELD, produce significant inter-neural annotator agreement, 0.553 and 0.494, respectively.", "A very popular inter-annotator metric is Fleiss' Kappa score, also reported in Table TABREF20, which determines consistency in the ratings. The kappa $k$ can be defined as,", "$k = \\frac{\\bar{P} -\\bar{P}_e}{1 -\\bar{P}_e}$ , where the denominator $1 -\\bar{P}_e$ gives the degree of agreement that is attainable above chance, and the numerator $\\bar{P} -\\bar{P}_e$ gives the degree of agreement actually achieved above chance. Hence, $k = 1$ if the raters agree completely, and $k = 0$ when the raters agree no more than would be expected by chance. We got 0.556 and 0.502 for IEMOCAP and MELD respectively with our five neural annotators. 
This indicated that the annotators are labeling the dialogue acts reliably and consistently. We also report the Spearman's correlation between context-based models (Context1 and Context2), and it shows a strong correlation between them (Table TABREF20). While using the labels we checked the absolute match between all context-based models and hence their strong correlation indicates their robustness." ], [ "We can see emotional dialogue act co-occurrences with respect to emotion labels in Figure FIGREF12 for both datasets. There are sets of three bars per dialogue act in the figure, the first and second bar represent emotion labels of IEMOCAP (IE) and MELD (ME), and the third bar is for MELD sentiment (MS) labels. MELD emotion and sentiment statistics are interesting as they are strongly correlated to each other. The bars contain the normalized number of utterances for emotion labels with respect to the total number of utterances for that particular dialogue act category. The statements without-opinion (sd) and with-opinion (sv) contain utterances with almost all emotions. Many neutral utterances are spanning over all the dialogue acts.", "Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration' (in case of IEMOCAP), however, some utterances with `Joy' or `Sadness' as well (see examples in Table TABREF21). Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration' although many with `Happy' emotion in case of the MELD dataset. Acknowledgements (b) are mostly with positive or neutral, however, Appreciation (ba) and Rhetorical (bh) backchannels often occur with a greater number in `Surprise', `Joy' and/or with `Excited' (in case of IEMOCAP). Questions (qh, qw, qy and qy⌃d) are mostly asked with emotions `Surprise', `Excited', `Frustration' or `Disgust' (in case of MELD) and many are neutral. No-answers (nn) are mostly `Sad' or `Frustrated' as compared to yes-answers (ny). Forward-functions such as Apology (fa) are mostly with `Sadness' whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) are usually with `Joy' or `Excited'.", "We also noticed that both datasets exhibit a similar relation between dialogue act and emotion. It is important to notice that the dialogue act annotation is based on the given transcripts, however, the emotional expressions are better perceived with audio or video BIBREF6. We report some examples where we mark the utterances with an determined label (xx) in the last row of Table TABREF21. They are skipped from the final annotation because of not fulfilling the conditions explained in Section SECREF14 It is also interesting to see the previous utterance dialogue acts (P-DA) of those skipped utterances, and the sequence of the labels can be followed from Figure FIGREF6 (utt-l1, utt-l2, con1, con2, con3).", "In the first example, the previous utterance was b, and three DANA models produced labels of the current utterance as b, but it is skipped because the confidence values were not sufficient to bring it as a final label. The second utterance can be challenging even for humans to perceive with any of the dialogue acts. However, the third and fourth utterances are followed by a yes-no question (qy), and hence, we can see in the third example, that context models tried their best to at least perceive it as an answer (ng, ny, nn). The last utterance, “I'm so sorry!\", has been completely disagreed by all the five annotators. 
Similar apology phrases are mostly found with the `Sadness' emotion label, and the correct dialogue act is Apology (fa). However, they are placed either in the sd or the ba dialogue act category. We believe that with a human annotator's help, those utterance labels can be corrected with very limited effort." ], [ "In this work, we presented a method to extend conversational multi-modal emotion datasets with dialogue act labels. We successfully show this on two well-established emotion datasets: IEMOCAP and MELD, which we labeled with dialogue acts and made publicly available for further study and research. As a first insight, we found that many of the dialogue acts and emotion labels follow certain relations. These relations can be useful for learning about emotional behaviour together with dialogue acts, both for building natural dialogue systems and for deeper conversational analysis. A conversational agent might benefit from considering both emotional states and dialogue acts in the utterances when generating an appropriate response.", "In future work, we foresee a human in the loop for the annotation process along with a pool of automated neural annotators. Robust annotations can be achieved with very little human effort and supervision, for example, by observing and correcting the final labels produced by the ensemble of neural annotators. The human annotator might also help to achieve segmented-utterance labelling of the dialogue acts. We also plan to use these datasets for conversational analysis to infer interactive behaviours of the emotional states with respect to dialogue acts. In our recent work, where we used dialogue acts to build a dialogue system for a social robot, we found this study and dataset very helpful. For example, we can extend our robotic conversational system to consider emotion as an added linguistic feature to produce natural interaction." ], [ "We would like to acknowledge funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska Curie grant agreement No 642667 (SECURE)." ] ] }
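The ensemble rule described in the dialogue-act paper above can be summarised in a single decision function. The sketch below is our reading of that rule rather than the authors' code: the confidence-ranking fallback (BM) is simplified, and each model's confidence is assumed to be the softmax probability of its predicted label.

```python
from collections import Counter

UNKNOWN = "xx"


def ensemble_label(utt_preds, ctx_preds):
    """Combine dialogue-act predictions from the five annotators.

    utt_preds: [(label, confidence)] from the two utterance-level models
    ctx_preds: [(label, confidence)] from the three context models
    Returns a single dialogue-act label, or 'xx' if no rule applies.
    """
    all_preds = utt_preds + ctx_preds
    labels = [lab for lab, _ in all_preds]

    # (AM) all five annotators agree
    if len(set(labels)) == 1:
        return labels[0]

    # (CM) all three context models agree, or two of them agree and at least
    # one utterance-level model produces the same label
    top_ctx, n_ctx = Counter(lab for lab, _ in ctx_preds).most_common(1)[0]
    if n_ctx == 3:
        return top_ctx
    if n_ctx == 2 and any(lab == top_ctx for lab, _ in utt_preds):
        return top_ctx

    # (BM) fall back to confidence ranking: among the most confident
    # predictions, accept a label supported by at least two models
    ranked = sorted(all_preds, key=lambda p: p[1], reverse=True)
    label, count = Counter(lab for lab, _ in ranked[:3]).most_common(1)[0]
    if count >= 2:
        return label

    # (NM) no agreement: mark as unknown
    return UNKNOWN


# Example: two context models say 'sd' and one utterance-level model agrees.
print(ensemble_label([("sd", 0.61), ("b", 0.40)],
                     [("sd", 0.72), ("sd", 0.69), ("sv", 0.55)]))   # sd
```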
{ "question": [ "What other relations were found in the datasets?", "How does the ensemble annotator extract the final label?", "How were dialogue act labels defined?", "How many models were used?" ], "question_id": [ "5937ebbf04f62d41b48cbc6b5c38fc309e5c2328", "dcd6f18922ac5c00c22cef33c53ff5ae08b42298", "2965c86467d12b79abc16e1457d848cb6ca88973", "b99948ac4810a7fe3477f6591b8cf211d6398e67" ], "nlp_background": [ "two", "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration'", "Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration' although many with `Happy' emotion in case of the MELD dataset", "Acknowledgements (b) are mostly with positive or neutral", "Appreciation (ba) and Rhetorical (bh) backchannels often occur with a greater number in `Surprise', `Joy' and/or with `Excited' (in case of IEMOCAP)", "Questions (qh, qw, qy and qy⌃d) are mostly asked with emotions `Surprise', `Excited', `Frustration' or `Disgust' (in case of MELD) and many are neutral", "No-answers (nn) are mostly `Sad' or `Frustrated' as compared to yes-answers (ny).", "Forward-functions such as Apology (fa) are mostly with `Sadness' whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) are usually with `Joy' or `Excited'" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We can see emotional dialogue act co-occurrences with respect to emotion labels in Figure FIGREF12 for both datasets. There are sets of three bars per dialogue act in the figure, the first and second bar represent emotion labels of IEMOCAP (IE) and MELD (ME), and the third bar is for MELD sentiment (MS) labels. MELD emotion and sentiment statistics are interesting as they are strongly correlated to each other. The bars contain the normalized number of utterances for emotion labels with respect to the total number of utterances for that particular dialogue act category. The statements without-opinion (sd) and with-opinion (sv) contain utterances with almost all emotions. Many neutral utterances are spanning over all the dialogue acts.", "Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration' (in case of IEMOCAP), however, some utterances with `Joy' or `Sadness' as well (see examples in Table TABREF21). Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration' although many with `Happy' emotion in case of the MELD dataset. Acknowledgements (b) are mostly with positive or neutral, however, Appreciation (ba) and Rhetorical (bh) backchannels often occur with a greater number in `Surprise', `Joy' and/or with `Excited' (in case of IEMOCAP). Questions (qh, qw, qy and qy⌃d) are mostly asked with emotions `Surprise', `Excited', `Frustration' or `Disgust' (in case of MELD) and many are neutral. No-answers (nn) are mostly `Sad' or `Frustrated' as compared to yes-answers (ny). 
Forward-functions such as Apology (fa) are mostly with `Sadness' whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) are usually with `Joy' or `Excited'.", "FLOAT SELECTED: Figure 4: EDAs: Visualizing co-occurrence of utterances with respect to emotion states in the particular dialogue acts (only major and significant are shown here). IE: IEMOCAP, ME: MELD Emotion and MS: MELD Sentiment." ], "highlighted_evidence": [ "The statements without-opinion (sd) and with-opinion (sv) contain utterances with almost all emotions. Many neutral utterances are spanning over all the dialogue acts.\n\nQuotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration' (in case of IEMOCAP), however, some utterances with `Joy' or `Sadness' as well (see examples in Table TABREF21). Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration' although many with `Happy' emotion in case of the MELD dataset. Acknowledgements (b) are mostly with positive or neutral, however, Appreciation (ba) and Rhetorical (bh) backchannels often occur with a greater number in `Surprise', `Joy' and/or with `Excited' (in case of IEMOCAP). Questions (qh, qw, qy and qy⌃d) are mostly asked with emotions `Surprise', `Excited', `Frustration' or `Disgust' (in case of MELD) and many are neutral. No-answers (nn) are mostly `Sad' or `Frustrated' as compared to yes-answers (ny). Forward-functions such as Apology (fa) are mostly with `Sadness' whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) are usually with `Joy' or `Excited'.\n\n", "FLOAT SELECTED: Figure 4: EDAs: Visualizing co-occurrence of utterances with respect to emotion states in the particular dialogue acts (only major and significant are shown here). IE: IEMOCAP, ME: MELD Emotion and MS: MELD Sentiment." ] } ], "annotation_id": [ "706d31c3b62c8a0164277513b424f6bb322e2f69" ], "worker_id": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "First preference is given to the labels that are perfectly matching in all the neural annotators.", "In case two out of three context models are correct, then it is being checked if that label is also produced by at least one of the non-context models.", "When we see that none of the context models is producing the same results, then we rank the labels with their respective confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order according to confidence values. Then we check if the first three (case when one context model and both non-context models produce the same label) or at least two labels are matching, then we allow to pick that one. ", "Finally, when none the above conditions are fulfilled, we leave out the label with an unknown category." ], "yes_no": null, "free_form_answer": "", "evidence": [ "First preference is given to the labels that are perfectly matching in all the neural annotators. In Table TABREF11, we can see that both datasets have about 40% of exactly matching labels over all models (AM). Then priority is given to the context-based models to check if the label in all context models is matching perfectly. In case two out of three context models are correct, then it is being checked if that label is also produced by at least one of the non-context models. Then, we allow labels to rely on these at least two context models. 
As a result, about 47% of the labels are taken based on the context models (CM). When we see that none of the context models is producing the same results, then we rank the labels with their respective confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order according to confidence values. Then we check if the first three (case when one context model and both non-context models produce the same label) or at least two labels are matching, then we allow to pick that one. There are about 3% in IEMOCAP and 5% in MELD (BM).", "Finally, when none the above conditions are fulfilled, we leave out the label with an unknown category. This unknown category of the dialogue act is labeled with `xx' in the final annotations, and they are about 7% in IEMOCAP and 11% in MELD (NM). The statistics of the EDAs is reported in Table TABREF13 for both datasets. Total utterances in MELD includes training, validation and test datasets." ], "highlighted_evidence": [ "First preference is given to the labels that are perfectly matching in all the neural annotators. In Table TABREF11, we can see that both datasets have about 40% of exactly matching labels over all models (AM). Then priority is given to the context-based models to check if the label in all context models is matching perfectly. In case two out of three context models are correct, then it is being checked if that label is also produced by at least one of the non-context models. Then, we allow labels to rely on these at least two context models. As a result, about 47% of the labels are taken based on the context models (CM). When we see that none of the context models is producing the same results, then we rank the labels with their respective confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order according to confidence values. Then we check if the first three (case when one context model and both non-context models produce the same label) or at least two labels are matching, then we allow to pick that one. There are about 3% in IEMOCAP and 5% in MELD (BM).\n\nFinally, when none the above conditions are fulfilled, we leave out the label with an unknown category. This unknown category of the dialogue act is labeled with `xx' in the final annotations, and they are about 7% in IEMOCAP and 11% in MELD (NM)." ] } ], "annotation_id": [ "33b18d270e3871d77ad11e6c8fd0fbf35e20cdf3" ], "worker_id": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Dialogue Act Markup in Several Layers (DAMSL) tag set" ], "yes_no": null, "free_form_answer": "", "evidence": [ "There have been many taxonomies for dialogue acts: speech acts BIBREF14 refer to the utterance, not only to present information but to the action at is performed. Speech acts were later modified into five classes (Assertive, Directive, Commissive, Expressive, Declarative) BIBREF15. There are many such standard taxonomies and schemes to annotate conversational data, and most of them follow the discourse compositionality. These schemes have proven their importance for discourse or conversational analysis BIBREF16. During the increased development of dialogue systems and discourse analysis, the standard taxonomy was introduced in recent decades, called Dialogue Act Markup in Several Layers (DAMSL) tag set. 
According to DAMSL, each DA has a forward-looking function (such as Statement, Info-request, Thanking) and a backwards-looking function (such as Accept, Reject, Answer) BIBREF17." ], "highlighted_evidence": [ "There have been many taxonomies for dialogue acts: speech acts BIBREF14 refer to the utterance, not only to present information but to the action at is performed. Speech acts were later modified into five classes (Assertive, Directive, Commissive, Expressive, Declarative) BIBREF15. There are many such standard taxonomies and schemes to annotate conversational data, and most of them follow the discourse compositionality. These schemes have proven their importance for discourse or conversational analysis BIBREF16. During the increased development of dialogue systems and discourse analysis, the standard taxonomy was introduced in recent decades, called Dialogue Act Markup in Several Layers (DAMSL) tag set. According to DAMSL, each DA has a forward-looking function (such as Statement, Info-request, Thanking) and a backwards-looking function (such as Accept, Reject, Answer) BIBREF17." ] } ], "annotation_id": [ "1aaaeb22e8d034e77a3081a514770a00556dcd95" ], "worker_id": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "five" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this work, we apply an automated neural ensemble annotation process for dialogue act labeling. Several neural models are trained with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10 and used for inferring dialogue acts on the emotion datasets. We ensemble five model output labels by checking majority occurrences (most of the model labels are the same) and ranking confidence values of the models. We have annotated two potential multi-modal conversation datasets for emotion recognition: IEMOCAP (Interactive Emotional dyadic MOtion CAPture database) BIBREF6 and MELD (Multimodal EmotionLines Dataset) BIBREF8. Figure FIGREF2, shows an example of dialogue acts with emotion and sentiment labels from the MELD dataset. We confirmed the reliability of annotations with inter-annotator metrics. We analysed the co-occurrences of the dialogue act and emotion labels and discovered a key relationship between them; certain dialogue acts of the utterances show significant and useful association with respective emotional states. For example, Accept/Agree dialogue act often occurs with the Joy emotion while Reject with Anger, Acknowledgements with Surprise, Thanking with Joy, and Apology with Sadness, etc. The detailed analysis of the emotional dialogue acts (EDAs) and annotated datasets are being made available at the SECURE EU Project website.", "We adopted the neural architectures based on Bothe et al. bothe2018discourse where two variants are: non-context model (classifying at utterance level) and context model (recognizing the dialogue act of the current utterance given a few preceding utterances). From conversational analysis using dialogue acts in Bothe et al. bothe2018interspeech, we learned that the preceding two utterances contribute significantly to recognizing the dialogue act of the current utterance. Hence, we adapt this setting for the context model and create a pool of annotators using recurrent neural networks (RNNs). RNNs can model the contextual information in the sequence of words of an utterance and in the sequence of utterances of a dialogue. Each word in an utterance is represented with a word embedding vector of dimension 1024. 
We use the word embedding vectors from pre-trained ELMo (Embeddings from Language Models) embeddings BIBREF22. We have a pool of five neural annotators as shown in Figure FIGREF6. Our online tool called Discourse-Wizard is available to practice automated dialogue act labeling. In this tool we use the same neural architectures but model-trained embeddings (while, in this work we use pre-trained ELMo embeddings, as they are better performant but computationally and size-wise expensive to be hosted in the online tool). The annotators are:" ], "highlighted_evidence": [ "n this work, we apply an automated neural ensemble annotation process for dialogue act labeling. Several neural models are trained with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10 and used for inferring dialogue acts on the emotion datasets. We ensemble five model output labels by checking majority occurrences (most of the model labels are the same) and ranking confidence values of the models.", "We adopted the neural architectures based on Bothe et al. bothe2018discourse where two variants are: non-context model (classifying at utterance level) and context model (recognizing the dialogue act of the current utterance given a few preceding utterances)." ] } ], "annotation_id": [ "058a263bde2c426f0df7b096a445571b1cca62b8" ], "worker_id": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ] } ] }
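For reference, the two agreement coefficients reported for the neural annotators above — Krippendorff's alpha at the nominal level and Fleiss' kappa — can be computed as follows. This is a generic sketch for the case where every item is labelled by the same, complete set of annotators; it is not the authors' implementation.

```python
from collections import Counter


def krippendorff_alpha_nominal(ratings):
    """ratings: list of items, each a list of category labels (one per
    annotator, no missing values, at least two annotators per item)."""
    n = sum(len(item) for item in ratings)                 # pairable values
    marginals = Counter(lab for item in ratings for lab in item)
    disagreement = 0.0
    for item in ratings:
        m = len(item)
        counts = Counter(item)
        # ordered disagreeing pairs within the item, scaled by 1/(m-1)
        disagreement += (m * m - sum(c * c for c in counts.values())) / (m - 1)
    expected = n * n - sum(c * c for c in marginals.values())
    # degenerate case: only one category ever used -> treat as full agreement
    return 1.0 if expected == 0 else 1.0 - (n - 1) * disagreement / expected


def fleiss_kappa(ratings):
    """ratings: list of items, each labelled by the same number of annotators."""
    n_raters = len(ratings[0])
    n_items = len(ratings)
    totals = Counter()
    p_bar = 0.0
    for item in ratings:
        counts = Counter(item)
        totals.update(counts)
        p_bar += (sum(c * c for c in counts.values()) - n_raters) / (n_raters * (n_raters - 1))
    p_bar /= n_items
    p_e = sum((totals[c] / (n_items * n_raters)) ** 2 for c in totals)
    return 1.0 if p_e == 1 else (p_bar - p_e) / (1 - p_e)


labels = [["sd", "sd", "sd", "sd", "sd"], ["qy", "qy", "sd", "qy", "qy"]]
print(krippendorff_alpha_nominal(labels), fleiss_kappa(labels))   # ~0.625  ~0.583
```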
{ "caption": [ "Figure 1: Emotional Dialogue Acts: Example of a dialogue from MELD representing emotions and sentiment (rectangular boxes), in our work, we add dialogue acts (rounded boxes). Image source Poria et al. (2019).", "Figure 2: Setting of the annotation process of the EDAs, above example utterances (with speaker identity) and emotion labels are from IEMOCAP database.", "Figure 3: Recurrent neural attention architecture with the utterance-level and context-based models.", "Table 2: Number of utterances per DA in respective datasets. All values are in percentages (%) of the total number of utterances. IEMO is for IEMOCAP.", "Table 1: Annotations Statistics of EDAs - AM: All Absolute Match (in %), CM: Context-based Models Absolute Match (in %, matched all context models or at least two context models matched with one non-context model), BM: Based-on Confidence Ranking, and NM: No Match (in %) (these labeled as ‘xx’: determined in EDAs).", "Figure 4: EDAs: Visualizing co-occurrence of utterances with respect to emotion states in the particular dialogue acts (only major and significant are shown here). IE: IEMOCAP, ME: MELD Emotion and MS: MELD Sentiment.", "Table 3: Annotations Metrics of EDAs - α: Krippendorff’s Alpha coefficient, k: Fleiss’ Kappa score, and SCCM: Spearman Correlation between Context-based Models.", "Table 4: Examples of EDAs with annotation from the MELD dataset. Emotion and sentiment labels are given in the dataset, while EDAs are determined by our ensemble of models. P-DA: previous utterance dialogue act." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "4-Table2-1.png", "4-Table1-1.png", "5-Figure4-1.png", "5-Table3-1.png", "6-Table4-1.png" ] }
1907.00758
Synchronising audio and ultrasound by learning cross-modal embeddings
Audiovisual synchronisation is the task of determining the time offset between speech audio and a video recording of the articulators. In child speech therapy, audio and ultrasound videos of the tongue are captured using instruments which rely on hardware to synchronise the two modalities at recording time. Hardware synchronisation can fail in practice, and no mechanism exists to synchronise the signals post hoc. To address this problem, we employ a two-stream neural network which exploits the correlation between the two modalities to find the offset. We train our model on recordings from 69 speakers, and show that it correctly synchronises 82.9% of test utterances from unseen therapy sessions and unseen speakers, thus considerably reducing the number of utterances to be manually synchronised. An analysis of model performance on the test utterances shows that directed phone articulations are more difficult to automatically synchronise compared to utterances containing natural variation in speech such as words, sentences, or conversations.
{ "section_name": [ "Introduction", "Background", "Audiovisual synchronisation for lip videos", "Lip videos vs. ultrasound tongue imaging (UTI)", "Model", "Data", "Preparing the data", "Creating samples using a self-supervision strategy", "Dividing samples for training, validation and testing", "Experiments", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Ultrasound tongue imaging (UTI) is a non-invasive way of observing the vocal tract during speech production BIBREF0 . Instrumental speech therapy relies on capturing ultrasound videos of the patient's tongue simultaneously with their speech audio in order to provide a diagnosis, design treatments, and measure therapy progress BIBREF1 . The two modalities must be correctly synchronised, with a minimum shift of INLINEFORM0 45ms if the audio leads and INLINEFORM1 125ms if the audio lags, based on synchronisation standards for broadcast audiovisual signals BIBREF2 . Errors beyond this range can render the data unusable – indeed, synchronisation errors do occur, resulting in significant wasted effort if not corrected. No mechanism currently exists to automatically correct these errors, and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 , it is time consuming and tedious.", "In this work, we exploit the correlation between the two modalities to synchronise them. We utilise a two-stream neural network architecture for the task BIBREF4 , using as our only source of supervision pairs of ultrasound and audio segments which have been automatically generated and labelled as positive (correctly synchronised) or negative (randomly desynchronised); a process known as self-supervision BIBREF5 . We demonstrate how this approach enables us to correctly synchronise the majority of utterances in our test set, and in particular, those exhibiting natural variation in speech.", "Section SECREF2 reviews existing approaches for audiovisual synchronisation, and describes the challenges specifically associated with UTI data, compared with lip videos for which automatic synchronisation has been previously attempted. Section SECREF3 describes our approach. Section SECREF4 describes the data we use, including data preprocessing and positive and negative sample creation using a self-supervision strategy. Section SECREF5 describes our experiments, followed by an analysis of the results. We conclude with a summary and future directions in Section SECREF6 ." ], [ "Ultrasound and audio are recorded using separate components, and hardware synchronisation is achieved by translating information from the visual signal into audio at recording time. Specifically, for every ultrasound frame recorded, the ultrasound beam-forming unit releases a pulse signal, which is translated by an external hardware synchroniser into an audio pulse signal and captured by the sound card BIBREF6 , BIBREF7 . Synchronisation is achieved by aligning the ultrasound frames with the audio pulse signal, which is already time-aligned with the speech audio BIBREF8 .", "Hardware synchronisation can fail for a number of reasons. The synchroniser is an external device which needs to be correctly connected and operated by therapists. Incorrect use can lead to missing the pulse signal, which would cause synchronisation to fail for entire therapy sessions BIBREF9 . Furthermore, low-quality sound cards report an approximate, rather than the exact, sample rate which leads to errors in the offset calculation BIBREF8 . 
There is currently no recovery mechanism for when synchronisation fails, and to the best of our knowledge, there has been no prior work on automatically correcting the synchronisation error between ultrasound tongue videos and audio. There is, however, some prior work on synchronising lip movement with audio which we describe next." ], [ "Speech audio is generated by articulatory movement and is therefore fundamentally correlated with other manifestations of this movement, such as lip or tongue videos BIBREF10 . An alternative to the hardware approach is to exploit this correlation to find the offset. Previous approaches have investigated the effects of using different representations and feature extraction techniques on finding dimensions of high correlation BIBREF11 , BIBREF12 , BIBREF13 . More recently, neural networks, which learn features directly from input, have been employed for the task. SyncNet BIBREF4 uses a two-stream neural network and self-supervision to learn cross-modal embeddings, which are then used to synchronise audio with lip videos. It achieves near perfect accuracy ( INLINEFORM0 99 INLINEFORM1 ) using manual evaluation where lip-sync error is not detectable to a human. It has since been extended to use different sample creation methods for self-supervision BIBREF5 , BIBREF14 and different training objectives BIBREF14 . We adopt the original approach BIBREF4 , as it is both simpler and significantly less expensive to train than the more recent variants." ], [ "Videos of lip movement can be obtained from various sources including TV, films, and YouTube, and are often cropped to include only the lips BIBREF4 . UTI data, on the other hand, is recorded in clinics by trained therapists BIBREF15 . An ultrasound probe placed under the chin of the patient captures the midsaggital view of their oral cavity as they speak. UTI data consists of sequences of 2D matrices of raw ultrasound reflection data, which can be interpreted as greyscale images BIBREF15 . There are several challenges specifically associated with UTI data compared with lip videos, which can potentially lower the performance of models relative to results reported on lip video data. These include:", "Poor image quality: Ultrasound data is noisy, containing arbitrary high-contrast edges, speckle noise, artefacts, and interruptions to the tongue's surface BIBREF0 , BIBREF16 , BIBREF17 . The oral cavity is not entirely visible, missing the lips, the palate, and the pharyngeal wall, and visually interpreting the data requires specialised training. In contrast, videos of lip movement are of much higher quality and suffer from none of these issues.", "Probe placement variation: Surfaces that are orthogonal to the ultrasound beam image better than those at an angle. Small shifts in probe placement during recording lead to high variation between otherwise similar tongue shapes BIBREF0 , BIBREF18 , BIBREF17 . In contrast, while the scaling and rotations of lip videos lead to variation, they do not lead to a degradation in image quality.", "Inter-speaker variation: Age and physiology affect the quality of ultrasound data, and subjects with smaller vocal tracts and less tissue fat image better BIBREF0 , BIBREF17 . Dryness in the mouth, as a result of nervousness during speech therapy, leads to poor imaging. While inter-speaker variation is expected in lip videos, again, the variation does not lead to quality degradation.", "Limited amount of data: Existing UTI datasets are considerably smaller than lip movement datasets. 
Consider for example VoxCeleb and VoxCeleb2 used to train SyncNet BIBREF4 , BIBREF14 , which together contain 1 million utterances from 7,363 identities BIBREF19 , BIBREF20 . In contrast, the UltraSuite repository (used in this work) contains 13,815 spoken utterances from 86 identities.", "Uncorrelated segments: Speech therapy data contains interactions between the therapist and patient. The audio therefore contains speech from both speakers, while the ultrasound captures only the patient's tongue BIBREF15 . As a result, parts of the recordings will consist of completely uncorrelated audio and ultrasound. This issue is similar to that of dubbed voices in lip videos BIBREF4 , but is more prevalent in speech therapy data." ], [ "We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data INLINEFORM0 (ultrasound) and audio data INLINEFORM1 (MFCC), which have different shapes, are mapped to low dimensional embeddings INLINEFORM2 (visual) and INLINEFORM3 (audio) of the same size: DISPLAYFORM0 ", "The model is trained using a contrastive loss function BIBREF21 , BIBREF22 , INLINEFORM0 , which minimises the Euclidean distance INLINEFORM1 between INLINEFORM2 and INLINEFORM3 for positive pairs ( INLINEFORM4 ), and maximises it for negative pairs ( INLINEFORM5 ), for a number of training samples INLINEFORM6 : DISPLAYFORM0 ", "Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 )." ], [ "For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details.", "Each utterance consists of 3 files: audio, ultrasound, and parameter. The audio file is a RIFF wave file, sampled at 22.05 KHz, containing the speech of the child and therapist. 
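Before detailing the remaining files of each utterance, the two-stream mapping and contrastive objective just described can be made concrete with a short sketch. This is a minimal, hypothetical PyTorch rendering rather than the authors' code: the paper's streams are convolutional, whereas small fully connected stacks over flattened windows are used here to keep the sketch short, and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamSync(nn.Module):
    """Maps an ultrasound window and an MFCC window to embeddings of equal size."""
    def __init__(self, ultra_dim, mfcc_dim, embed_dim=64):
        super().__init__()
        # two asymmetric streams with no shared parameters
        self.visual = nn.Sequential(nn.Linear(ultra_dim, 256), nn.ReLU(),
                                    nn.Linear(256, embed_dim))
        self.audio = nn.Sequential(nn.Linear(mfcc_dim, 256), nn.ReLU(),
                                   nn.Linear(256, embed_dim))

    def forward(self, ultra, mfcc):
        return self.visual(ultra), self.audio(mfcc)

def contrastive_loss(v, a, y, margin=1.0):
    """y = 1 for correctly synchronised pairs, 0 for randomly desynchronised ones."""
    d = F.pairwise_distance(v, a)                         # Euclidean distance
    return torch.mean(y * d.pow(2) +
                      (1 - y) * torch.clamp(margin - d, min=0).pow(2))
```

At prediction time only the distance between the two embeddings is needed, which is what the offset-prediction procedure later in the paper averages over candidate offsets.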
The ultrasound file consists of a sequence of ultrasound frames capturing the midsagittal view of the child's tongue. A single ultrasound frame is recorded as a 2D matrix where each column represents the ultrasound reflection intensities along a single scan line. Each ultrasound frame consists of 63 scan lines of 412 data points each, and is sampled at a rate of INLINEFORM0 121.5 fps. Raw ultrasound frames can be visualised as greyscale images and can thus be interpreted as videos. The parameter file contains the synchronisation offset value (in milliseconds), determined using hardware synchronisation at recording time and confirmed by the therapists to be correct for this dataset." ], [ "First, we exclude utterances of type “Non-speech\" (E) from our training data (and statistics). These are coughs recorded to obtain additional tongue shapes, or swallowing motions recorded to capture a trace of the hard palate. Both of these rarely contain audible content and are therefore not relevant to our task. Next, we apply the offset, which should be positive if the audio leads and negative if the audio lags. In this dataset, the offset is always positive. We apply it by cropping the leading audio and trimming the end of the longer signal to match the duration.", "To process the ultrasound more efficiently, we first reduce the frame rate from INLINEFORM0 121.5 fps to INLINEFORM1 24.3 fps by retaining 1 out of every 5 frames. We then downsample by a factor of (1, 3), shrinking the frame size from 63x412 to 63x138 using max pixel value. This retains the number of ultrasound vectors (63), but reduces the number of pixels per vector (from 412 to 138).", "The final preprocessing step is to remove empty regions. UltraSuite was previously anonymised by zero-ing segments of audio which contained personally identifiable information. As a preprocessing step, we remove the zero regions from audio and corresponding ultrasound. We additionally experimented with removing regions of silence using voice activity detection, but obtained a higher performance by retaining them." ], [ "To train our model we need positive and negative training pairs. The model ingests short clips from each modality of INLINEFORM0 200ms long, calculated as INLINEFORM1 , where INLINEFORM2 is the time window, INLINEFORM3 is the number of ultrasound frames per window (5 in our case), and INLINEFORM4 is the ultrasound frame rate of the utterance ( INLINEFORM5 24.3 fps). For each recording, we split the ultrasound into non-overlapping windows of 5 frames each. We extract MFCC features (13 cepstral coefficients) from the audio using a window length of INLINEFORM6 20ms, calculated as INLINEFORM7 , and a step size of INLINEFORM8 10ms, calculated as INLINEFORM9 . This gives us the input sizes shown in Figure FIGREF1 .", "Positive samples are pairs of ultrasound windows and the corresponding MFCC frames. To create negative samples, we randomise pairings of ultrasound windows to MFCC frames within the same utterance, generating as many negative as positive samples to achieve a balanced dataset. We obtain 243,764 samples for UXTD (13.5hrs), 333,526 for UXSSD (18.5hrs), and 572,078 for UPX (31.8 hrs), or a total 1,149,368 samples (63.9hrs) which we divide into training, validation and test sets." ], [ "We aim to test whether our model generalises to data from new speakers, and to data from new sessions recorded with known speakers.
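Before turning to how samples are divided across speakers and sessions, the frame-rate reduction, the (1, 3) max-pool downsampling, and the positive/negative pairing described above can be sketched in NumPy. Shapes follow the text (63 scan lines of 412 points per frame, 5 frames per window); the helper names and the padding detail are our own assumptions.

```python
import numpy as np

def downsample_ultrasound(frames):
    """frames: (n_frames, 63, 412) raw ultrasound at ~121.5 fps."""
    frames = frames[::5]                           # keep 1 in 5 frames -> ~24.3 fps
    n, h, w = frames.shape
    # max-pool by a factor of (1, 3) along each scan line:
    # 412 -> 137 in this sketch (the paper reports 138, presumably via padding)
    w3 = (w // 3) * 3
    return frames[:, :, :w3].reshape(n, h, w3 // 3, 3).max(axis=-1)

def make_pairs(ultra_windows, mfcc_windows, rng=None):
    """Positive pairs keep the original alignment; negatives shuffle the MFCC
    windows within the same utterance, giving a balanced set.  (A fuller
    implementation would re-draw indices that map a window back to itself.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    pos = [(u, m, 1) for u, m in zip(ultra_windows, mfcc_windows)]
    perm = rng.permutation(len(mfcc_windows))
    neg = [(u, mfcc_windows[j], 0) for u, j in zip(ultra_windows, perm)]
    return pos + neg
```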
To simulate this, we select a group of speakers from each dataset, and hold out all of their data either for validation or for testing. Additionally, we hold out one entire session from each of the remaining speakers, and use the rest of their data for training. We aim to reserve approximately 80% of the created samples for training, 10% for validation, and 10% for testing, and select speakers and sessions on this basis.", "Each speaker in UXTD recorded 1 session, but sessions are of different durations. We reserve 45 speakers for training, 5 for validation, and 8 for testing. UXSSD and UPX contain fewer speakers, but each recorded multiple sessions. We hold out 1 speaker for validation and 1 for testing from each of the two datasets. We also hold out a session from the first half of the remaining speakers for validation, and a session from the second half of the remaining speakers for testing. This selection process results in 909,858 (pooled) samples for training (50.5hrs), 128,414 for validation (7.1hrs) and 111,096 for testing (6.2hrs). From the training set, we create shuffled batches which are balanced in the number of positive and negative samples." ], [ "We select the hyper-parameters of our model empirically by tuning on the validation set (Table ). Hyper-parameter exploration is guided by BIBREF24 . We train our model using the Adam optimiser BIBREF25 with a learning rate of 0.001, a batch size of 64 samples, and for 20 epochs. We implement learning rate scheduling which reduces the learning rate by a factor of 0.1 when the validation loss plateaus for 2 epochs.", "Upon convergence, the model achieves 0.193 training loss, 0.215 validation loss, and 0.213 test loss. By placing a threshold of 0.5 on predicted distances, the model achieves 69.9% binary classification accuracy on training samples, 64.7% on validation samples, and 65.3% on test samples.", "Synchronisation offset prediction: Section SECREF3 described briefly how to use our model to predict the synchronisation offset for test utterances. To obtain a discretised set of offset candidates, we retrieve the true offsets of the training utterances, and find that they fall in the range [0, 179] ms. We discretise this range taking 45ms steps and rendering 40 candidate values (45ms is the smaller of the absolute values of the detectability boundaries, INLINEFORM0 125 and INLINEFORM1 45 ms). We bin the true offsets in the candidate set and discard empty bins, reducing the set from 40 to 24 values. We consider all 24 candidates for each test utterance. We do this by aligning the two signals according to the given candidate, then producing the non-overlapping windows of ultrasound and MFCC pairs, as we did when preparing the data. We then use our model to predict the Euclidean distance for each pair, and average the distances. Finally, we select the offset with the smallest average distance as our prediction.", "Evaluation: Because the true offsets are known, we evaluate the performance of our model by computing the discrepancy between the predicted and the true offset for each utterance. If the discrepancy falls within the minimum detectability range ( INLINEFORM0 125 INLINEFORM1 INLINEFORM2 INLINEFORM3 INLINEFORM4 45) then the prediction is correct. Random prediction (averaged over 1000 runs) yields 14.6% accuracy with a mean and standard deviation discrepancy of 328 INLINEFORM5 518ms. We achieve 82.9% accuracy with a mean and standard deviation discrepancy of 32 INLINEFORM6 223ms. 
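Before the comparison with SyncNet below, note that the training configuration reported above (Adam, learning rate 0.001, batch size 64, 20 epochs, decay by a factor of 0.1 after a two-epoch validation plateau) maps directly onto standard PyTorch utilities. The loop is schematic: `train_loader`, `val_loader`, and `evaluate` are assumed helpers, and the model and loss are the sketches given earlier rather than the authors' implementation.

```python
import torch

model = TwoStreamSync(ultra_dim=5 * 63 * 138, mfcc_dim=20 * 13)
optimiser = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimiser, factor=0.1, patience=2)             # plateau of 2 epochs

for epoch in range(20):
    model.train()
    for ultra, mfcc, label in train_loader:        # shuffled, balanced batches of 64
        optimiser.zero_grad()
        v, a = model(ultra, mfcc)
        loss = contrastive_loss(v, a, label)
        loss.backward()
        optimiser.step()
    val_loss = evaluate(model, val_loader)         # assumed validation helper
    scheduler.step(val_loss)                       # reduce LR when the loss plateaus
```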
SyncNet reports INLINEFORM7 99% accuracy on lip video synchronisation using a manual evaluation where the lip error is not detectable to a human observer BIBREF4 . However, we argue that our data is more challenging (Section SECREF4 ).", "Analysis: We analyse the performance of our model across different conditions. Table shows the model accuracy broken down by utterance type. The model achieves 91.2% accuracy on utterances containing words, sentences, and conversations, all of which exhibit natural variation in speech. The model is less successful with Articulatory utterances, which contain isolated phones occurring once or repeated (e.g., “sh sh sh\"). Such utterances contain subtle tongue movement, making it more challenging to correlate the visual signal with the audio. And indeed, the model finds the correct offset for only 55.9% of Articulatory utterances. A further analysis shows that 84.4% (N INLINEFORM0 90) of stop consonants (e.g., “t”), which are relied upon by therapists as the most salient audiovisual synchronisation cues BIBREF3 , are correctly synchronised by our model, compared to 48.6% (N INLINEFORM1 140) of vowels, which contain less distinct movement and are also more challenging for therapists to synchronise.", "Table shows accuracy broken down by test set. The model performs better on test sets containing entirely new speakers compared with test sets containing new sessions from previously seen speakers. This is contrary to expectation but could be due to the UTI challenges (described in Section SECREF4 ) affecting different subsets to different degrees. Table shows that the model performs considerably worse on UXTD compared to other test sets (64.8% accuracy). However, a further breakdown of the results in Table by test set and utterance type explains this poor performance; the majority of UXTD utterances (71%) are Articulatory utterances which the model struggles to correctly synchronise. In fact, for other utterance types (where there is a large enough sample, such as Words) performance on UXTD is on par with other test sets." ], [ "We have shown how a two-stream neural network originally designed to synchronise lip videos with audio can be used to synchronise UTI data with audio. Our model exploits the correlation between the modalities to learn cross-model embeddings which are used to find the synchronisation offset. It generalises well to held-out data, allowing us to correctly synchronise the majority of test utterances. The model is best-suited to utterances which contain natural variation in speech and least suited to those containing isolated phones, with the exception of stop consonants. Future directions include integrating the model and synchronisation offset prediction process into speech therapy software BIBREF6 , BIBREF7 , and using the learned embeddings for other tasks such as active speaker detection BIBREF4 ." ], [ "Supported by EPSRC Healthcare Partnerships Programme grant number EP/P02338X/1 (Ultrax2020)." ] ] }
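As a companion to the offset-prediction and evaluation procedure described in the experiments, the sketch below averages the embedding distance over an utterance for each candidate offset, picks the argmin, and counts a prediction as correct when its discrepancy from the true offset lies inside the detectability range of roughly -125 ms (audio lagging) to +45 ms (audio leading). Here `model` is any callable returning a pair of embedding vectors, and `window_pairs_at_offset` is an assumed helper that realigns the two signals at a given offset and re-cuts the non-overlapping windows.

```python
import numpy as np

def predict_offset(model, utterance, candidate_offsets_ms, window_pairs_at_offset):
    """Return the candidate offset with the smallest mean embedding distance."""
    mean_dists = []
    for offset in candidate_offsets_ms:
        dists = []
        for ultra, mfcc in window_pairs_at_offset(utterance, offset):
            v, a = model(ultra, mfcc)
            dists.append(float(np.linalg.norm(np.asarray(v) - np.asarray(a))))
        mean_dists.append(np.mean(dists))
    return candidate_offsets_ms[int(np.argmin(mean_dists))]

def is_correct(predicted_ms, true_ms, lead_limit=45, lag_limit=125):
    """Correct if the discrepancy falls within the audio-visual detectability range."""
    discrepancy = predicted_ms - true_ms
    return -lag_limit <= discrepancy <= lead_limit
```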
{ "question": [ "Do they compare their neural network against any other model?", "Do they annotate their own dataset or use an existing one?", "Does their neural network predict a single offset in a recording?", "What kind of neural network architecture do they use?" ], "question_id": [ "73d657d6faed0c11c65b1ab60e553db57f4971ca", "9ef182b61461d0d8b6feb1d6174796ccde290a15", "f6f8054f327a2c084a73faca16cf24a180c094ae", "b8f711179a468fec9a0d8a961fb0f51894af4b31" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "89cd66698512e65e6d240af77f3fc829fe373b2a" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Use an existing one", "evidence": [ "For our experiments, we select a dataset whose utterances have been correctly synchronised at recording time. This allows us to control how the model is trained and verify its performance using ground truth synchronisation offsets. We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). See BIBREF15 for details." ], "highlighted_evidence": [ "We use UltraSuite: a repository of ultrasound and acoustic data gathered from child speech therapy sessions BIBREF15 . We used all three datasets from the repository: UXTD (recorded with typically developing children), and UXSSD and UPX (recorded with children with speech sound disorders). In total, the dataset contains 13,815 spoken utterances from 86 speakers, corresponding to 35.9 hours of recordings. The utterances have been categorised by the type of task the child was given, and are labelled as: Words (A), Non-words (B), Sentence (C), Articulatory (D), Non-speech (E), or Conversations (F). " ] } ], "annotation_id": [ "c8d789113074b382993be027d1efa7e2d6889f00" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 )." 
], "highlighted_evidence": [ "Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. " ] } ], "annotation_id": [ "2547291c6f433f23fd04b97d9bf6228d47f28c18" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "CNN", "evidence": [ "We adopt the approach in BIBREF4 , modifying it to synchronise audio with UTI data. Our model, UltraSync, consists of two streams: the first takes as input a short segment of ultrasound and the second takes as input the corresponding audio. Both inputs are high-dimensional and are of different sizes. The objective is to learn a mapping from the inputs to a pair of low-dimensional vectors of the same length, such that the Euclidean distance between the two vectors is small when they correlate and large otherwise BIBREF21 , BIBREF22 . This model can be viewed as an extension of a siamese neural network BIBREF23 but with two asymmetrical streams and no shared parameters. Figure FIGREF1 illustrates the main architecture. The visual data INLINEFORM0 (ultrasound) and audio data INLINEFORM1 (MFCC), which have different shapes, are mapped to low dimensional embeddings INLINEFORM2 (visual) and INLINEFORM3 (audio) of the same size: DISPLAYFORM0", "FLOAT SELECTED: Figure 1: UltraSync maps high dimensional inputs to low dimensional vectors using a contrastive loss function, such that the Euclidean distance is small between vectors from positive pairs and large otherwise. Inputs span '200ms: 5 consecutive raw ultrasound frames on one stream and 20 frames of the corresponding MFCC features on the other." ], "highlighted_evidence": [ "Figure FIGREF1 illustrates the main architecture. ", "FLOAT SELECTED: Figure 1: UltraSync maps high dimensional inputs to low dimensional vectors using a contrastive loss function, such that the Euclidean distance is small between vectors from positive pairs and large otherwise. Inputs span '200ms: 5 consecutive raw ultrasound frames on one stream and 20 frames of the corresponding MFCC features on the other." ] } ], "annotation_id": [ "05c266e2b0ab0b45fca7c0b09534b1870aa75efd" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: UltraSync maps high dimensional inputs to low dimensional vectors using a contrastive loss function, such that the Euclidean distance is small between vectors from positive pairs and large otherwise. Inputs span '200ms: 5 consecutive raw ultrasound frames on one stream and 20 frames of the corresponding MFCC features on the other.", "Table 1: Each stream has 3 convolutional layers followed by 2 fully-connected layers. Fully connected layers have 64 units each. For convolutional layers, we specify the number of filters and their receptive field size as “num×size×size” followed by the max-pooling downsampling factor. Each layer is followed by batch-normalisation then ReLU activation. Max-pooling is applied after the activation function.", "Table 2: Model accuracy per test set and utterance type. Performance is consistent across test sets for Words (A) where the sample sizes are large, and less consistent for types where the sample sizes are small. 71% of UXTD utterances are Articulatory (D), which explains the low performance on this test set (64.8% in Table 4). In contrast, performance on UXTD Words (A) is comparable to other test sets.", "Table 3: Model accuracy per utterance type, where N is the number of utterances. Performance is best on utterances containing natural variation in speech, such as Words (A) and Sentences (C). Non-words (B) and Conversations (F) also exhibit this variation, but due to smaller sample sizes the lower percentages are not representative. Performance is lowest on Articulatory utterances (D), which contain isolated phones. The mean and standard deviation of the discrepancy between the prediction and the true offset are also shown in milliseconds.", "Table 4: Model accuracy per test set. Contrary to expectation, performance is better on test sets containing new speakers than on test sets containing new sessions from known speakers. The performance on UXTD is considerably lower than other test sets, due to it containing a large number of Articulatory utterances, which are difficult to synchronise (see Tables 3 and 2)." ], "file": [ "1-Figure1-1.png", "3-Table1-1.png", "4-Table2-1.png", "4-Table3-1.png", "4-Table4-1.png" ] }
1710.06536
Basic tasks of sentiment analysis
Subjectivity detection is the task of identifying objective and subjective sentences. Objective sentences are those which do not exhibit any sentiment. So, it is desired for a sentiment analysis engine to find and separate the objective sentences for further analysis, e.g., polarity detection. In subjective sentences, opinions can often be expressed on one or multiple topics. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about.
{ "section_name": [ "Affiliation", "Synonyms", "Glossary", "Definition", "Key Points", "Historical Background", "Introduction", "Subjectivity detection", "Aspect-Based Sentiment Analysis", "Preliminaries", "Gaussian Bayesian Networks", "Convolutional Neural Networks", "Convolution Deep Belief Network", " Subjectivity Detection", "Aspect Extraction", "Subjectivity Detection", "Key Applications", "Conclusion", "Future Directions", "Acknowledgement", "Cross References" ], "paragraphs": [ [ "School of Computer Science and Engineering, Nanyang Technological University, Singapore" ], [ "Sentiment Analysis, Subjectivity Detection, Deep Learning Aspect Extraction, Polarity Distribution, Convolutional Neural Network." ], [ "Aspect : Feature related to an opinion target", "Convolution : features made of consecutive words", "BOW : Bag of Words", "NLP : Natural Language Processing", "CNN : Convolutional Neural Network", "LDA : Latent Dirichlet Allocation" ], [ "Subjectivity detection is the task of identifying objective and subjective sentences. Objective sentences are those which do not exhibit any sentiment. So, it is desired for a sentiment analysis engine to find and separate the objective sentences for further analysis e.g., polarity detection. In subjective sentences, opinions can often be expressed on one or multiple topics. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about." ], [ "We consider deep convolutional neural networks where each layer is learned independent of the others resulting in low complexity.", "We model temporal dynamics in product reviews by pre-training the deep CNN using dynamic Gaussian Bayesian networks.", "We combine linguistic aspect mining with CNN features for effective sentiment detection." ], [ "Traditional methods prior to 2001 used hand-crafted templates to identify subjectivity and did not generalize well for resource-deficient languages such as Spanish. Later works published between 2002 and 2009 proposed the use of deep neural networks to automatically learn a dictionary of features (in the form of convolution kernels) that is portable to new languages. Recently, recurrent deep neural networks are being used to model alternating subjective and objective sentences within a single review. Such networks are difficult to train for a large vocabulary of words due to the problem of vanishing gradients. Hence, in this chapter we consider use of heuristics to learn dynamic Gaussian networks to select significant word dependencies between sentences in a single review.", "Further, in order to relation between opinion targets and the corresponding polarity in a review, aspect based opinion mining is used. Explicit aspects were models by several authors using statistical observations such mutual information between noun phrase and the product class. However this method was unable to detect implicit aspects due to high level of noise in the data. Hence, topic modeling was widely used to extract and group aspects, where the latent variable 'topic' is introduced between the observed variables 'document' and 'word'. In this chapter, we demonstrate the use of 'common sense reasoning' when computing word distributions that enable shifting from a syntactic word model to a semantic concept model." 
], [ "While sentiment analysis research has become very popular in the past ten years, most companies and researchers still approach it simply as a polarity detection problem. In reality, sentiment analysis is a `suitcase problem' that requires tackling many natural language processing (NLP) subtasks, including microtext analysis, sarcasm detection, anaphora resolution, subjectivity detection and aspect extraction. In this chapter, we focus on the last two subtasks as they are key for ensuring a minimum level of accuracy in the detection of polarity from social media.", "The two basic issues associated with sentiment analysis on the Web, in fact, are that (1) a lot of factual or non-opinionated information needs to be filtered out and (2) opinions are most times about different aspects of the same product or service rather than on the whole item and reviewers tend to praise some and criticize others. Subjectivity detection, hence, ensures that factual information is filtered out and only opinionated information is passed on to the polarity classifier and aspect extraction enables the correct distribution of polarity among the different features of the opinion target (in stead of having one unique, averaged polarity assigned to it). In this chapter, we offer some insights about each task and apply an ensemble of deep learning and linguistics to tackle both.", "The opportunity to capture the opinion of the general public about social events, political movements, company strategies, marketing campaigns, and product preferences has raised increasing interest of both the scientific community (because of the exciting open challenges) and the business world (because of the remarkable benefits for marketing and financial market prediction). Today, sentiment analysis research has its applications in several different scenarios. There are a good number of companies, both large- and small-scale, that focus on the analysis of opinions and sentiments as part of their mission BIBREF0 . Opinion mining techniques can be used for the creation and automated upkeep of review and opinion aggregation websites, in which opinions are continuously gathered from the Web and not restricted to just product reviews, but also to broader topics such as political issues and brand perception. Sentiment analysis also has a great potential as a sub-component technology for other systems. It can enhance the capabilities of customer relationship management and recommendation systems; for example, allowing users to find out which features customers are particularly interested in or to exclude items that have received overtly negative feedback from recommendation lists. Similarly, it can be used in social communication for troll filtering and to enhance anti-spam systems. Business intelligence is also one of the main factors behind corporate interest in the field of sentiment analysis BIBREF1 .", "Sentiment analysis is a `suitcase' research problem that requires tackling many NLP sub-tasks, including semantic parsing BIBREF2 , named entity recognition BIBREF3 , sarcasm detection BIBREF4 , subjectivity detection and aspect extraction. In opinion mining, different levels of analysis granularity have been proposed, each one having its own advantages and drawbacks BIBREF5 , BIBREF6 . Aspect-based opinion mining BIBREF7 , BIBREF8 focuses on the relations between aspects and document polarity. An aspect, also known as an opinion target, is a concept in which the opinion is expressed in the given document. 
For example, in the sentence, “The screen of my phone is really nice and its resolution is superb” for a phone review contains positive polarity, i.e., the author likes the phone. However, more specifically, the positive opinion is about its screen and resolution; these concepts are thus called opinion targets, or aspects, of this opinion. The task of identifying the aspects in a given opinionated text is called aspect extraction. There are two types of aspects defined in aspect-based opinion mining: explicit aspects and implicit aspects. Explicit aspects are words in the opinionated document that explicitly denote the opinion target. For instance, in the above example, the opinion targets screen and resolution are explicitly mentioned in the text. In contrast, an implicit aspect is a concept that represents the opinion target of an opinionated document but which is not specified explicitly in the text. One can infer that the sentence, “This camera is sleek and very affordable” implicitly contains a positive opinion of the aspects appearance and price of the entity camera. These same aspects would be explicit in an equivalent sentence: “The appearance of this camera is sleek and its price is very affordable.”", "Most of the previous works in aspect term extraction have either used conditional random fields (CRFs) BIBREF9 , BIBREF10 or linguistic patterns BIBREF7 , BIBREF11 . Both of these approaches have their own limitations: CRF is a linear model, so it needs a large number of features to work well; linguistic patterns need to be crafted by hand, and they crucially depend on the grammatical accuracy of the sentences. In this chapter, we apply an ensemble of deep learning and linguistics to tackle both the problem of aspect extraction and subjectivity detection.", "The remainder of this chapter is organized as follows: Section SECREF3 and SECREF4 propose some introductory explanation and some literature for the tasks of subjectivity detection and aspect extraction, respectively; Section SECREF5 illustrates the basic concepts of deep learning adopted in this work; Section SECREF6 describes in detail the proposed algorithm; Section SECREF7 shows evaluation results; finally, Section SECREF9 concludes the chapter." ], [ "Subjectivity detection is an important subtask of sentiment analysis that can prevent a sentiment classifier from considering irrelevant or potentially misleading text in online social platforms such as Twitter and Facebook. Subjective extraction can reduce the amount of review data to only 60 INLINEFORM0 and still produce the same polarity results as full text classification BIBREF12 . This allows analysts in government, commercial and political domains who need to determine the response of people to different crisis events BIBREF12 , BIBREF13 , BIBREF14 . Similarly, online reviews need to be summarized in a manner that allows comparison of opinions, so that a user can clearly see the advantages and weaknesses of each product merely with a single glance, both in unimodal BIBREF15 and multimodal BIBREF16 , BIBREF17 contexts. Further, we can do in-depth opinion assessment, such as finding reasons or aspects BIBREF18 in opinion-bearing texts. For example, INLINEFORM1 , which makes the film INLINEFORM2 . Several works have explored sentiment composition through careful engineering of features or polarity shifting rules on syntactic structures. 
However, sentiment accuracies for classifying a sentence as positive/negative/neutral has not exceeded 60 INLINEFORM3 .", "Early attempts used general subjectivity clues to generate training data from un-annotated text BIBREF19 . Next, bag-of-words (BOW) classifiers were introduced that represent a document as a multi set of its words disregarding grammar and word order. These methods did not work well on short tweets. Co-occurrence matrices also were unable to capture difference in antonyms such as `good/bad' that have similar distributions. Subjectivity detection hence progressed from syntactic to semantic methods in BIBREF19 , where the authors used extraction pattern to represent subjective expressions. For example, the pattern `hijacking' of INLINEFORM0 , looks for the noun `hijacking' and the object of the preposition INLINEFORM1 . Extracted features are used to train machine-learning classifiers such as SVM BIBREF20 and ELM BIBREF21 . Subjectivity detection is also useful for constructing and maintaining sentiment lexicons, as objective words or concepts need to be omitted from them BIBREF22 .", "Since, subjective sentences tend to be longer than neutral sentences, recursive neural networks were proposed where the sentiment class at each node in the parse tree was captured using matrix multiplication of parent nodes BIBREF23 , BIBREF24 . However, the number of possible parent composition functions is exponential, hence in BIBREF25 recursive neural tensor network was introduced that use a single tensor composition function to define multiple bilinear dependencies between words. In BIBREF26 , the authors used logistic regression predictor that defines a hyperplane in the word vector space where a word vectors positive sentiment probability depends on where it lies with respect to this hyperplane. However, it was found that while incorporating words that are more subjective can generally yield better results, the performance gain by employing extra neutral words is less significant BIBREF27 . Another class of probabilistic models called Latent Dirichlet Allocation assumes each document is a mixture of latent topics. Lastly, sentence-level subjectivity detection was integrated into document-level sentiment detection using graphs where each node is a sentence. The contextual constraints between sentences in a graph led to significant improvement in polarity classification BIBREF28 .", "Similarly, in BIBREF29 the authors take advantage of the sequence encoding method for trees and treat them as sequence kernels for sentences. Templates are not suitable for semantic role labeling, because relevant context might be very far away. Hence, deep neural networks have become popular to process text. In word2vec, for example, a word's meaning is simply a signal that helps to classify larger entities such as documents. Every word is mapped to a unique vector, represented by a column in a weight matrix. The concatenation or sum of the vectors is then used as features for prediction of the next word in a sentence BIBREF30 . Related words appear next to each other in a INLINEFORM0 dimensional vector space. Vectorizing them allows us to measure their similarities and cluster them. For semantic role labeling, we need to know the relative position of verbs, hence the features can include prefix, suffix, distance from verbs in the sentence etc. 
However, each feature has a corresponding vector representation in INLINEFORM1 dimensional space learned from the training data.", "Recently, convolutional neural networks (CNNs) have been used for subjectivity detection. In particular, BIBREF31 used recurrent CNNs. These show high accuracy on certain datasets such as Twitter. When we are also concerned with a specific sentence within the context of the previous discussion, the order of the sentences preceding the one at hand results in a sequence of sentences, also known as a time series of sentences BIBREF31 . However, their model suffers from overfitting; hence, in this work we consider deep convolutional neural networks, where temporal information is modeled via dynamic Gaussian Bayesian networks." ], [ "Aspect extraction from opinions was first studied by BIBREF7 . They introduced the distinction between explicit and implicit aspects. However, the authors only dealt with explicit aspects and used a set of rules based on statistical observations. Hu and Liu's method was later improved by BIBREF32 and by BIBREF33 . BIBREF32 assumed the product class is known in advance. Their algorithm detects whether a noun or noun phrase is a product feature by computing the point-wise mutual information between the noun phrase and the product class.", " BIBREF34 presented a method that uses a language model to identify product features. They assumed that product features are more frequent in product reviews than in general natural language text. However, their method seems to have low precision since retrieved aspects are affected by noise. Some methods treated aspect term extraction as a sequence labeling problem and used CRFs for it. Such methods have performed very well on the datasets even in cross-domain experiments BIBREF9 , BIBREF10 .", "Topic modeling has been widely used as a basis to perform extraction and grouping of aspects BIBREF35 , BIBREF36 . Two models were considered: pLSA BIBREF37 and LDA BIBREF38 . Both models introduce a latent variable “topic” between the observable variables “document” and “word” to analyze the semantic topic distribution of documents. In topic models, each document is represented as a random mixture over latent topics, where each topic is characterized by a distribution over words.", "Such methods have been gaining popularity in social media analysis, such as emerging political topic detection in Twitter BIBREF39 . The LDA model defines a Dirichlet probabilistic generative process for the document-topic distribution; in each document, a latent aspect is chosen according to a multinomial distribution, controlled by a Dirichlet prior INLINEFORM0 . Then, given an aspect, a word is extracted according to another multinomial distribution, controlled by another Dirichlet prior INLINEFORM1 . Among existing works employing these models are the extraction of global aspects (such as the brand of a product) and local aspects (such as the property of a product BIBREF40 ), the extraction of key phrases BIBREF41 , the rating of multiple aspects BIBREF42 , and the summarization of aspects and sentiments BIBREF43 . BIBREF44 employed the maximum entropy method to train a switch variable based on POS tags of words and used it to separate aspect and sentiment words.", " BIBREF45 added user feedback to LDA as a response variable related to each document. BIBREF46 proposed a semi-supervised model. DF-LDA BIBREF47 also represents a semi-supervised model, which allows the user to set must-link and cannot-link constraints.
A must-link constraint means that two terms must be in the same topic, while a cannot-link constraint means that two terms cannot be in the same topic. BIBREF48 integrated commonsense in the calculation of word distributions in the LDA algorithm, thus enabling the shift from syntax to semantics in aspect-based sentiment analysis. BIBREF49 proposed two semi-supervised models for product aspect extraction based on the use of seeding aspects. In the category of supervised methods, BIBREF50 employed seed words to guide topic models to learn topics of specific interest to a user, while BIBREF42 and BIBREF51 employed seeding words to extract related product aspects from product reviews. On the other hand, recent approaches using deep CNNs BIBREF52 , BIBREF53 showed significant performance improvement over the state-of-the-art methods on a range of NLP tasks. BIBREF52 fed word embeddings to a CNN to solve standard NLP problems such as named entity recognition (NER), part-of-speech (POS) tagging and semantic role labeling." ], [ "In this section, we briefly review the theoretical concepts necessary to comprehend the present work. We begin with a description of maximum likelihood estimation of edges in dynamic Gaussian Bayesian networks where each node is a word in a sentence. Next, we show that weights in the CNN can be learned by minimizing a global error function that corresponds to an exponential distribution over a linear combination of input sequence of word features.", "Notations : Consider a Gaussian network (GN) with time delays which comprises a set of INLINEFORM0 nodes and observations gathered over INLINEFORM1 instances for all the nodes. Nodes can take real values from a multivariate distribution determined by the parent set. Let the dataset of samples be INLINEFORM2 , where INLINEFORM3 represents the sample value of the INLINEFORM4 random variable in instance INLINEFORM5 . Lastly, let INLINEFORM6 be the set of parent variables regulating variable INLINEFORM7 ." ], [ "In tasks where one is concerned with a specific sentence within the context of the previous discourse, capturing the order of the sequences preceding the one at hand may be particularly crucial.", "We take as given a sequence of sentences INLINEFORM0 , each in turn being a sequence of words so that INLINEFORM1 , where INLINEFORM2 is the length of sentence INLINEFORM3 . Thus, the probability of a word INLINEFORM4 follows the distribution : DISPLAYFORM0 ", "A Bayesian network is a graphical model that represents a joint multivariate probability distribution for a set of random variables BIBREF54 . It is a directed acyclic graph INLINEFORM0 with a set of parameters INLINEFORM1 that represents the strengths of connections by conditional probabilities.", "The BN decomposes the likelihood of node expressions into a product of conditional probabilities by assuming independence of non-descendant nodes, given their parents. DISPLAYFORM0 ", "where INLINEFORM0 denotes the conditional probability of node expression INLINEFORM1 given its parent node expressions INLINEFORM2 , and INLINEFORM3 denotes the maximum likelihood(ML) estimate of the conditional probabilities.", "Figure FIGREF11 (a) illustrates the state space of a Gaussian Bayesian network (GBN) at time instant INLINEFORM0 where each node INLINEFORM1 is a word in the sentence INLINEFORM2 . The connections represent causal dependencies over one or more time instants. 
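For reference, the forms that this subsection describes in prose can be written out explicitly. The block below gives the standard Bayesian-network factorisation of the joint likelihood over parent sets and, anticipating the conditional distributions spelled out just below, the conventional linear-Gaussian conditional; these are textbook expressions under common notation (parent set Pa(X_i), regression weights b_i), offered as a plausible rendering rather than a verbatim recovery of the displayed equations.

```latex
% Factorisation of the joint likelihood assumed by the BN:
\[
  P(X_1,\dots,X_n \mid \mathcal{G}, \Theta)
    \;=\; \prod_{i=1}^{n} p\!\left(X_i \mid \mathbf{Pa}(X_i), \theta_i\right)
\]
% Conventional linear-Gaussian conditional for a node given its parents,
% with weights and conditional variance derived from the covariance structure:
\[
  p\!\left(x_i \mid \mathbf{pa}(x_i)\right)
    = \mathcal{N}\!\left(
        \mu_i + \mathbf{b}_i^{\top}\bigl(\mathbf{x}_{\mathrm{pa}(i)} - \boldsymbol{\mu}_{\mathrm{pa}(i)}\bigr),
        \; \sigma_i^{2}
      \right),
  \quad
  \mathbf{b}_i = \Sigma_{\mathrm{pa}(i)}^{-1}\,\boldsymbol{\sigma}_{i,\mathrm{pa}(i)},
  \quad
  \sigma_i^{2} = \Sigma_{ii}
     - \boldsymbol{\sigma}_{i,\mathrm{pa}(i)}^{\top}\,\Sigma_{\mathrm{pa}(i)}^{-1}\,\boldsymbol{\sigma}_{i,\mathrm{pa}(i)}
\]
```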
The observed state vector of variable INLINEFORM3 is denoted as INLINEFORM4 and the conditional probability of variable INLINEFORM5 given variable INLINEFORM6 is INLINEFORM7 . The optimal Gaussian network INLINEFORM8 is obtained by maximizing the posterior probability of INLINEFORM9 given the data INLINEFORM10 . From Bayes theorem, the optimal Gaussian network INLINEFORM11 is given by: DISPLAYFORM0 ", "where INLINEFORM0 is the probability of the Gaussian network and INLINEFORM1 is the likelihood of the expression data given the Gaussian network.", "Given the set of conditional distributions with parameters INLINEFORM0 , the likelihood of the data is given by DISPLAYFORM0 ", "To find the likelihood in ( EQREF14 ), and to obtain the optimal Gaussian network as in ( EQREF13 ), Gaussian BN assumes that the nodes are multivariate Gaussian. That is, expression of node INLINEFORM0 can be described with mean INLINEFORM1 and covariance matrix INLINEFORM2 of size INLINEFORM3 . The joint probability of the network can be the product of a set of conditional probability distributions given by: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 denotes the regression coefficient matrix, INLINEFORM2 is the conditional variance of INLINEFORM3 given its parent set INLINEFORM4 , INLINEFORM5 is the covariance between observations of INLINEFORM6 and the variables in INLINEFORM7 , and INLINEFORM8 is the covariance matrix of INLINEFORM9 . The acyclic condition of BN does not allow feedback among nodes, and feedback is an essential characteristic of real world GN.", "Therefore, dynamic Bayesian networks have recently become popular in building GN with time delays mainly due to their ability to model causal interactions as well as feedback regulations BIBREF55 . A first-order dynamic BN is defined by a transition network of interactions between a pair of Gaussian networks connecting nodes at time instants INLINEFORM0 and INLINEFORM1 . In time instant INLINEFORM2 , the parents of nodes are those specified in the time instant INLINEFORM3 . Similarly, the Gaussian network of a INLINEFORM4 -order dynamic system is represented by a Gaussian network comprising INLINEFORM5 consecutive time points and INLINEFORM6 nodes, or a graph of INLINEFORM7 nodes. In practice, the sentence data is transformed to a BOW model where each sentence is a vector of frequencies for each word in the vocabulary. Figure FIGREF11 (b) illustrates the state space of a first-order Dynamic GBN models transition networks among words in sentences INLINEFORM8 and INLINEFORM9 in consecutive time points, the lines correspond to first-order edges among the words learned using BOW.", "Hence, a sequence of sentences results in a time series of word frequencies. It can be seen that such a discourse model produces compelling discourse vector representations that are sensitive to the structure of the discourse and promise to capture subtle aspects of discourse comprehension, especially when coupled to further semantic data and unsupervised pre-training." ], [ "The idea behind convolution is to take the dot product of a vector of INLINEFORM0 weights INLINEFORM1 also known as kernel vector with each INLINEFORM2 -gram in the sentence INLINEFORM3 to obtain another sequence of features INLINEFORM4 . DISPLAYFORM0 ", "We then apply a max pooling operation over the feature map and take the maximum value INLINEFORM0 as the feature corresponding to this particular kernel vector. 
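A minimal NumPy rendering of the convolution-plus-max-pooling step just described (one kernel, one pooled feature per sentence); as in the text, the kernel is applied to word vectors rather than individual words. The function name and toy dimensions are our own.

```python
import numpy as np

def conv_max_feature(sentence_vecs, kernel):
    """One convolution kernel over a sentence, followed by max pooling.

    sentence_vecs: (seq_len, d) word vectors for one sentence.
    kernel:        (m, d) weights spanning an m-gram of d-dimensional words.
    Returns the single pooled feature for this kernel.
    """
    m, _ = kernel.shape
    seq_len = sentence_vecs.shape[0]
    feats = [np.sum(kernel * sentence_vecs[j:j + m])   # dot product with each m-gram
             for j in range(seq_len - m + 1)]
    return max(feats)                                  # max pooling over the feature map

# toy usage: 7 words with 4-dimensional embeddings and a 3-gram kernel
rng = np.random.default_rng(0)
print(conv_max_feature(rng.normal(size=(7, 4)), rng.normal(size=(3, 4))))
```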
Similarly, varying kernel vectors and window sizes are used to obtain multiple features BIBREF23 .", "For each word INLINEFORM0 in the vocabulary, an INLINEFORM1 dimensional vector representation is given in a look up table that is learned from the data BIBREF30 . The vector representation of a sentence is hence a concatenation of vectors for individual words. Similarly, we can have look up tables for other features. One might want to provide features other than words if these features are suspected to be helpful. Now, the convolution kernels are applied to word vectors instead of individual words.", "We use these features to train higher layers of the CNN that can represent bigger groups of words in sentences. We denote the feature learned at hidden neuron INLINEFORM0 in layer INLINEFORM1 as INLINEFORM2 . Multiple features may be learned in parallel in the same CNN layer. The features learned in each layer are used to train the next layer DISPLAYFORM0 ", "where * indicates convolution and INLINEFORM0 is a weight kernel for hidden neuron INLINEFORM1 and INLINEFORM2 is the total number of hidden neurons. Training a CNN becomes difficult as the number of layers increases, as the Hessian matrix of second-order derivatives often does not exist. Recently, deep learning has been used to improve the scalability of a model that has inherent parallel computation. This is because hierarchies of modules can provide a compact representation in the form of input-output pairs. Each layer tries to minimize the error between the original state of the input nodes and the state of the input nodes predicted by the hidden neurons.", "This results in a downward coupling between modules. The more abstract representation at the output of a higher layer module is combined with the less abstract representation at the internal nodes from the module in the layer below. In the next section, we describe deep CNN that can have arbitrary number of layers." ], [ "A deep belief network (DBN) is a type of deep neural network that can be viewed as a composite of simple, unsupervised models such as restricted Boltzmann machines (RBMs) where each RBMs hidden layer serves as the visible layer for the next RBM BIBREF56 . RBM is a bipartite graph comprising two layers of neurons: a visible and a hidden layer; it is restricted such that the connections among neurons in the same layer are not allowed. To compute the weights INLINEFORM0 of an RBM, we assume that the probability distribution over the input vector INLINEFORM1 is given as: DISPLAYFORM0 ", "where INLINEFORM0 is a normalisation constant. Computing the maximum likelihood is difficult as it involves solving the normalisation constant, which is a sum of an exponential number of terms. The standard approach is to approximate the average over the distribution with an average over a sample from INLINEFORM1 , obtained by Markov chain Monte Carlo until convergence.", "To train such a multi-layer system, we must compute the gradient of the total energy function INLINEFORM0 with respect to weights in all the layers. To learn these weights and maximize the global energy function, the approximate maximum likelihood contrastive divergence (CD) approach can be used. This method employs each training sample to initialize the visible layer. Next, it uses the Gibbs sampling algorithm to update the hidden layer and then reconstruct the visible layer consecutively, until convergence BIBREF57 . 
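A single contrastive-divergence (CD-1) step for an RBM with Gaussian visible units and binary hidden units, matching the update rules spelled out in the next paragraphs: sigmoid hidden activations, a Gaussian reconstruction of the visibles, and a weight update from the difference between data and reconstruction correlations. This is a one-sample sketch with unit visible variance, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_h, b_v, lr=0.01, rng=None):
    """One CD-1 update for a Gaussian-visible / binary-hidden RBM (unit variance).

    v0: (n_vis,) continuous visible sample;  W: (n_vis, n_hid) weights;
    b_h: (n_hid,) hidden biases;             b_v: (n_vis,) visible biases.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # positive phase: hidden probabilities and sampled binary states given the data
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # negative phase: Gaussian reconstruction of the visibles, then the hiddens again
    v1 = rng.normal(loc=h0 @ W.T + b_v, scale=1.0)
    p_h1 = sigmoid(v1 @ W + b_h)
    # update from the difference between data and reconstruction statistics
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_h += lr * (p_h0 - p_h1)
    b_v += lr * (v0 - v1)
    return W, b_h, b_v
```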
As an example, here we use a logistic regression model to learn the binary hidden neurons and each visible unit is assumed to be a sample from a normal distribution BIBREF58 .", "The continuous state INLINEFORM0 of the hidden neuron INLINEFORM1 , with bias INLINEFORM2 , is a weighted sum over all continuous visible nodes INLINEFORM3 and is given by: DISPLAYFORM0 ", "where INLINEFORM0 is the connection weight to hidden neuron INLINEFORM1 from visible node INLINEFORM2 . The binary state INLINEFORM3 of the hidden neuron can be defined by a sigmoid activation function: DISPLAYFORM0 ", "Similarly, in the next iteration, the binary state of each visible node is reconstructed and labeled as INLINEFORM0 . Here, we determine the value to the visible node INLINEFORM1 , with bias INLINEFORM2 , as a random sample from the normal distribution where the mean is a weighted sum over all binary hidden neurons and is given by: DISPLAYFORM0 ", "where INLINEFORM0 is the connection weight to hidden neuron INLINEFORM1 from visible node INLINEFORM2 . The continuous state INLINEFORM3 is a random sample from INLINEFORM4 , where INLINEFORM5 is the variance of all visible nodes. Lastly, the weights are updated as the difference between the original and reconstructed visible layer using: DISPLAYFORM0 ", "where INLINEFORM0 is the learning rate and INLINEFORM1 is the expected frequency with which visible unit INLINEFORM2 and hidden unit INLINEFORM3 are active together when the visible vectors are sampled from the training set and the hidden units are determined by ( EQREF21 ). Finally, the energy of a DNN can be determined in the final layer using INLINEFORM4 .", "To extend the deep belief networks to convolution deep belief network (CDBN) we simply partition the hidden layer into INLINEFORM0 groups. Each of the INLINEFORM1 groups is associated with a INLINEFORM2 filter where INLINEFORM3 is the width of the kernel and INLINEFORM4 is the number of dimensions in the word vector. Let us assume that the input layer has dimension INLINEFORM5 where INLINEFORM6 is the length of the sentence. Then the convolution operation given by ( EQREF17 ) will result in a hidden layer of INLINEFORM7 groups each of dimension INLINEFORM8 . These learned kernel weights are shared among all hidden units in a particular group. The energy function is now a sum over the energy of individual blocks given by: DISPLAYFORM0 ", "The CNN sentence model preserve the order of words by adopting convolution kernels of gradually increasing sizes that span an increasing number of words and ultimately the entire sentence BIBREF31 . However, several word dependencies may occur across sentences hence, in this work we propose a Bayesian CNN model that uses dynamic Bayesian networks to model a sequence of sentences." ], [ "In this work, we integrate a higher-order GBN for sentences into the first layer of the CNN. The GBN layer of connections INLINEFORM0 is learned using maximum likelihood approach on the BOW model of the training data. The input sequence of sentences INLINEFORM1 are parsed through this layer prior to training the CNN. Only sentences or groups of sentences containing high ML motifs are then used to train the CNN. Hence, motifs are convolved with the input sentences to generate a new set of sentences for pre-training. DISPLAYFORM0 ", "where INLINEFORM0 is the number of high ML motifs and INLINEFORM1 is the training set of sentences in a particular class.", "Fig. 
FIGREF28 illustrates the state space of Bayesian CNN where the input layer is pre-trained using a dynamic GBN with up-to two time point delays shown for three sentences in a review on iPhone. The dashed lines correspond to second-order edges among the words learned using BOW. Each hidden layer does convolution followed by pooling across the length of the sentence. To preserve the order of words we adopt kernels of increasing sizes.", "Since, the number of possible words in the vocabulary is very large, we consider only the top subjectivity clue words to learn the GBN layer. Lastly, In-order to preserve the context of words in conceptual phrases such as `touchscreen'; we consider additional nodes in the Bayesian network for phrases with subjectivity clues. Further, the word embeddings in the CNN are initialized using the log-bilinear language model (LBL) where the INLINEFORM0 dimensional vector representation of each word INLINEFORM1 in ( EQREF10 ) is given by : DISPLAYFORM0 ", "where INLINEFORM0 are the INLINEFORM1 co-occurrence or context matrices computed from the data.", "The time series of sentences is used to generate a sub-set of sentences containing high ML motifs using ( EQREF27 ). The frequency of a sentence in the new dataset will also correspond to the corresponding number of high ML motifs in the sentence. In this way, we are able to increase the weights of the corresponding causal features among words and concepts extracted using Gaussian Bayesian networks.", "The new set of sentences is used to pre-train the deep neural network prior to training with the complete dataset. Each sentence can be divided into chunks or phrases using POS taggers. The phrases have hierarchical structures and combine in distinct ways to form sentences. The INLINEFORM0 -gram kernels learned in the first layer hence correspond to a chunk in the sentence." ], [ "In order to train the CNN for aspect extraction, instead, we used a special training algorithm suitable for sequential data, proposed by BIBREF52 . We will summarize it here, mainly following BIBREF59 . The algorithm trains the neural network by back-propagation in order to maximize the likelihood over training sentences. Consider the network parameter INLINEFORM0 . We say that INLINEFORM1 is the output score for the likelihood of an input INLINEFORM2 to have the tag INLINEFORM3 . Then, the probability to assign the label INLINEFORM4 to INLINEFORM5 is calculated as DISPLAYFORM0 ", " Define the logadd operation as DISPLAYFORM0 ", " then for a training example, the log-likelihood becomes DISPLAYFORM0 ", " In aspect term extraction, the terms can be organized as chunks and are also often surrounded by opinion terms. Hence, it is important to consider sentence structure on a whole in order to obtain additional clues. Let it be given that there are INLINEFORM0 tokens in a sentence and INLINEFORM1 is the tag sequence while INLINEFORM2 is the network score for the INLINEFORM3 -th tag having INLINEFORM4 -th tag. We introduce INLINEFORM5 transition score from moving tag INLINEFORM6 to tag INLINEFORM7 . Then, the score tag for the sentence INLINEFORM8 to have the tag path INLINEFORM9 is defined by: DISPLAYFORM0 ", " This formula represents the tag path probability over all possible paths. Now, from ( EQREF32 ) we can write the log-likelihood DISPLAYFORM0 ", " The number of tag paths has exponential growth. However, using dynamic programming techniques, one can compute in polynomial time the score for all paths that end in a given tag BIBREF52 . 
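A small numpy sketch of such a dynamic program is shown below: it accumulates the logadd of all tag paths ending in each tag, anticipating the recursion made precise next. The emission and transition scores are random placeholders rather than trained network outputs.

```python
import numpy as np

def log_add(scores):
    """Numerically stable log-sum-exp: the 'logadd' operation."""
    m = np.max(scores)
    return m + np.log(np.sum(np.exp(scores - m)))

def all_paths_logadd(emissions, transitions):
    """logadd over the scores of all K**T tag paths, in O(T * K^2) time.

    emissions:   (T, K) network scores for token t carrying tag k.
    transitions: (K, K) scores for moving from tag i to tag j.
    """
    T, K = emissions.shape
    delta = emissions[0].copy()   # logadd of length-1 paths ending in each tag
    for t in range(1, T):
        delta = np.array([log_add(delta + transitions[:, k]) + emissions[t, k]
                          for k in range(K)])
    return log_add(delta)

# Illustrative usage: 6 tokens, 3 tags (e.g. B-A, I-A, O) with random scores
rng = np.random.default_rng(1)
z = all_paths_logadd(rng.normal(size=(6, 3)), rng.normal(size=(3, 3)))
```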
Let INLINEFORM0 denote all paths that end with the tag INLINEFORM1 at the token INLINEFORM2 . Then, using recursion, we obtain DISPLAYFORM0 ", " For the sake of brevity, we shall not delve into details of the recursive procedure, which can be found in BIBREF52 . The next equation gives the log-add for all the paths to the token INLINEFORM0 : DISPLAYFORM0 ", "Using these equations, we can maximize the likelihood of ( EQREF35 ) over all training pairs. For inference, we need to find the best tag path using the Viterbi algorithm; e.g., we need to find the best tag path that minimizes the sentence score ( EQREF34 ).", "The features of an aspect term depend on its surrounding words. Thus, we used a window of 5 words around each word in a sentence, i.e., INLINEFORM0 words. We formed the local features of that window and considered them to be features of the middle word. Then, the feature vector was fed to a CNN.", "The network contained one input layer, two convolution layers, two max-pool layers, and a fully connected layer with softmax output. The first convolution layer consisted of 100 feature maps with filter size 2. The second convolution layer had 50 feature maps with filter size 3. The stride in each convolution layer is 1 as we wanted to tag each word. A max-pooling layer followed each convolution layer. The pool size we use in the max-pool layers was 2. We used regularization with dropout on the penultimate layer with a constraint on L2-norms of the weight vectors, with 30 epochs. The output of each convolution layer was computed using a non-linear function; in our case we used INLINEFORM0 .", "As features, we used word embeddings trained on two different corpora. We also used some additional features and rules to boost the accuracy; see Section UID49 . The CNN produces local features around each word in a sentence and then combines these features into a global feature vector. Since the kernel size for the two convolution layers was different, the dimensionality INLINEFORM0 mentioned in Section SECREF16 was INLINEFORM1 and INLINEFORM2 , respectively. The input layer was INLINEFORM3 , where 65 was the maximum number of words in a sentence, and 300 the dimensionality of the word embeddings used, per each word.", "The process was performed for each word in a sentence. Unlike traditional max-likelihood leaning scheme, we trained the system using propagation after convolving all tokens in the sentence. Namely, we stored the weights, biases, and features for each token after convolution and only back-propagated the error in order to correct them once all tokens were processed using the training scheme as explained in Section SECREF30 .", "If a training instance INLINEFORM0 had INLINEFORM1 words, then we represented the input vector for that instance as INLINEFORM2 . Here, INLINEFORM3 is a INLINEFORM4 -dimensional feature vector for the word INLINEFORM5 . We found that this network architecture produced good results on both of our benchmark datasets. Adding extra layers or changing the pooling size and window size did not contribute to the accuracy much, and instead, only served to increase computational cost.", "In this subsection, we present the data used in our experiments.", " BIBREF64 presented two different neural network models for creating word embeddings. The models were log-linear in nature, trained on large corpora. One of them is a bag-of-words based model called CBOW; it uses word context in order to obtain the embeddings. 
The other one is called skip-gram model; it predicts the word embeddings of surrounding words given the current word. Those authors made a dataset called word2vec publicly available. These 300-dimensional vectors were trained on a 100-billion-word corpus from Google News using the CBOW architecture.", "We trained the CBOW architecture proposed by BIBREF64 on a large Amazon product review dataset developed by BIBREF65 . This dataset consists of 34,686,770 reviews (4.7 billion words) of 2,441,053 Amazon products from June 1995 to March 2013. We kept the word embeddings 300-dimensional (http://sentic.net/AmazonWE.zip). Due to the nature of the text used to train this model, this includes opinionated/affective information, which is not present in ordinary texts such as the Google News corpus.", "For training and evaluation of the proposed approach, we used two corpora:", "Aspect-based sentiment analysis dataset developed by BIBREF66 ; and", "SemEval 2014 dataset. The dataset consists of training and test sets from two domains, Laptop and Restaurant; see Table TABREF52 .", "The annotations in both corpora were encoded according to IOB2, a widely used coding scheme for representing sequences. In this encoding, the first word of each chunk starts with a “B-Type” tag, “I-Type” is the continuation of the chunk and “O” is used to tag a word which is out of the chunk. In our case, we are interested to determine whether a word or chunk is an aspect, so we only have “B–A”, “I–A” and “O” tags for the words.", "Here is an example of IOB2 tags:", "also/O excellent/O operating/B-A system/I-A ,/O size/B-A and/O weight/B-A for/O optimal/O mobility/B-A excellent/O durability/B-A of/O the/O battery/B-A the/O functions/O provided/O by/O the/O trackpad/B-A is/O unmatched/O by/O any/O other/O brand/O", "In this section, we present the features, the representation of the text, and linguistic rules used in our experiments.", "We used the following the features:", "Word Embeddings We used the word embeddings described earlier as features for the network. This way, each word was encoded as 300-dimensional vector, which was fed to the network.", "Part of speech tags Most of the aspect terms are either nouns or noun chunk. This justifies the importance of POS features. We used the POS tag of the word as its additional feature. We used 6 basic parts of speech (noun, verb, adjective, adverb, preposition, conjunction) encoded as a 6- dimensional binary vector. We used Stanford Tagger as a POS tagger.", "These two features vectors were concatenated and fed to CNN.", "So, for each word the final feature vector is 306 dimensional.", "In some of our experiments, we used a set of linguistic patterns (LPs) derived from sentic patterns (LP) BIBREF11 , a linguistic framework based on SenticNet BIBREF22 . SenticNet is a concept-level knowledge base for sentiment analysis built by means of sentic computing BIBREF67 , a multi-disciplinary approach to natural language processing and understanding at the crossroads between affective computing, information extraction, and commonsense reasoning, which exploits both computer and human sciences to better interpret and process social information on the Web. In particular, we used the following linguistic rules:", "Let a noun h be a subject of a word t, which has an adverbial or adjective modifier present in a large sentiment lexicon, SenticNet. 
Then mark h as an aspect.", "Except when the sentence has an auxiliary verb, such as is, was, would, should, could, etc., we apply:", "If the verb t is modified by an adjective or adverb or is in adverbial clause modifier relation with another token, then mark h as an aspect. E.g., in “The battery lasts little”,", "battery is the subject of lasts, which is modified by an adjective modifier little, so battery is marked as an aspect.", "If t has a direct object, a noun n, not found in SenticNet, then mark n an aspect, as, e.g., in “I like the lens of this camera”.", "If a noun h is a complement of a couplar verb, then mark h as an explicit aspect. E.g., in “The camera is nice”, camera is marked as an aspect.", "If a term marked as an aspect by the CNN or the other rules is in a noun-noun compound relationship with another word, then instead form one aspect term composed of both of them. E.g., if in “battery life”, “battery” or “life” is marked as an aspect, then the whole expression is marked as an aspect.", "The above rules 1–4 improve recall by discovering more aspect terms. However, to improve precision, we apply some heuristics: e.g., we remove stop-words such as of, the, a, etc., even if they were marked as aspect terms by the CNN or the other rules.", "We used the Stanford parser to determine syntactic relations in the sentences.", "We combined LPs with the CNN as follows: both LPs and CNN-based classifier are run on the text; then all terms marked by any of the two classifiers are reported as aspect terms, except for those unmarked by the last rule.", "Table TABREF63 shows the accuracy of our aspect term extraction framework in laptop and restaurant domains. The framework gave better accuracy on restaurant domain reviews, because of the lower variety of aspect available terms than in laptop domain. However, in both cases recall was lower than precision.", "Table TABREF63 shows improvement in terms of both precision and recall when the POS feature is used. Pre-trained word embeddings performed better than randomized features (each word's vector initialized randomly); see Table TABREF62 . Amazon embeddings performed better than Google word2vec embeddings. This supports our claim that the former contains opinion-specific information which helped it to outperform the accuracy of Google embeddings trained on more formal text—the Google news corpus. Because of this, in the sequel we only show the performance using Amazon embeddings, which we denote simply as WE (word embeddings).", "In both domains, CNN suffered from low recall, i.e., it missed some valid aspect terms. Linguistic analysis of the syntactic structure of the sentences substantially helped to overcome some drawbacks of machine learning-based analysis. Our experiments showed good improvement in both precision and recall when LPs were used together with CNN; see Table TABREF64 .", "As to the LPs, the removal of stop-words, Rule 1, and Rule 3 were most beneficial. Figure FIGREF66 shows a visualization for the Table TABREF64 . Table TABREF65 and Figure FIGREF61 shows the comparison between the proposed method and the state of the art on the Semeval dataset. It is noted that about 36.55% aspect terms present in the laptop domain corpus are phrase and restaurant corpus consists of 24.56% aspect terms. The performance of detecting aspect phrases are lower than single word aspect tokens in both domains. This shows that the sequential tagging is indeed a tough task to do. 
Lack of sufficient training data for aspect phrases is also one of the reasons to get lower accuracy in this case.", "In particular, we got 79.20% and 83.55% F-score to detect aspect phrases in laptop and restaurant domain respectively. We observed some cases where only 1 term in an aspect phrase is detected as aspect term. In those cases Rule 4 of the LPs helped to correctly detect the aspect phrases. We also carried out experiments on the aspect dataset originally developed by BIBREF66 . This is to date the largest comprehensive aspect-based sentiment analysis dataset. The best accuracy on this dataset was obtained when word embedding features were used together with the POS features. This shows that while the word embedding features are most useful, the POS feature also plays a major role in aspect extraction.", "As on the SemEval dataset, LPs together with CNN increased the overall accuracy. However, LPs have performed much better on this dataset than on the SemEval dataset. This supports the observation made previously BIBREF66 that on this dataset LPs are more useful. One of the possible reasons for this is that most of the sentences in this dataset are grammatically correct and contain only one aspect term. Here we combined LPs and a CNN to achieve even better results than the approach of by BIBREF66 based only on LPs. Our experimental results showed that this ensemble algorithm (CNN+LP) can better understand the semantics of the text than BIBREF66 's pure LP-based algorithm, and thus extracts more salient aspect terms. Table TABREF69 and Figure FIGREF68 shows the performance and comparisons of different frameworks.", "Figure FIGREF70 compares the proposed method with the state of the art. We believe that there are two key reasons for our framework to outperform state-of-the-art approaches. First, a deep CNN, which is non-linear in nature, better fits the data than linear models such as CRF. Second, the pre-trained word embedding features help our framework to outperform state-of-the-art methods that do not use word embeddings. The main advantage of our framework is that it does not need any feature engineering. This minimizes development cost and time." ], [ "We use the MPQA corpus BIBREF20 , a collection of 535 English news articles from a variety of sources manually annotated with subjectivity flag. From the total of 9,700 sentences in this corpus, 55 INLINEFORM0 of the sentences are labeled as subjective while the rest are objective. We also compare with the Movie Review (MR) benchmark dataset BIBREF28 , that contains 5000 subjective movie review snippets from Rotten Tomatoes website and another 5000 objective sentences from plot summaries available from the Internet Movies Database. All sentences are at least ten words long and drawn from reviews or plot summaries of movies released post 2001.", "The data pre-processing included removing top 50 stop words and punctuation marks from the sentences. Next, we used a POS tagger to determine the part-of-speech for each word in a sentence. Subjectivity clues dataset BIBREF19 contains a list of over 8,000 clues identified manually as well as automatically using both annotated and unannotated data. Each clue is a word and the corresponding part of speech.", "The frequency of each clue was computed in both subjective and objective sentences of the MPQA corpus. Here we consider the top 50 clue words with highest frequency of occurrence in the subjective sentences. 
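This clue-selection step amounts to simple frequency counting; a toy sketch follows, in which the sentences and the clue list are placeholders rather than the actual MPQA data.

```python
from collections import Counter

def top_clues(subjective_sentences, clue_words, k=50):
    """Count clue-word occurrences in subjective sentences and keep the top k."""
    counts = Counter()
    clue_set = set(clue_words)
    for sentence in subjective_sentences:
        for token in sentence.lower().split():
            if token in clue_set:
                counts[token] += 1
    return [word for word, _ in counts.most_common(k)]

# Toy usage
clues = ["great", "terrible", "love", "hate"]
sents = ["I love this phone", "Terrible battery , I hate it"]
selected = top_clues(sents, clues, k=2)
```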
We also extracted 25 top concepts containing the top clue words using the method described in BIBREF11 . The CNN is collectively pre-trained with both subjective and objective sentences that contain high ML word and concept motifs. The word vectors are initialized using the LBL model and a context window of size 5 and 30 features. Each sentence is wrapped to a window of 50 words to reduce the number of parameters and hence the over-fitting of the model. A CNN with three hidden layers of 100 neurons and kernels of size INLINEFORM0 is used. The output layer corresponds to two neurons for each class of sentiments.", "We used 10 fold cross validation to determine the accuracy of classifying new sentences using the trained CNN classifier. A comparison is done with classifying the time series data using baseline classifiers such as Naive Bayes SVM (NBSVM) BIBREF60 , Multichannel CNN (CNN-MC) BIBREF61 , Subjectivity Word Sense Disambiguation (SWSD) BIBREF62 and Unsupervised-WSD (UWSD) BIBREF63 . Table TABREF41 shows that BCDBN outperforms previous methods by INLINEFORM0 in accuracy on both datasets. Almost INLINEFORM1 improvement is observed over NBSVM on the movie review dataset. In addition, we only consider word vectors of 30 features instead of the 300 features used by CNN-MC and hence are 10 times faster." ], [ "Subjectivity detection can prevent the sentiment classifier from considering irrelevant or potentially misleading text. This is particularly useful in multi-perspective question answering summarization systems that need to summarize different opinions and perspectives and present multiple answers to the user based on opinions derived from different sources. It is also useful to analysts in government, commercial and political domains who need to determine the response of the people to different crisis events. After filtering of subjective sentences, aspect mining can be used to provide clearer visibility into the emotions of people by connecting different polarities to the corresponding target attribute." ], [ "In this chapter, we tackled the two basic tasks of sentiment analysis in social media: subjectivity detection and aspect extraction. We used an ensemble of deep learning and linguistics to collect opinionated information and, hence, perform fine-grained (aspect-based) sentiment analysis. In particular, we proposed a Bayesian deep convolutional belief network to classify a sequence of sentences as either subjective or objective and used a convolutional neural network for aspect extraction. Coupled with some linguistic rules, this ensemble approach gave a significant improvement in performance over state-of-the-art techniques and paved the way for a more multifaceted (i.e., covering more NLP subtasks) and multidisciplinary (i.e., integrating techniques from linguistics and other disciplines) approach to the complex problem of sentiment analysis." ], [ "In the future we will try to visualize the hierarchies of features learned via deep learning. We can also consider fusion with other modalities such as YouTube videos." ], [ "This work was funded by Complexity Institute, Nanyang Technological University." ], [ "Sentiment Quantification of User-Generated Content, 110170 Semantic Sentiment Analysis of Twitter Data, 110167 Twitter Microblog Sentiment Analysis, 265" ] ] }
{ "question": [ "How are aspects identified in aspect extraction?" ], "question_id": [ "3bf429633ecbbfec3d7ffbcfa61fa90440cc918b" ], "nlp_background": [ "infinity" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "apply an ensemble of deep learning and linguistics t" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Most of the previous works in aspect term extraction have either used conditional random fields (CRFs) BIBREF9 , BIBREF10 or linguistic patterns BIBREF7 , BIBREF11 . Both of these approaches have their own limitations: CRF is a linear model, so it needs a large number of features to work well; linguistic patterns need to be crafted by hand, and they crucially depend on the grammatical accuracy of the sentences. In this chapter, we apply an ensemble of deep learning and linguistics to tackle both the problem of aspect extraction and subjectivity detection." ], "highlighted_evidence": [ "In this chapter, we apply an ensemble of deep learning and linguistics to tackle both the problem of aspect extraction and subjectivity detection." ] } ], "annotation_id": [ "05cb5f52929b675b470e6ce835198557b654e4e7" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Fig. 1 State space of different Bayesian models", "Fig. 2 State space of Bayesian CNN where the input layer is pre-trained using a dynamic GBN", "Table 2 SemEval Data used for Evaluation", "Fig. 3 Comparison of the performance with the state of the art.", "Table 3 Random features vs. Google Embeddings vs. Amazon Embeddings on the SemEval 2014 dataset", "Table 4 Feature analysis for the CNN classifier", "Table 5 Impact of Sentic Patterns on the SemEval 2014 dataset", "Table 6 Comparison with the state of the art. ZW stands for [68]; LP stands for Sentic Patterns.", "Fig. 4 Comparison between the performance of CNN, CNN-LP and LP.", "Table 7 Impact of the POS feature on the dataset by [52]", "Fig. 5 Comparison between the performance of CNN, CNN-LP and LP.", "Table 8 Impact of Sentic Patterns on the dataset by [52]", "Fig. 6 Comparison of the performance with the state of the art on Bing Liu dataset." ], "file": [ "8-Figure1-1.png", "13-Figure2-1.png", "18-Table2-1.png", "20-Figure3-1.png", "20-Table3-1.png", "20-Table4-1.png", "21-Table5-1.png", "21-Table6-1.png", "21-Figure4-1.png", "22-Table7-1.png", "22-Figure5-1.png", "23-Table8-1.png", "23-Figure6-1.png" ] }
1701.02877
Generalisation in Named Entity Recognition: A Quantitative Analysis
Named Entity Recognition (NER) is a key NLP task, which is all the more challenging on Web and user-generated content with their diverse and continuously changing language. This paper aims to quantify how this diversity impacts state-of-the-art NER methods, by measuring named entity (NE) and context variability, feature sparsity, and their effects on precision and recall. Our findings indicate that NER approaches struggle to generalise in diverse genres with limited training data. Unseen NEs, in particular, play an important role; they have a higher incidence in diverse genres such as social media than in more regular genres such as newswire. Coupled with a higher incidence of unseen features more generally and the lack of large training corpora, this leads to significantly lower F1 scores for diverse genres as compared to more regular ones. We also find that leading systems rely heavily on surface forms found in training data, having problems generalising beyond these, and we offer explanations for this observation.
Named Entity Recognition (NER) is a key NLP task, which is all the more challenging on Web and user-generated content with their diverse and continuously changing language. This paper aims to quantify how this diversity impacts state-of-the-art NER methods, by measuring named entity (NE) and context variability, feature sparsity, and their effects on precision and recall. Our findings indicate that NER approaches struggle to generalise in diverse genres with limited training data. Unseen NEs, in particular, play an important role; they have a higher incidence in diverse genres such as social media than in more regular genres such as newswire. Coupled with a higher incidence of unseen features more generally and the lack of large training corpora, this leads to significantly lower F1 scores for diverse genres as compared to more regular ones. We also find that leading systems rely heavily on surface forms found in training data, having problems generalising beyond these, and we offer explanations for this observation.
{ "section_name": [ "Introduction", "Datasets", "NER Models and Features", "RQ1: NER performance with Different Approaches", "RQ2: NER performance in Different Genres", "RQ3: Impact of NE Diversity", "RQ4: Unseen Features, unseen NEs and NER performance", "RQ5: Out-Of-Domain NER Performance and Memorisation", "Conclusion", "Acknowledgement" ], "paragraphs": [ [ "Named entity recognition and classification (NERC, short NER), the task of recognising and assigning a class to mentions of proper names (named entities, NEs) in text, has attracted many years of research BIBREF0 , BIBREF1 , analyses BIBREF2 , starting from the first MUC challenge in 1995 BIBREF3 . Recognising entities is key to many applications, including text summarisation BIBREF4 , search BIBREF5 , the semantic web BIBREF6 , topic modelling BIBREF7 , and machine translation BIBREF8 , BIBREF9 .", "As NER is being applied to increasingly diverse and challenging text genres BIBREF10 , BIBREF11 , BIBREF12 , this has lead to a noisier, sparser feature space, which in turn requires regularisation BIBREF13 and the avoidance of overfitting. This has been the case even for large corpora all of the same genre and with the same entity classification scheme, such as ACE BIBREF14 . Recall, in particular, has been a persistent problem, as named entities often seem to have unusual surface forms, e.g. unusual character sequences for the given language (e.g. Szeged in an English-language document) or words that individually are typically not NEs, unless they are combined together (e.g. the White House).", "Indeed, the move from ACE and MUC to broader kinds of corpora has presented existing NER systems and resources with a great deal of difficulty BIBREF15 , which some researchers have tried to address through domain adaptation, specifically with entity recognition in mind BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, more recent performance comparisons of NER methods over different corpora showed that older tools tend to simply fail to adapt, even when given a fair amount of in-domain data and resources BIBREF21 , BIBREF11 . Simultaneously, the value of NER in non-newswire data BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 has rocketed: for example, social media now provides us with a sample of all human discourse, unmolested by editors, publishing guidelines and the like, and all in digital format – leading to, for example, whole new fields of research opening in computational social science BIBREF26 , BIBREF27 , BIBREF28 .", "The prevailing assumption has been that this lower NER performance is due to domain differences arising from using newswire (NW) as training data, as well as from the irregular, noisy nature of new media (e.g. BIBREF21 ). Existing studies BIBREF11 further suggest that named entity diversity, discrepancy between named entities in the training set and the test set (entity drift over time in particular), and diverse context, are the likely reasons behind the significantly lower NER performance on social media corpora, as compared to newswire.", "No prior studies, however, have investigated these hypotheses quantitatively. 
For example, it is not yet established whether this performance drop is really due to a higher proportion of unseen NEs in the social media, or is it instead due to NEs being situated in different kinds of linguistic context.", "Accordingly, the contributions of this paper lie in investigating the following open research questions:", "In particular, the paper carries out a comparative analyses of the performance of several different approaches to statistical NER over multiple text genres, with varying NE and lexical diversity. In line with prior analyses of NER performance BIBREF2 , BIBREF11 , we carry out corpus analysis and introduce briefly the NER methods used for experimentation. Unlike prior efforts, however, our main objectives are to uncover the impact of NE diversity and context diversity on performance (measured primarily by F1 score), and also to study the relationship between OOV NEs and features and F1. See Section \"Experiments\" for details.", "To ensure representativeness and comprehensiveness, our experimental findings are based on key benchmark NER corpora spanning multiple genres, time periods, and corpus annotation methodologies and guidelines. As detailed in Section \"Datasets\" , the corpora studied are OntoNotes BIBREF29 , ACE BIBREF30 , MUC 7 BIBREF31 , the Ritter NER corpus BIBREF21 , the MSM 2013 corpus BIBREF32 , and the UMBC Twitter corpus BIBREF33 . To eliminate potential bias from the choice of statistical NER approach, experiments are carried out with three differently-principled NER approaches, namely Stanford NER BIBREF34 , SENNA BIBREF35 and CRFSuite BIBREF36 (see Section \"NER Models and Features\" for details)." ], [ "Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details). These datasets were chosen such that they have been annotated with the same or very similar entity classes, in particular, names of people, locations, and organisations. Thus corpora including only domain-specific entities (e.g. biomedical corpora) were excluded. The choice of corpora was also motivated by their chronological age; we wanted to ensure a good temporal spread, in order to study possible effects of entity drift over time.", "A note is required about terminology. This paper refers to text genre and also text domain. These are two dimensions by which a document or corpus can be described. Genre here accounts the general characteristics of the text, measurable with things like register, tone, reading ease, sentence length, vocabulary and so on. Domain describes the dominant subject matter of text, which might give specialised vocabulary or specific, unusal word senses. For example, “broadcast news\" is a genre, describing the manner of use of language, whereas “financial text\" or “popular culture\" are domains, describing the topic. One notable exception to this terminology is social media, which tends to be a blend of myriad domains and genres, with huge variation in both these dimensions BIBREF38 , BIBREF39 ; for simplicity, we also refer to this as a genre here.", "In chronological order, the first corpus included here is MUC 7, which is the last of the MUC challenges BIBREF31 . 
This is an important corpus, since the Message Understanding Conference (MUC) was the first one to introduce the NER task in 1995 BIBREF3 , with focus on recognising persons, locations and organisations in newswire text.", "A subsequent evaluation campaign was the CoNLL 2003 NER shared task BIBREF40 , which created gold standard data for newswire in Spanish, Dutch, English and German. The corpus of this evaluation effort is now one of the most popular gold standards for NER, with new NER approaches and methods often reporting performance on that.", "Later evaluation campaigns began addressing NER for genres other than newswire, specifically ACE BIBREF30 and OntoNotes BIBREF29 . Both of those contain subcorpora in several genres, namely newswire, broadcast news, broadcast conversation, weblogs, and conversational telephone speech. ACE, in addition, contains a subcorpus with usenet newsgroups. Like CoNLL 2003, the OntoNotes corpus is also a popular benchmark dataset for NER. The languages covered are English, Arabic and Chinese. A further difference between the ACE and OntoNotes corpora on one hand, and CoNLL and MUC on the other, is that they contain annotations not only for NER, but also for other tasks such as coreference resolution, relation and event extraction and word sense disambiguation. In this paper, however, we restrict ourselves purely to the English NER annotations, for consistency across datasets. The ACE corpus contains HEAD as well as EXTENT annotations for NE spans. For our experiments we use the EXTENT tags.", "With the emergence of social media, studying NER performance on this genre gained momentum. So far, there have been no big evaluation efforts, such as ACE and OntoNotes, resulting in substantial amounts of gold standard data. Instead, benchmark corpora were created as part of smaller challenges or individual projects. The first such corpus is the UMBC corpus for Twitter NER BIBREF33 , where researchers used crowdsourcing to obtain annotations for persons, locations and organisations. A further Twitter NER corpus was created by BIBREF21 , which, in contrast to other corpora, contains more fine-grained classes defined by the Freebase schema BIBREF41 . Next, the Making Sense of Microposts initiative BIBREF32 (MSM) provides single annotated data for named entity recognition on Twitter for persons, locations, organisations and miscellaneous. MSM initiatives from 2014 onwards in addition feature a named entity linking task, but since we only focus on NER here, we use the 2013 corpus.", "These corpora are diverse not only in terms of genres and time periods covered, but also in terms of NE classes and their definitions. In particular, the ACE and OntoNotes corpora try to model entity metonymy by introducing facilities and geo-political entities (GPEs). Since the rest of the benchmark datasets do not make this distinction, metonymous entities are mapped to a more common entity class (see below).", "In order to ensure consistency across corpora, only Person (PER), Location (LOC) and Organisation (ORG) are used in our experiments, and other NE classes are mapped to O (no NE). For the Ritter corpus, the 10 entity classes are collapsed to three as in BIBREF21 . 
For the ACE and OntoNotes corpora, the following mapping is used: PERSON $\\rightarrow $ PER; LOCATION, FACILITY, GPE $\\rightarrow $ LOC; ORGANIZATION $\\rightarrow $ ORG; all other classes $\\rightarrow $ O.", "Tokens are annotated with BIO sequence tags, indicating that they are the beginning (B) or inside (I) of NE mentions, or outside of NE mentions (O). For the Ritter and ACE 2005 corpora, separate training and test corpora are not publicly available, so we randomly sample 1/3 for testing and use the rest for training. The resulting training and testing data sizes measured in number of NEs are listed in Table 2 . Separate models are then trained on the training parts of each corpus and evaluated on the development (if available) and test parts of the same corpus. If development parts are available, as they are for CoNLL (CoNLL Test A) and MUC (MUC 7 Dev), they are not merged with the training corpora for testing, as it was permitted to do in the context of those evaluation challenges.", "[t]", " P, R and F1 of NERC with different models evaluated on different testing corpora, trained on corpora normalised by size", "Table 1 shows which genres the different corpora belong to, the number of NEs and the proportions of NE classes per corpus. Sizes of NER corpora have increased over time, from MUC to OntoNotes.", "Further, the class distribution varies between corpora: while the CoNLL corpus is very balanced and contains about equal numbers of PER, LOC and ORG NEs, other corpora are not. The least balanced corpus is the MSM 2013 Test corpus, which contains 98 LOC NEs, but 1110 PER NEs. This makes it difficult to compare NER performance here, since performance partly depends on training data size. Since comparing NER performance as such is not the goal of this paper, we will illustrate the impact of training data size by using learning curves in the next section; illustrate NERC performance on trained corpora normalised by size in Table UID9 ; and then only use the original training data size for subsequent experiments.", "In order to compare corpus diversity across genres, we measure NE and token/type diversity (following e.g. BIBREF2 ). Note that types are the unique tokens, so the ratio can be understood as ratio of total tokens to unique ones. Table 4 shows the ratios between the number of NEs and the number of unique NEs per corpus, while Table 5 reports the token/type ratios. The lower those ratios are, the more diverse a corpus is. While token/type ratios also include tokens which are NEs, they are a good measure of broader linguistic diversity.", "Aside from these metrics, there are other factors which contribute to corpus diversity, including how big a corpus is and how well sampled it is, e.g. if a corpus is only about one story, it should not be surprising to see a high token/type ratio. Therefore, by experimenting on multiple corpora, from different genres and created through different methodologies, we aim to encompass these other aspects of corpus diversity.", "Since the original NE and token/type ratios do not account for corpus size, Tables 5 and 4 present also the normalised ratios. For those, a number of tokens equivalent to those in the corpus, e.g. 7037 for UMBC (Table 5 ) or, respectively, a number of NEs equivalent to those in the corpus (506 for UMBC) are selected (Table 4 ).", "An easy choice of sampling method would be to sample tokens and NEs randomly. However, this would not reflect the composition of corpora appropriately. 
Corpora consist of several documents, tweets or blog entries, which are likely to repeat the words or NEs since they are about one story. The difference between bigger and smaller corpora is then that bigger corpora consist of more of those documents, tweets, blog entries, interviews, etc. Therefore, when we downsample, we take the first $n$ tokens for the token/type ratios or the first $n$ NEs for the NEs/Unique NEs ratios.", "Looking at the normalised diversity metrics, the lowest NE/Unique NE ratios $<= 1.5$ (in bold, Table 4 ) are observed on the Twitter and CoNLL Test corpora. Seeing this for Twitter is not surprising since one would expect noise in social media text (e.g. spelling variations or mistakes) to also have an impact on how often the same NEs are seen. Observing this in the latter, though, is less intuitive and suggests that the CoNLL corpora are well balanced in terms of stories. Low NE/Unique ratios ( $<= 1.7$ ) can also be observed for ACE WL, ACE UN and OntoNotes TC. Similar to social media text, content from weblogs, usenet dicussions and telephone conversations also contains a larger amount of noise compared to the traditionally-studied newswire genre, so this is not a surprising result. Corpora bearing high NE/Unique NE ratios ( $> 2.5$ ) are ACE CTS, OntoNotes MZ and OntoNotes BN. These results are also not surprising. The telephone conversations in ACE CTS are all about the same story, and newswire and broadcast news tend to contain longer stories (reducing variety in any fixed-size set) and are more regular due to editing.", "The token/type ratios reflect similar trends (Table 5 ). Low token/type ratios $<= 2.8$ (in bold) are observed for the Twitter corpora (Ritter and UMBC), as well as for the CoNLL Test corpus. Token/type ratios are also low ( $<= 3.2$ ) for CoNLL Train and ACE WL. Interestingly, ACE UN and MSM Train and Test do not have low token/type ratios although they have low NE/Unique ratios. That is, many diverse persons, locations and organisations are mentioned in those corpora, but similar context vocabulary is used. Token/type ratios are high ( $>= 4.4$ ) for MUC7 Dev, ACE BC, ACE CTS, ACE UN and OntoNotes TC. Telephone conversations (TC) having high token/type ratios can be attributed to the high amount filler words (e.g. “uh”, “you know”). NE corpora are generally expected to have regular language use – for ACE, at least, in this instance.", "Furthermore, it is worth pointing out that, especially for the larger corpora (e.g. OntoNotes NW), size normalisation makes a big difference. The normalised NE/Unique NE ratios drop by almost a half compared to the un-normalised ratios, and normalised Token/Type ratios drop by up to 85%. This strengthens our argument for size normalisation and also poses the question of low NERC performance for diverse genres being mostly due to the lack of large training corpora. This is examined in Section \"RQ2: NER performance in Different Genres\" .", "Lastly, Table 6 reports tag density (percentage of tokens tagged as part of a NE), which is another useful metric of corpus diversity that can be interpreted as the information density of a corpus. What can be observed here is that the NW corpora have the highest tag density and generally tend to have higher tag density than corpora of other genres; that is, newswire bears a lot of entities. Corpora with especially low tag density $<= 0.06$ (in bold) are the TC corpora, Ritter, OntoNotes WB, ACE UN, ACE BN and ACE BC. 
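The three diversity measures used in this section — NEs per unique NE, tokens per type, and tag density, each optionally computed over a fixed-size prefix for normalisation — could be computed roughly as follows; corpus reading and tokenisation are assumed to happen elsewhere.

```python
def diversity_metrics(tokens, bio_tags, ne_surface_forms,
                      norm_tokens=None, norm_nes=None):
    """Corpus diversity measures (sketch).

    tokens:           corpus tokens in document order.
    bio_tags:         parallel BIO tags ('B-PER', 'I-LOC', 'O', ...).
    ne_surface_forms: NE mention strings in document order.
    norm_tokens / norm_nes: if given, keep only the first n items,
        i.e. the prefix-based size normalisation described above.
    """
    if norm_tokens is not None:
        tokens, bio_tags = tokens[:norm_tokens], bio_tags[:norm_tokens]
    if norm_nes is not None:
        ne_surface_forms = ne_surface_forms[:norm_nes]
    token_type_ratio = len(tokens) / len(set(tokens))
    ne_unique_ratio = len(ne_surface_forms) / len(set(ne_surface_forms))
    tag_density = sum(t != "O" for t in bio_tags) / len(bio_tags)
    return token_type_ratio, ne_unique_ratio, tag_density
```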
As already mentioned, conversational corpora, to which ACE BC also belong, tend to have many filler words, thus it is not surprising that they have a low tag density. There are only minor differences between the tag density and the normalised tag density, since corpus size as such does not impact tag density." ], [ "To avoid system-specific bias in our experiments, three widely-used supervised statistical approaches to NER are included: Stanford NER, SENNA, and CRFSuite. These systems each have contrasting notable attributes.", "Stanford NER BIBREF34 is the most popular of the three, deployed widely in both research and commerce. The system has been developed in terms of both generalising the underlying technology and also specific additions for certain languages. The majority of openly-available additions to Stanford NER, in terms of models, gazetteers, prefix/suffix handling and so on, have been created for newswire-style text. Named entity recognition and classification is modelled as a sequence labelling task with first-order conditional random fields (CRFs) BIBREF43 .", "SENNA BIBREF35 is a more recent system for named entity extraction and other NLP tasks. Using word representations and deep learning with deep convolutional neural networks, the general principle for SENNA is to avoid task-specific engineering while also doing well on multiple benchmarks. The approach taken to fit these desiderata is to use representations induced from large unlabelled datasets, including LM2 (introduced in the paper itself) and Brown clusters BIBREF44 , BIBREF45 . The outcome is a flexible system that is readily adaptable, given training data. Although the system is more flexible in general, it relies on learning language models from unlabelled data, which might take a long time to gather and retrain. For the setup in BIBREF35 language models are trained for seven weeks on the English Wikipedia, Reuters RCV1 BIBREF46 and parts of the Wall Street Journal, and results are reported over the CoNLL 2003 NER dataset. Reuters RCV1 is chosen as unlabelled data because the English CoNLL 2003 corpus is created from the Reuters RCV1 corpus. For this paper, we use the original language models distributed with SENNA and evaluate SENNA with the DeepNL framework BIBREF47 . As such, it is to some degree also biased towards the CoNLL 2003 benchmark data.", "Finally, we use the classical NER approach from CRFsuite BIBREF36 , which also uses first-order CRFs. This frames NER as a structured sequence prediction task, using features derived directly from the training text. Unlike the other systems, no external knowledge (e.g. gazetteers and unsupervised representations) are used. This provides a strong basic supervised system, and – unlike Stanford NER and SENNA – has not been tuned for any particular domain, giving potential to reveal more challenging domains without any intrinsic bias.", "We use the feature extractors natively distributed with the NER frameworks. For Stanford NER we use the feature set “chris2009” without distributional similarity, which has been tuned for the CoNLL 2003 data. This feature was tuned to handle OOV words through word shape, i.e. capitalisation of constituent characters. The goal is to reduce feature sparsity – the basic problem behind OOV named entities – by reducing the complexity of word shapes for long words, while retaining word shape resolution for shorter words. In addition, word clusters, neighbouring n-grams, label sequences and quasi-Newton minima search are included. 
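A common realisation of such a word-shape feature — not necessarily the exact variant used in the "chris2009" feature set — maps characters to classes and collapses repeated classes for longer words, as sketched here for illustration.

```python
def word_shape(token, max_full_length=4):
    """Character-class word shape: X=upper, x=lower, d=digit, other kept as-is.
    Short tokens keep the full shape (e.g. 'Iraq' -> 'Xxxx'); longer tokens
    collapse repeated classes (e.g. 'Szeged' -> 'Xx', 'MUC-7' -> 'X-d').
    The cut-off and collapsing rule are illustrative assumptions."""
    shape = []
    for ch in token:
        if ch.isupper():
            shape.append("X")
        elif ch.islower():
            shape.append("x")
        elif ch.isdigit():
            shape.append("d")
        else:
            shape.append(ch)
    if len(shape) <= max_full_length:
        return "".join(shape)
    collapsed = [shape[0]]
    for c in shape[1:]:
        if c != collapsed[-1]:    # drop consecutive repeats of the same class
            collapsed.append(c)
    return "".join(collapsed)
```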
SENNA uses word embedding features and gazetteer features; for the training configuration see https://github.com/attardi/deepnl#benchmarks. Finally, for CRFSuite, we use the provided feature extractor without POS or chunking features, which leaves unigram and bigram word features of the mention and in a window of 2 to the left and the right of the mention, character shape, prefixes and suffixes of tokens.", "These systems are compared against a simple surface form memorisation tagger. The memorisation baseline picks the most frequent NE label for each token sequence as observed in the training corpus. There are two kinds of ambiguity: one is overlapping sequences, e.g. if both “New York City” and “New York” are memorised as a location. In that case the longest-matching sequence is labelled with the corresponding NE class. The second, class ambiguity, occurs when the same textual label refers to different NE classes, e.g. “Google” could either refer to the name of a company, in which case it would be labelled as ORG, or to the company's search engine, which would be labelled as O (no NE)." ], [ "[t]", " P, R and F1 of NERC with different models trained on original corpora", "[t]", " F1 per NE type with different models trained on original corpora", "Our first research question is how NERC performance differs for corpora between approaches. In order to answer this, Precision (P), Recall (R) and F1 metrics are reported on size-normalised corpora (Table UID9 ) and original corpora (Tables \"RQ1: NER performance with Different Approaches\" and \"RQ1: NER performance with Different Approaches\" ). The reason for size normalisation is to make results comparable across corpora. For size normalisation, the training corpora are downsampled to include the same number of NEs as the smallest corpus, UMBC. For that, sentences are selected from the beginning of the train part of the corpora so that they include the same number of NEs as UMBC. Other ways of downsampling the corpora would be to select the first $n$ sentences or the first $n$ tokens, where $n$ is the number of sentences in the smallest corpus. The reason that the number of NEs, which represent the number of positive training examples, is chosen for downsampling the corpora is that the number of positive training examples have a much bigger impact on learning than the number of negative training examples. For instance, BIBREF48 , among others, study topic classification performance for small corpora and sample from the Reuters corpus. They find that adding more negative training data gives little to no improvement, whereas adding positive examples drastically improves performance.", "Table UID9 shows results with size normalised precision (P), recall (R), and F1-Score (F1). The five lowest P, R and F1 values per method (CRFSuite, Stanford NER, SENNA) are in bold to highlight underperformers. Results for all corpora are summed with macro average.", "Comparing the different methods, the highest F1 results are achieved with SENNA, followed by Stanford NER and CRFSuite. SENNA has a balanced P and R, which can be explained by the use of word embeddings as features, which help with the unseen word problem. For Stanford NER as well as CRFSuite, which do not make use of embeddings, recall is about half of precision. These findings are in line with other work reporting the usefulness of word embeddings and deep learning for a variety of NLP tasks and domains BIBREF49 , BIBREF50 , BIBREF51 . 
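Returning briefly to the surface-form memorisation baseline introduced above, a minimal sketch of it is given below; the most-frequent-label choice and longest-match preference follow the earlier description, while the data format (sentences as token/tag list pairs) is an assumption.

```python
from collections import Counter, defaultdict

def train_memoriser(sentences):
    """Map each NE token sequence seen in training to its most frequent class."""
    counts = defaultdict(Counter)
    for tokens, tags in sentences:
        i = 0
        while i < len(tokens):
            if tags[i].startswith("B-"):
                j = i + 1
                while j < len(tokens) and tags[j].startswith("I-"):
                    j += 1
                counts[tuple(tokens[i:j])][tags[i][2:]] += 1
                i = j
            else:
                i += 1
    return {seq: c.most_common(1)[0][0] for seq, c in counts.items()}

def tag_memoriser(tokens, memory):
    """Label test tokens, preferring the longest memorised match at each position."""
    max_len = max((len(s) for s in memory), default=1)
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            seq = tuple(tokens[i:i + n])
            if seq in memory:
                tags[i] = "B-" + memory[seq]
                for k in range(i + 1, i + n):
                    tags[k] = "I-" + memory[seq]
                i += n
                break
        else:
            i += 1
    return tags
```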
With respect to individual corpora, the ones where SENNA outperforms other methods by a large margin ( $>=$ 13 points in F1) are CoNLL Test A, ACE CTS and OntoNotes TC. The first success can be attributed to being from the same the domain SENNA was originally tuned for. The second is more unexpected and could be due to those corpora containing a disproportional amount of PER and LOC NEs (which are easier to tag correctly) compared to ORG NEs, as can be seen in Table \"RQ1: NER performance with Different Approaches\" , where F1 of NERC methods is reported on the original training data.", "Our analysis of CRFSuite here is that it is less tuned for NW corpora and might therefore have a more balanced performance across genres does not hold. Results with CRFSuite for every corpus are worse than the results for that corpus with Stanford NER, which is also CRF-based.", "To summarise, our findings are:", "[noitemsep]", "F1 is highest with SENNA, followed by Stanford NER and CRFSuite", "SENNA outperforms other methods by a large margin (e.g. $>=$ 13 points in F1) for CoNLL Test A, ACE CTS and OntoNotes TC", "Our hypothesis that CRFSuite is less tuned for NW corpora and will therefore have a more balanced performance across genres does not hold, as results for CRFSuite for every corpus are worse than with Stanford NER" ], [ "Our second research question is whether existing NER approaches generalise well over corpora in different genres. To do this we study again Precision (P), Recall (R) and F1 metrics on size-normalised corpora (Table UID9 ), on original corpora (Tables \"RQ1: NER performance with Different Approaches\" and \"RQ1: NER performance with Different Approaches\" ), and we further test performance per genre in a separate table (Table 3 ).", "F1 scores over size-normalised corpora vary widely (Table UID9 ). For example, the SENNA scores range from 9.35% F1 (ACE UN) to 71.48% (CoNLL Test A). Lowest results are consistently observed for the ACE subcorpora, UMBC, and OntoNotes BC and WB. The ACE corpora are large and so may be more prone to non-uniformities emerging during downsampling; they also have special rules for some kinds of organisation which can skew results (as described in Section UID9 ). The highest results are on the CoNLL Test A corpus, OntoNotes BN and MUC 7 Dev. This moderately supports our hypothesis that NER systems perform better on NW than on other genres, probably due to extra fitting from many researchers using them as benchmarks for tuning their approaches. Looking at the Twitter (TWI) corpora present the most challenge due to increased diversity, the trends are unstable. Although results for UMBC are among the lowest, results for MSM 2013 and Ritter are in the same range or even higher than those on NW datasets. This begs the question whether low results for Twitter corpora reported previously were due to the lack of sufficient in-genre training data.", "Comparing results on normalised to non-normalised data, Twitter results are lower than those for most OntoNotes corpora and CoNLL test corpora, mostly due to low recall. Other difficult corpora having low performance are ACE UN and WEB corpora. We further explicitly examine results on size normalised corpora grouped by corpus type, shown in Table 3 . It becomes clear that, on average, newswire corpora and OntoNotes MZ are the easiest corpora and ACE UN, WEB and TWI are harder. 
This confirms our hypothesis that social media and Web corpora are challenging for NERC.", "The CoNLL results, on the other hand, are the highest across all corpora irrespective of the NERC method. What is very interesting to see is that they are much higher than the results on the biggest training corpus, OntoNotes NW. For instance, SENNA has an F1 of 78.04 on OntoNotes, compared to an F1 of 92.39 and 86.44 for CoNLL Test A and Test B respectively. So even though OntoNotes NW is more than twice the size of CoNLL in terms of NEs (see Table 4 ), NERC performance is much higher on CoNLL. NERC performance with respect to training corpus size is represented in Figure 1 . The latter figure confirms that although there is some correlation between corpus size and F1, the variance between results on comparably sized corpora is big. This strengthens our argument that there is a need for experimental studies, such as those reported below, to find out what, apart from corpus size, impacts NERC performance.", "Another set of results presented in Table \"RQ1: NER performance with Different Approaches\" are those of the simple NERC memorisation baseline. It can be observed that corpora with a low F1 for NERC methods, such as UMBC and ACE UN, also have a low memorisation performance. Memorisation is discussed in more depth in Section \"RQ5: Out-Of-Domain NER Performance and Memorisation\" .", "When NERC results are compared to the corpus diversity statistics, i.e. NE/Unique NE ratios (Table 4 ), token/type ratios (Table 5 ), and tag density (Table 6 ), the strongest predictor for F1 is tag density, as can be evidenced by the R correlation values between the ratios and F1 scores with the Stanford NER system, shown in the respective tables.", "There is a positive correlation between high F1 and high tag density (R of 0.57 and R of 0.62 with normalised tag density), a weak positive correlation for NE/unique ratios (R of 0.20 and R of 0.15 for normalised ratio), whereas for token/type ratios, no such clear correlation can be observed (R of 0.25 and R of -0.07 for normalised ratio).", "However, tag density is also not an absolute predictor for NERC performance. While NW corpora have both high NERC performance and high tag density, this high density is not necessarily an indicator of high performance. For example, systems might not find high tag density corpora of other genres necessarily so easy.", "One factor that can explain the difference in genre performance between e.g. newswire and social media is entity drift – the change in observed entity terms over time. In this case, it is evident from the differing surface forms and contexts for a given entity class. For example, the concept of “location\" that NER systems try to learn might be frequently represented in English newswire from 1991 with terms like Iraq or Kuwait, but more with Atlanta, Bosnia and Kabul in the same language and genre from 1996. Informally, drift on Twitter is often characterised as both high-frequency and high-magnitude; that is, the changes are both rapid and correspond to a large amount of surface form occurrences (e.g. BIBREF12 , BIBREF52 ).", "We examined the impact of drift in newswire and Twitter corpora, taking datasets based in different timeframes. The goal is to gauge how much diversity is due to new entities appearing over time. To do this, we used just the surface lexicalisations of entities as the entity representation. The overlap of surface forms was measured across different corpora of the same genre and language. 
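This overlap measurement can be sketched as the share of entity mention occurrences in one corpus whose surface forms also appear in another; lower-casing and exact string matching are simplifying assumptions here.

```python
def surface_form_overlap(source_entities, target_entities):
    """Proportion of NE occurrences in the target corpus whose surface form
    (here: lower-cased mention text) was observed in the source corpus."""
    seen = {e.lower() for e in source_entities}
    if not target_entities:
        return 0.0
    hits = sum(1 for e in target_entities if e.lower() in seen)
    return hits / len(target_entities)

# Toy usage: how much of a later corpus is covered by an earlier one's mentions
overlap = surface_form_overlap(["Iraq", "Kuwait", "Baghdad"],
                               ["Kuwait", "Atlanta", "Bosnia", "Kabul", "Iraq"])
```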
We used an additional corpus based on recent data – that from the W-NUT 2015 challenge BIBREF25 . Overlap is measured in terms of occurrences, rather than distinct surface forms, so that the magnitude of the drift is shown, rather than the results being skewed by the noisy long tail. Results are given in Table 7 for newswire and Table 8 for Twitter corpora.", "It is evident that the within-class commonalities in surface forms are much higher in newswire than in Twitter. That is to say, observations of entity texts in one newswire corpus are more helpful in labelling other newswire corpora than if the same technique is used to label other Twitter corpora.", "This indicates that drift is lower in newswire than in tweets. Certainly, the proportion of entity mentions in the most recent corpora (the rightmost columns) is consistently low compared to entity forms available in earlier data. This reflects the raised OOV and drift rates found in previous work BIBREF12 , BIBREF53 . Another explanation is that there is higher noise in variation, and that the drift is not longitudinal, but rather general. This is partially addressed by RQ3, which we turn to next, in Section \"RQ3: Impact of NE Diversity\" .", "To summarise, our findings are:", "[noitemsep]", "Overall, F1 scores vary widely across corpora.", "Trends are marked in some genres: on average, newswire corpora and OntoNotes MZ are the easiest corpora and ACE UN, WEB and TWI are the hardest corpora for NER methods to reach good performance on.", "Normalising corpora by size results in noisier data, such as the TWI and WEB corpora, achieving results similar to those on NW corpora.", "Increasing the amount of available in-domain training data will likely result in improved NERC performance.", "There is a strong positive correlation between high F1 and high tag density, a weak positive correlation for NE/unique ratios and no clear correlation between token/type ratios and F1.", "Temporal NE drift is lower in newswire than in tweets.", "The next section will take a closer look at the impact of seen and unseen NEs on NER performance." ], [ "Unseen NEs are those with surface forms present only in the test, but not training data, whereas seen NEs are those also encountered in the training data. As discussed previously, the ratio between those two measures is an indicator of corpus NE diversity.", "Table 9 shows how the number of unseen NEs per test corpus relates to the total number of NEs per corpus. The proportion of unseen forms varies widely by corpus, ranging from 0.351 (ACE NW) to 0.931 (UMBC). As expected, there is a correlation between corpus size and percentage of unseen NEs, i.e. smaller corpora such as MUC and UMBC tend to contain a larger proportion of unseen NEs than bigger corpora such as ACE NW. In addition, similar to the token/type ratios listed in Table 5 , we observe that TWI and WEB corpora have a higher proportion of unseen entities.", "As can be seen from Table \"RQ1: NER performance with Different Approaches\" , corpora with a low percentage of unseen NEs (e.g. CoNLL Test A and OntoNotes NW) tend to have high NERC performance, whereas corpora with a high percentage of unseen NEs (e.g. UMBC) tend to have low NERC performance. This suggests that systems struggle to recognise and classify unseen NEs correctly.", "To check this seen/unseen performance split, we next examine NERC performance for unseen and seen NEs separately; results are given in Table 10 . The “All\" column group represents an averaged performance result. 
What becomes clear from the macro averages is that F1 on unseen NEs is significantly lower than F1 on seen NEs for all three NERC approaches. This is mostly due to recall on unseen NEs being lower than that on seen NEs, and suggests some memorisation and poor generalisation in existing systems. In particular, Stanford NER and CRFSuite have almost 50% lower recall on unseen NEs compared to seen NEs. One outlier is ACE UN, for which the average seen F1 is 1.01 and the average unseen F1 is 1.52, though both are minuscule and the difference negligible.", "Of the three approaches, SENNA exhibits the narrowest F1 difference between seen and unseen NEs. In fact, it performs below Stanford NER for seen NEs on many corpora. This may be because SENNA has only a few features, based on word embeddings, which reduces feature sparsity; intuitively, the simplicity of the representation is likely to help with unseen NEs, at the cost of slightly reduced performance on seen NEs through slower fitting. Although SENNA appears to be better at generalising than Stanford NER and our CRFSuite baseline, the difference between its performance on seen NEs and unseen NEs is still noticeable. This is 21.77 for SENNA (macro average), whereas it is 29.41 for CRFSuite and 35.68 for Stanford NER.", "The fact that performance over unseen entities is significantly lower than on seen NEs partly explains what we observed in the previous section; i.e., that corpora with a high proportion of unseen entities, such as the ACE WL corpus, are harder to label than corpora of a similar size from other genres, such as the ACE BC corpus (e.g. systems reach F1 of $\sim $ 30 compared to $\sim $ 50; Table \"RQ1: NER performance with Different Approaches\" ).", "However, even though performance on seen NEs is higher than on unseen, there is also a difference between seen NEs in corpora of different sizes and genres. For instance, performance on seen NEs in ACE WL is 70.86 (averaged over the three different approaches), whereas performance on seen NEs in the less-diverse ACE BC corpus is higher at 76.42; the less diverse data is, on average, easier to tag. Interestingly, average F1 on seen NEs in the Twitter corpora (MSM and Ritter) is around 80, whereas average F1 on the ACE corpora, which are of similar size, is lower, at around 70.", "To summarise, our findings are:", "[noitemsep]", "F1 on unseen NEs is significantly lower than F1 on seen NEs for all three NERC approaches, which is mostly due to recall on unseen NEs being lower than that on seen NEs.", "Performance on seen NEs is significantly and consistently higher than that on unseen NEs in different corpora, with the lower scores mostly attributable to lower recall.", "However, there are still significant differences in labelling seen NEs in different corpora, which means that whether NEs are seen or unseen does not account for all of the difference in F1 between corpora of different genres." ], [ "Having examined the impact of seen/unseen NEs on NERC performance in RQ3, and touched upon surface form drift in RQ2, we now turn our attention towards establishing the impact of seen features, i.e. features appearing in the test set that are observed also in the training set. While feature sparsity can help to explain low F1, it is not a good predictor of performance across methods: sparse features can be good if mixed with high-frequency ones. 
For instance, Stanford NER often outperforms CRFSuite (see Table \"RQ1: NER performance with Different Approaches\" ) despite having a lower proportion of seen features (i.e. those that occur both in test data and during training). Also, some approaches such as SENNA use a small number of features and base their features almost entirely on the NEs and not on their context.", "Subsequently, we want to measure F1 for unseen and seen NEs, as in Section \"RQ3: Impact of NE Diversity\" , but also examine how the proportion of seen features impacts the result. We define seen features as those observed in the test data and also the training data. In turn, unseen features are those observed in the test data but not in the training data. That is, they have not been previously encountered by the system at the time of labelling. Unseen features are different from unseen words in that they are the difference in representation, not surface form. For example, the entity “Xoxarle\" may be an unseen entity not found in the training data. This entity could reasonably have “shape:Xxxxxxx\" and “last-letter:e\" as part of its feature representation. If the training data contains the entities “Kenneth\" and “Simone\", each of these will have generated one of those two features, respectively. Thus, these example features will not be unseen features in this case, despite coming from an unseen entity. Conversely, continuing this example, if the training data contains no feature “first-letter:X\" – which applies to the unseen entity in question – then this will be an unseen feature.", "We therefore measure the proportion of unseen features for the unseen and seen NE portions of different corpora. An analysis of this with Stanford NER is shown in Figure 2 . Each data point represents a corpus. The blue squares are data points for seen NEs and the red circles are data points for unseen NEs. The figure shows a negative correlation between F1 and percentage of unseen features, i.e. the lower the percentage of unseen features, the higher the F1. Seen and unseen performance and features separate into two groups, with only two outlier points. The figure shows that novel, previously unseen NEs have more unseen features and that systems score a lower F1 on them. This suggests that despite the presence of feature extractors for tackling unseen NEs, the features generated often do not overlap with those from seen NEs. However, one would expect individual features to give different generalisation power for other sets of entities, and for systems to use these features in different ways. That is, machine learning approaches to the NER task do not seem to learn clear-cut decision boundaries based on a small set of features. This is reflected in the softness of the correlation.", "Finally, the proportion of seen features is higher for seen NEs. The two outlier points are ACE UN (low F1 for seen NEs despite a low percentage of unseen features) and UMBC (high F1 for seen NEs despite a high percentage of unseen features). An error analysis shows that the ACE UN corpus suffers from the problem that the seen NEs are ambiguous, meaning even if they have been seen in the training corpus, a majority of the time they have been observed with a different NE label. For the UMBC corpus, the opposite is true: seen NEs are unambiguous. 
This kind of metonymy is a known and challenging issue in NER, and the results on these corpora highlight the impact it still has on modern systems.", "For all approaches, the proportion of observed features for seen NEs is bigger than the proportion of observed features for unseen NEs, as it should be. However, within the seen and unseen testing instances, there is no clear trend indicating whether having more observed features overall increases F1 performance. One trend that is observable is that the smaller the token/type ratio is (Table 5 ), the bigger the variance between the smallest and biggest $n$ for each corpus, or, in other words, the smaller the token/type ratio is, the more diverse the features.", "To summarise, our findings are:", "[noitemsep]", "Unseen NEs have more unseen features and systems score a lower F1 on them.", "Outliers are due to low/high ambiguity of seen NEs.", "The proportion of observed features for seen NEs is bigger than the proportion of observed features for unseen NEs.", "Within the seen and unseen testing instances, there is no clear trend indicating whether having more observed features overall increases F1 performance.", "The smaller the token/type ratio is, the more diverse the features." ], [ "This section explores baseline out-of-domain NERC performance without domain adaptation; what percentage of NEs are seen if there is a difference between the training and the testing domains; and how the difference in performance on unseen and seen NEs compares to in-domain performance.", "As demonstrated by the above experiments, and in line with related work, NERC performance varies across domains while also being influenced by the size of the available in-domain training data. Prior work on transfer learning and domain adaptation (e.g. BIBREF16 ) has aimed at increasing performance in domains where only small amounts of training data are available. This is achieved by adding out-of-domain data from domains where larger amounts of training data exist. For domain adaptation to be successful, however, the seed domain needs to be similar to the target domain, i.e. if there is no or very little overlap in terms of contexts of the training and testing instances, the model does not learn any additional helpful weights. As a confounding factor, Twitter and other social media generally consist of many (thousands to millions of) micro-domains, with each author BIBREF54 , community BIBREF55 and even conversation BIBREF56 having its own style, which makes it hard to adapt to them as a single, monolithic genre; accordingly, adding out-of-domain NER data gives bad results in this situation BIBREF21 . And even if recognised perfectly, entities that occur just once cause problems beyond NER, e.g. in co-reference BIBREF57 .", "In particular, BIBREF58 has reported improving F1 by around 6% through adaptation from the CoNLL to the ACE dataset. However, transfer learning becomes more difficult if the target domain is very noisy or, as mentioned already, too different from the seed domain. For example, BIBREF59 unsuccessfully tried to adapt the CoNLL 2003 corpus to a Twitter corpus spanning several topics. They found that hand-annotating a Twitter corpus consisting of 24,000 tokens performs better on new Twitter data than their transfer learning efforts with the CoNLL 2003 corpus.", "The seed domain for the experiments here is newswire, where we use the classifier trained on the biggest NW corpus investigated in this study, i.e. OntoNotes NW. 
That classifier is then applied to all other corpora. The rationale is to test how suitable such a big corpus would be for improving Twitter NER, for which only small training corpora are available.", "Results for out-of-domain performance are reported in Table 11 . The highest F1 performance is on the OntoNotes BC corpus, with similar results to the in-domain task. This is unsurprising, as it belongs to a similar domain to the training corpus (broadcast conversation), the data was collected in the same time period, and it was annotated using the same guidelines. In contrast, out-of-domain results are much lower than in-domain results for the CoNLL corpora, even though they belong to the same genre as OntoNotes NW. Memorisation recall performance on CoNLL TestA and TestB when training on OntoNotes NW suggests that this is partly due to the relatively low overlap in NEs between the two datasets. This could be attributed to the CoNLL corpus having been collected in a different time period to the OntoNotes corpus, when other entities were popular in the news; an example of drift BIBREF37 . Conversely, Stanford NER does better on these corpora than it does on other news data, e.g. ACE NW. This indicates that Stanford NER is capable of some degree of generalisation and can detect novel entity surface forms; however, recall is still lower than precision here, achieving roughly the same scores across these three corpora (from 44.11 to 44.96), showing difficulty in picking up novel entities in novel settings.", "In addition, there are differences in annotation guidelines between the two datasets. If the CoNLL annotation guidelines were more inclusive than the OntoNotes ones, then even a memorisation evaluation over the same dataset would yield this result. This is, in fact, the case: OntoNotes divides entities into more classes, not all of which can be readily mapped to PER/LOC/ORG. For example, OntoNotes includes PRODUCT, EVENT, and WORK OF ART classes, which are not represented in the CoNLL data. It also includes the NORP class, which blends nationalities, religious and political groups. This has some overlap with ORG, but also includes terms such as “muslims\" and “Danes\", which are too broad for the ACE-related definition of ORGANIZATION. Full details can be found in the OntoNotes 5.0 release notes and the (brief) CoNLL 2003 annotation categories. Notice how the CoNLL guidelines are much more terse, being generally non-prose, but also manage to cram in fairly comprehensive lists of sub-kinds of entities in each case. This is likely to make the CoNLL classes include a diverse range of entities, with the many suggestions acting as generative material for the annotator, and therefore providing a broader range of annotations from which to generalise – i.e., making the corpus slightly easier to tag.", "The lowest F1 of 0 is “achieved\" on ACE BN. An examination of that corpus reveals that the NEs it contains are all lower case, whereas those in OntoNotes NW have initial capital letters.", "Results on unseen NEs for the out-of-domain setting are in Table 12 . The last section's observation of NERC performance being lower for unseen NEs also generally holds true in this out-of-domain setting. The macro average over F1 for the in-domain setting is 76.74% for seen NEs vs. 
53.76% for unseen NEs, whereas for the out-of-domain setting the F1 is 56.10% for seen NEs and 47.73% for unseen NEs.", "Corpora with a particularly big F1 difference between seen and unseen NEs ( $>=$ 20% averaged over all NERC methods) are ACE NW, ACE BC, ACE UN, OntoNotes BN and OntoNotes MZ. For some corpora (CoNLL Test A and B, MSM and Ritter), out-of-domain F1 (macro average over all methods) of unseen NEs is better than for seen NEs. We suspect that this is due to the out-of-domain evaluation setting encouraging better generalisation, as well as the regularity in entity context observed in the fairly limited CoNLL news data – for example, this corpus contains a large proportion of cricket score reports and many cricketer names, occurring in linguistically similar contexts. Others have also noted that the CoNLL datasets are low-diversity compared to OntoNotes, in the context of named entity recognition BIBREF60 . In all of these exceptions except MSM, the difference is relatively small. We note that the MSM test corpus is one of the smallest datasets used in the evaluation, also based on a noisier genre than most others, and so regard this discrepancy as an outlier.", "Corpora for which out-of-domain F1 is better than in-domain F1 for at least one of the NERC methods are: MUC7 Test, ACE WL, ACE UN, OntoNotes WB, OntoNotes TC and UMBC. Most of those corpora are small, with combined training and testing bearing fewer than 1,000 NEs (MUC7 Test, ACE UN, UMBC). In such cases, it appears beneficial to have a larger amount of training data, even if it is from a different domain and/or time period. The remaining three corpora contain weblogs (ACE WL, ACE WB) and online Usenet discussions (ACE UN). Those three are diverse corpora, as can be observed from the relatively low NEs/Unique NEs ratios (Table 4 ). However, NE/Unique NEs ratios are not an absolute predictor for better out-of-domain than in-domain performance: there are corpora with lower NEs/Unique NEs ratios than ACE WB which have better in-domain than out-of-domain performance. As for the other Twitter corpora, MSM 2013 and Ritter, performance is very low, especially for the memorisation system. This reflects that, as well as surface form variation, the context or other information represented by features shifts significantly more in Twitter than across different samples of newswire, and that the generalisations that can be drawn from newswire by modern NER systems are not sufficient to give any useful performance in this natural, unconstrained kind of text.", "In fact, it is interesting to see that the memorisation baseline is so effective with many genres, including broadcast news, weblog and newswire. This indicates that there is low variation in the topics discussed by these sources – only a few named entities are mentioned by each. When named entities are seen as micro-topics, each indicating a grounded and small topic of interest, this reflects the nature of news having low topic variation, focusing on a few specific issues – e.g., locations referred to tend to be big; persons tend to be politically or financially significant; and organisations rich or governmental BIBREF61 . In contrast, social media users also discuss local locations like restaurants, organisations such as music bands and sports clubs, and are content to discuss people that are not necessarily mentioned in Wikipedia. 
The low overlap and memorisation scores on tweets, when taking entity lexica based on newswire, are therefore symptomatic of the lack of variation in newswire text, which has a limited authorship demographic BIBREF62 and often has to comply with editorial guidelines.", "The other genre that was particularly difficult for the systems was ACE Usenet. This is a form of user-generated content, not intended for publication but rather for discussion among communities. In this sense, it is social media, and so it is not surprising that system performance on ACE UN resembles performance on social media more than on other genres.", "Crucially, the computationally-cheap memorisation method actually acts as a reasonable predictor of the performance of other methods. This suggests that high entity diversity predicts difficulty for current NER systems. As we know that social media tends to have high entity diversity – certainly higher than other genres examined – this offers an explanation for why NER systems perform so poorly when taken outside the relatively conservative newswire domain. Indeed, if memorisation offers a consistent prediction of performance, then it is reasonable to say that memorisation and memorisation-like behaviour accounts for a large proportion of NER system performance.", "To conclude regarding memorisation and out-of-domain performance, there are multiple issues to consider: is the corpus a sub-corpus of the same corpus as the training corpus, does it belong to the same genre, is it collected in the same time period, and was it created with similar annotation guidelines. Yet it is very difficult to explain high/low out-of-domain performance compared to in-domain performance with those factors.", "A consistent trend is that, if out-of-domain memorisation is better than in-domain memorisation, out-of-domain NERC performance with supervised learning is better than in-domain NERC performance with supervised learning too. This reinforces discussions in previous sections: an overlap in NEs is a good predictor for NERC performance. This is useful when a suitable training corpus has to be identified for a new domain. It can be time-consuming to engineer features or study and compare machine learning methods for different domains, while memorisation performance can be checked quickly.", "Indeed, memorisation consistently predicts NER performance. The prediction applies both within and across domains. This has implications for the focus of future work in NER: the ability to generalise well enough to recognise unseen entities is a significant and still-open problem.", "To summarise, our findings are:", "[noitemsep]", "What time period an out-of-domain corpus is collected in plays an important role in NER performance.", "The context or other information represented by features shifts significantly more in Twitter than across different samples of newswire.", "The generalisations that can be drawn from newswire by modern NER systems are not sufficient to give any useful performance in this varied kind of text.", "Memorisation consistently predicts NER performance, both inside and outside genres or domains." ], [ "This paper investigated the ability of modern NER systems to generalise effectively over a variety of genres. Firstly, by analysing different corpora, we demonstrated that datasets differ widely in many regards: in terms of size; balance of entity classes; proportion of NEs; and how often NEs and tokens are repeated. 
The most balanced corpus in terms of NE classes is the CoNLL corpus, which, incidentally, is also the most widely used NERC corpus, both for method tuning of off-the-shelf NERC systems (e.g. Stanford NER, SENNA), as well as for comparative evaluation. Corpora, traditionally viewed as noisy, i.e. the Twitter and Web corpora, were found to have a low repetition of NEs and tokens. More surprisingly, however, so does the CoNLL corpus, which indicates that it is well balanced in terms of stories. Newswire corpora have a large proportion of NEs as percentage of all tokens, which indicates high information density. Web, Twitter and telephone conversation corpora, on the other hand, have low information density.", "Our second set of findings relates to the NERC approaches studied. Overall, SENNA achieves consistently the highest performance across most corpora, and thus has the best approach to generalising from training to testing data. This can mostly be attributed to SENNA's use of word embeddings, trained with deep convolutional neural nets. The default parameters of SENNA achieve a balanced precision and recall, while for Stanford NER and CRFSuite, precision is almost twice as high as recall.", "Our experiments also confirmed the correlation between NERC performance and training corpus size, although size alone is not an absolute predictor. In particular, the biggest NE-annotated corpus amongst those studied is OntoNotes NW – almost twice the size of CoNLL in terms of number of NEs. Nevertheless, the average F1 for CoNLL is the highest of all corpora and, in particular, SENNA has 11 points higher F1 on CoNLL than on OntoNotes NW.", "Studying NERC on size-normalised corpora, it becomes clear that there is also a big difference in performance on corpora from the same genre. When normalising training data by size, diverse corpora, such as Web and social media, still yield lower F1 than newswire corpora. This indicates that annotating more training examples for diverse genres would likely lead to a dramatic increase in F1.", "What is found to be a good predictor of F1 is a memorisation baseline, which picks the most frequent NE label for each token sequence in the test corpus as observed in the training corpus. This supported our hypothesis that entity diversity plays an important role, being negatively correlated with F1. Studying proportions of unseen entity surface forms, experiments showed corpora with a large proportion of unseen NEs tend to yield lower F1, due to much lower performance on unseen than seen NEs (about 17 points lower averaged over all NERC methods and corpora). This finally explains why the performance is highest for the benchmark CoNLL newswire corpus – it contains the lowest proportion of unseen NEs. It also explains the difference in performance between NERC on other corpora. Out of all the possible indicators for high NER F1 studied, this is found to be the most reliable one. This directly supports our hypothesis that generalising for unseen named entities is both difficult and important.", "Also studied is the proportion of unseen features per unseen and seen NE portions of different corpora. However, this is found to not be very helpful. The proportion of seen features is higher for seen NEs, as it should be. However, within the seen and unseen NE splits, there is no clear trend indicating if having more seen features helps.", "We also showed that hand-annotating more training examples is a straight-forward and reliable way of improving NERC performance. 
However, this is costly, which is why it can be useful to study whether using different, larger corpora for training might be helpful. Indeed, substituting in-domain training corpora with other training corpora of the same genre created at the same time improves performance, and studying how such corpora can be combined with transfer learning or domain adaptation strategies might improve performance even further. However, for most corpora, there is a significant drop in performance for out-of-domain training. What is again found to be reliable is to check the memorisation baseline: if results for the out-of-domain memorisation baseline are higher than for in-domain memorisation, then using the out-of-domain corpus for training is likely to be helpful.", "Across a broad range of corpora and genres, characterised in different ways, we have examined how named entities are embedded and presented. While there is great variation in the range and class of entities found, it is consistently the case that more varied texts are harder to do named entity recognition in. This connection with variation occurs to such an extent that, in fact, performance when memorising lexical forms stably predicts system accuracy. The result of this is that systems are not sufficiently effective at generalising beyond the entity surface forms and contexts found in training data. To close this gap and advance NER systems, and cope with the modern reality of streamed NER, as opposed to the prior generation of batch-learning based systems with static evaluation sets being used as research benchmarks, future work needs to address named entity generalisation and out-of-vocabulary lexical forms." ], [ "This work was partially supported by the UK EPSRC Grant No. EP/K017896/1 uComp and by the European Union under Grant Agreements No. 611233 PHEME. The authors wish to thank the CS&L reviewers for their helpful and constructive feedback." ] ] }
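The memorisation baseline that recurs throughout the findings above is simple enough to sketch. The Python snippet below is a minimal illustration of the described procedure — remember the most frequent label for each entity surface form observed in training and apply it at test time, leaving unseen forms untagged. The function names and toy entities are ours, not the authors'; a real evaluation would operate on token sequences with BIO-style tags rather than this simplified form.

```python
# Hedged sketch of the "memorisation" baseline: most frequent training label per
# surface form, "O" for anything unseen. Illustrative names and data only.
from collections import Counter, defaultdict

def train_memoriser(train_entities):
    """train_entities: iterable of (surface_form, label) pairs from the training set."""
    counts = defaultdict(Counter)
    for surface, label in train_entities:
        counts[surface][label] += 1
    return {surface: c.most_common(1)[0][0] for surface, c in counts.items()}

def tag_memoriser(lexicon, test_surfaces):
    return [lexicon.get(surface, "O") for surface in test_surfaces]

# Toy usage
lexicon = train_memoriser([("Kuwait", "LOC"), ("Kuwait", "LOC"), ("Atlanta", "LOC"),
                           ("Reuters", "ORG"), ("Reuters", "ORG"), ("Reuters", "LOC")])
print(tag_memoriser(lexicon, ["Kuwait", "Reuters", "Kabul"]))  # ['LOC', 'ORG', 'O']
```

Because this lookup can be built in seconds, it gives exactly the quick "check memorisation first" diagnostic the conclusions recommend when choosing a training corpus for a new domain.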
{ "question": [ "What web and user-generated NER datasets are used for the analysis?" ], "question_id": [ "94e0cf44345800ef46a8c7d52902f074a1139e1a" ], "nlp_background": [ "five" ], "topic_background": [ "familiar" ], "paper_read": [ "somewhat" ], "search_query": [ "named entity recognition" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "MUC, CoNLL, ACE, OntoNotes, MSM, Ritter, UMBC", "evidence": [ "Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details). These datasets were chosen such that they have been annotated with the same or very similar entity classes, in particular, names of people, locations, and organisations. Thus corpora including only domain-specific entities (e.g. biomedical corpora) were excluded. The choice of corpora was also motivated by their chronological age; we wanted to ensure a good temporal spread, in order to study possible effects of entity drift over time.", "FLOAT SELECTED: Table 1 Corpora genres and number of NEs of different classes." ], "highlighted_evidence": [ "Since the goal of this study is to compare NER performance on corpora from diverse domains and genres, seven benchmark NER corpora are included, spanning newswire, broadcast conversation, Web content, and social media (see Table 1 for details).", "FLOAT SELECTED: Table 1 Corpora genres and number of NEs of different classes." ] } ], "annotation_id": [ "05dfe42d133923f3516fb680679bacc680589a03" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Table 1 Corpora genres and number of NEs of different classes.", "Table 2 Sizes of corpora, measured in number of NEs, used for training and testing. Note that the for the ConLL corpus the dev set is called “Test A” and the test set “Test B”.", "Table 3 P, R and F1 of NERC with different models evaluated on different testing corpora, trained on corpora normalised by size.", "Table 4 P, R and F1 of NERC with different models evaluated on different testing corpora, trained on corpora normalised by size, metrics macro averaged by genres.", "Table 5 NE/Unique NE ratios and normalised NE/Unique NE ratios of different corpora, mean and median of those values plus R correlation of ratios with Stanford NER F1 on original corpora.", "Table 6 Token/type ratios and normalised token/type ratios of different corpora, mean and median of those values plus R correlation of ratios with Stanford NER F1 on original corpora.", "Table 7 Tag density and normalised tag density, the proportion of tokens with NE tags to all tokens, mean and median of those values plus R correlation of density with Stanford NER F1 on original corpora.", "Table 8 P, R and F1 of NERC with different models trained on original corpora.", "Table 9 F1 per NE type with different models trained on original corpora.", "Fig. 1. F1 of different NER methods with respect to training corpus size, measured in log of number of NEs.", "Table 10 Entity surface form occurrence overlap between Twitter corpora.", "Table 11 Entity surface form occurrence overlap between news corpora.", "Table 12 Proportion of unseen entities in different test corpora.", "Table 13 P, R and F1 of NERC with different models of unseen and seen NEs.", "Fig. 2. Percentage of unseen features and F1 with Stanford NER for seen (blue squares) and unseen (red circles) NEs in different corpora. (For interpretation of the references to colour in this figure, the reader is referred to the web version of this article.)", "Table 14 Out of domain performance: F1 of NERC with different models.", "Table 15 Out-of-domain performance for unseen vs. seen NEs: F1 of NERC with different models." ], "file": [ "4-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "6-Table4-1.png", "7-Table5-1.png", "7-Table6-1.png", "8-Table7-1.png", "10-Table8-1.png", "11-Table9-1.png", "12-Figure1-1.png", "13-Table10-1.png", "13-Table11-1.png", "14-Table12-1.png", "15-Table13-1.png", "17-Figure2-1.png", "18-Table14-1.png", "19-Table15-1.png" ] }
1904.05862
wav2vec: Unsupervised Pre-training for Speech Recognition
We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. In our experiments on WSJ, wav2vec reduces the WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available. Our approach achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the best reported character-based system in the literature, while using three orders of magnitude less labeled training data.
{ "section_name": [ "Introduction", "Pre-training Approach", "Model", "Objective", "Data", "Acoustic Models", "Decoding", "Pre-training Models", "Results", "Pre-training for the WSJ benchmark", "Pre-training for TIMIT", "Ablations", "Conclusions", "Acknowledgements" ], "paragraphs": [ [ "Current state of the art models for speech recognition require large amounts of transcribed audio data to attain good performance BIBREF1 . Recently, pre-training of neural networks has emerged as an effective technique for settings where labeled data is scarce. The key idea is to learn general representations in a setup where substantial amounts of labeled or unlabeled data is available and to leverage the learned representations to improve performance on a downstream task for which the amount of data is limited. This is particularly interesting for tasks where substantial effort is required to obtain labeled data, such as speech recognition.", "In computer vision, representations for ImageNet BIBREF2 and COCO BIBREF3 have proven to be useful to initialize models for tasks such as image captioning BIBREF4 or pose estimation BIBREF5 . Unsupervised pre-training for computer vision has also shown promise BIBREF6 . In natural language processing (NLP), unsupervised pre-training of language models BIBREF7 , BIBREF8 , BIBREF9 improved many tasks such as text classification, phrase structure parsing and machine translation BIBREF10 , BIBREF11 . In speech processing, pre-training has focused on emotion recogniton BIBREF12 , speaker identification BIBREF13 , phoneme discrimination BIBREF14 , BIBREF15 as well as transferring ASR representations from one language to another BIBREF16 . There has been work on unsupervised learning for speech but the resulting representations have not been applied to improve supervised speech recognition BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .", "In this paper, we apply unsupervised pre-training to improve supervised speech recognition. This enables exploiting unlabeled audio data which is much easier to collect than labeled data. Our model, , is a convolutional neural network that takes raw audio as input and computes a general representation that can be input to a speech recognition system. The objective is a contrastive loss that requires distinguishing a true future audio sample from negatives BIBREF22 , BIBREF23 , BIBREF15 . Different to previous work BIBREF15 , we move beyond frame-wise phoneme classification and apply the learned representations to improve strong supervised ASR systems. relies on a fully convolutional architecture which can be easily parallelized over time on modern hardware compared to recurrent autoregressive models used in previous work (§ SECREF2 ).", "Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. On the TIMIT task, pre-training enables us to match the best reported result in the literature. In a simulated low-resource setup with only eight hours of transcriped audio data, reduces WER by up to 32% compared to a baseline model that relies on labeled data only (§ SECREF3 & § SECREF4 )." ], [ "Given an audio signal as input, we optimize our model (§ SECREF3 ) to predict future samples from a given signal context. 
A common problem with these approaches is the requirement to accurately model the data distribution INLINEFORM0 , which is challenging. We avoid this problem by first encoding raw speech samples INLINEFORM1 into a feature representation INLINEFORM2 at a lower temporal frequency and then implicitly model a density function INLINEFORM3 similar to BIBREF15 ." ], [ "Our model takes raw audio signal as input and then applies two networks. The encoder network embeds the audio signal in latent space and the context network combines multiple time-steps of the encoder to obtain contextualized representations (Figure FIGREF2 ). Both networks are then used to compute the objective function (§ SECREF4 ).", "Given raw audio samples INLINEFORM0 , we apply the encoder network INLINEFORM1 which we parameterize as a five-layer convolutional network similar to BIBREF15 . Alternatively, one could use other architectures such as the trainable frontend of BIBREF24 amongst others. The encoder layers have kernel sizes INLINEFORM2 and strides INLINEFORM3 . The output of the encoder is a low frequency feature representation INLINEFORM4 which encodes about 30ms of 16KHz of audio and the striding results in representation INLINEFORM5 every 10ms.", "Next, we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 . The context network has seven layers and each layer has kernel size three and stride one. The total receptive field of the context network is about 180ms.", "The layers of both networks consist of a causal convolution with 512 channels, a group normalization layer and a ReLU nonlinearity. We normalize both across the feature and temporal dimension for each sample which is equivalent to group normalization with a single normalization group BIBREF25 . We found it important to choose a normalization scheme that is invariant to the scaling and the offset of the input data. This choice resulted in representations that generalize well across datasets." ], [ "We train the model to distinguish a sample INLINEFORM0 that is k steps in the future from distractor samples INLINEFORM1 drawn from a proposal distribution INLINEFORM2 , by minimizing the contrastive loss for each step INLINEFORM3 : DISPLAYFORM0 ", "where we denote the sigmoid INLINEFORM0 , and where INLINEFORM1 is the probability of INLINEFORM2 being the true sample. We consider a step-specific affine transformation INLINEFORM3 for each step INLINEFORM4 , that is applied to INLINEFORM5 BIBREF15 . We optimize the loss INLINEFORM6 , summing ( EQREF5 ) over different step sizes. In practice, we approximate the expectation by sampling ten negatives examples by uniformly choosing distractors from each audio sequence, i.e., INLINEFORM7 , where INLINEFORM8 is the sequence length and we set INLINEFORM9 to the number of negatives.", "After training, we input the representations produced by the context network INLINEFORM0 to the acoustic model instead of log-mel filterbank features." ], [ "We consider the following corpora: For phoneme recognition on TIMIT BIBREF26 we use the standard train, dev and test split where the training data contains just over three hours of audio data. Wall Street Journal (WSJ; Woodland et al., 1994) comprises about 81 hours of transcribed audio data. We train on si284, validate on nov93dev and test on nov92. 
Librispeech BIBREF27 contains a total of 960 hours of clean and noisy speech for training. For pre-training, we use either the full 81 hours of the WSJ corpus, an 80 hour subset of clean Librispeech, the full 960 hour Librispeech training set, or a combination of all of them.", "To train the baseline acoustic model we compute 80 log-mel filterbank coefficients for a 25ms sliding window with stride 10ms. Final models are evaluated in terms of both word error rate (WER) and letter error rate (LER)." ], [ "We use the wav2letter++ toolkit for training and evaluation of acoustic models BIBREF28 . For the TIMIT task, we follow the character-based wav2letter++ setup of BIBREF24 which uses seven consecutive blocks of convolutions (kernel size 5 with 1,000 channels), followed by a PReLU nonlinearity and a dropout rate of 0.7. The final representation is projected to a 39-dimensional phoneme probability. The model is trained using the Auto Segmentation Criterion (ASG; Collobert et al., 2016)) using SGD with momentum.", "Our baseline for the WSJ benchmark is the wav2letter++ setup described in BIBREF29 which is a 17 layer model with gated convolutions BIBREF30 . The model predicts probabilities for 31 graphemes, including the standard English alphabet, the apostrophe and period, two repetition characters (e.g. the word ann is transcribed as an1), and a silence token (|) used as word boundary.", "All acoustic models are trained on 8 Nvidia V100 GPUs using the distributed training implementations of fairseq and wav2letter++. When training acoustic models on WSJ, we use plain SGD with learning rate 5.6 as well as gradient clipping BIBREF29 and train for 1,000 epochs with a total batch size of 64 audio sequences. We use early stopping and choose models based on validation WER after evaluating checkpoints with a 4-gram language model. For TIMIT we use learning rate 0.12, momentum of 0.9 and train for 1,000 epochs on 8 GPUs with a batch size of 16 audio sequences." ], [ "For decoding the emissions from the acoustic model we use a lexicon as well as a separate language model trained on the WSJ language modeling data only. We consider a 4-gram KenLM language model BIBREF31 , a word-based convolutional language model BIBREF29 , and a character based convolutional language model BIBREF32 . We decode the word sequence INLINEFORM0 from the output of the context network INLINEFORM1 or log-mel filterbanks using the beam search decoder of BIBREF29 by maximizing DISPLAYFORM0 ", "where INLINEFORM0 is the acoustic model, INLINEFORM1 is the language model, INLINEFORM2 are the characters of INLINEFORM3 . Hyper-parameters INLINEFORM4 , INLINEFORM5 and INLINEFORM6 are weights for the language model, the word penalty, and the silence penalty.", "For decoding WSJ, we tune the hyperparameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 using a random search. Finally, we decode the emissions from the acoustic model with the best parameter setting for INLINEFORM3 , INLINEFORM4 and INLINEFORM5 , and a beam size of 4000 and beam score threshold of 250." ], [ "The pre-training models are implemented in PyTorch in the fairseq toolkit BIBREF0 . We optimize them with Adam BIBREF33 and a cosine learning rate schedule BIBREF34 annealed over 40K update steps for both WSJ and the clean Librispeech training datasets. We start with a learning rate of 1e-7, and the gradually warm it up for 500 updates up to 0.005 and then decay it following the cosine curve up to 1e-6. We train for 400K steps for full Librispeech. 
To compute the objective, we sample ten negatives and we use INLINEFORM0 tasks.", "We train on 8 GPUs and put a variable number of audio sequences on each GPU, up to a pre-defined limit of 1.5M frames per GPU. Sequences are grouped by length and we crop them to a maximum size of 150K frames each, or the length of the shortest sequence in the batch, whichever is smaller. Cropping removes speech signal from either the beginning or end of the sequence and we randomly decide the cropping offsets for each sample; we re-sample every epoch. This is a form of data augmentation but also ensures equal length of all sequences on a GPU and removes on average 25% of the training data. After cropping the total effective batch size across GPUs is about 556 seconds of speech signal (for a variable number of audio sequences)." ], [ "Different to BIBREF15 , we evaluate the pre-trained representations directly on downstream speech recognition tasks. We measure speech recognition performance on the WSJ benchmark and simulate various low resource setups (§ SECREF12 ). We also evaluate on the TIMIT phoneme recognition task (§ SECREF13 ) and ablate various modeling choices (§ SECREF14 )." ], [ "We consider pre-training on the audio data (without labels) of WSJ, part of clean Librispeech (about 80h) and full Librispeech as well as a combination of all datasets (§ SECREF7 ). For the pre-training experiments we feed the output of the context network to the acoustic model, instead of log-mel filterbank features.", "Table shows that pre-training on more data leads to better accuracy on the WSJ benchmark. Pre-trained representations can substantially improve performance over our character-based baseline which is trained on log-mel filterbank features. This shows that pre-training on unlabeled audio data can improve over the best character-based approach, Deep Speech 2 BIBREF1 , by 0.3 WER on nov92. Our best pre-training model performs as well as the phoneme-based model of BIBREF35 . BIBREF36 is a phoneme-based approach that pre-trains on the transcribed Libirspeech data and then fine-tunes on WSJ. In comparison, our method requires only unlabeled audio data and BIBREF36 also rely on a stronger baseline model than our setup.", "What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance." ], [ "On the TIMIT task we use a 7-layer wav2letter++ model with high dropout (§ SECREF3 ; Synnaeve et al., 2016). Table shows that we can match the state of the art when we pre-train on Librispeech and WSJ audio data. Accuracy steadily increases with more data for pre-training and the best accuracy is achieved when we use the largest amount of data for pre-training." ], [ "In this section we analyze some of the design choices we made for . We pre-train on the 80 hour subset of clean Librispeech and evaluate on TIMIT. 
Table shows that increasing the number of negative samples only helps up to ten samples. Thereafter, performance plateaus while training time increases. We suspect that this is because the training signal from the positive samples decreases as the number of negative samples increases. In this experiment, everything is kept equal except for the number of negative samples.", "Next, we analyze the effect of data augmentation through cropping audio sequences (§ SECREF11 ). When creating batches, we crop sequences to a pre-defined maximum length. Table shows that a crop size of 150K frames results in the best performance. Not restricting the maximum length (None) gives an average sequence length of about 207K frames and results in the worst accuracy. This is most likely because the setting provides the least amount of data augmentation.", "Table shows that predicting more than 12 steps ahead in the future does not result in better performance and increasing the number of steps increases training time." ], [ "We introduce wav2vec, the first application of unsupervised pre-training to speech recognition with a fully convolutional model. Our approach achieves 2.78 WER on the test set of WSJ, a result that outperforms the next best known character-based speech recognition model in the literature BIBREF1 while using three orders of magnitude less transcribed training data. We show that more data for pre-training improves performance and that this approach not only improves resource-poor setups, but also settings where all WSJ training data is used. In future work, we will investigate different architectures and fine-tuning, which is likely to further improve performance." ], [ "We thank the Speech team at FAIR, especially Jacob Kahn, Vineel Pratap and Qiantong Xu for help with wav2letter++ experiments, and Tatiana Likhomanenko for providing convolutional language models for our experiments." ] ] }
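The model and objective sections of the record above describe a five-layer convolutional encoder followed by a seven-layer context network (kernel 3, stride 1, 512 channels, group normalisation with a single group, ReLU), trained with a step-wise contrastive loss that distinguishes the true future latent z_{i+k} from distractors sampled from the same sequence, using one affine transformation h_k per step; our reading of the garbled DISPLAYFORM0 is roughly L_k = -Σ_i ( log σ(z_{i+k}ᵀ h_k(c_i)) + Σ_z̃ log σ(-z̃ᵀ h_k(c_i)) ). The PyTorch sketch below illustrates that description and is not the released fairseq code: the encoder kernel sizes and strides are assumed values (they appear only as INLINEFORM placeholders in this extraction), the causal padding is approximated, and all names are ours.

```python
# Illustrative PyTorch sketch of a wav2vec-style model and step-wise contrastive
# loss. NOT the released implementation: encoder kernel sizes/strides are assumed,
# padding is a simplified "causal" approximation, negative sampling is naive.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(c_in, c_out, k, s):
    return nn.Sequential(
        nn.ConstantPad1d((k - 1, 0), 0.0),   # left-pad only, approximating causality
        nn.Conv1d(c_in, c_out, kernel_size=k, stride=s),
        nn.GroupNorm(1, c_out),              # one group = normalise over features and time
        nn.ReLU(),
    )


class Wav2VecSketch(nn.Module):
    def __init__(self, dim=512, prediction_steps=12):
        super().__init__()
        # Five encoder layers turning 16 kHz audio into ~100 Hz latents z
        # (these kernel/stride values are an assumption, not quoted from the text).
        enc_cfg = [(10, 5), (8, 4), (4, 2), (4, 2), (4, 2)]
        layers, c_in = [], 1
        for k, s in enc_cfg:
            layers.append(conv_block(c_in, dim, k, s))
            c_in = dim
        self.encoder = nn.Sequential(*layers)
        # Seven context layers with kernel 3 and stride 1, as described above.
        self.context = nn.Sequential(*[conv_block(dim, dim, 3, 1) for _ in range(7)])
        # One affine transformation h_k per prediction step k.
        self.step_proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(prediction_steps)])

    def forward(self, audio):                 # audio: (B, 1, samples)
        z = self.encoder(audio)               # (B, dim, T)
        c = self.context(z)                   # (B, dim, T)
        return z, c


def contrastive_step_loss(z, c, h_k, k, n_negatives=10):
    """Distinguish the true z_{t+k} from distractors drawn from the same sequence."""
    B, D, T = z.shape
    z_t = z.transpose(1, 2)                       # (B, T, D)
    preds = h_k(c.transpose(1, 2))[:, :T - k, :]  # h_k(c_t) for t = 0 .. T-k-1
    pos = (preds * z_t[:, k:, :]).sum(-1)         # dot product with the true future latent
    loss = -F.logsigmoid(pos).mean()
    for _ in range(n_negatives):                  # uniform distractors from the same sequence
        idx = torch.randint(0, T, (B, T - k))
        neg = z_t[torch.arange(B).unsqueeze(1), idx]
        loss = loss - F.logsigmoid(-(preds * neg).sum(-1)).mean()
    return loss


if __name__ == "__main__":
    model = Wav2VecSketch()
    wav = torch.randn(2, 1, 16000)            # two one-second dummy clips
    z, c = model(wav)
    k = 2
    print(contrastive_step_loss(z, c, model.step_proj[k - 1], k).item())
```

With these assumed strides, one second of 16 kHz audio yields roughly 100 latent steps, matching the ~10 ms frame rate described in the Model section; after pre-training, the context outputs c would replace log-mel filterbank features as acoustic model input.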
{ "question": [ "Which unlabeled data do they pretrain with?", "How many convolutional layers does their model have?", "Do they explore how much traning data is needed for which magnitude of improvement for WER? " ], "question_id": [ "ad67ca844c63bf8ac9fdd0fa5f58c5a438f16211", "12eaaf3b6ebc51846448c6e1ad210dbef7d25a96", "828615a874512844ede9d7f7d92bdc48bb48b18d" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "1000 hours of WSJ audio data", "evidence": [ "We consider pre-training on the audio data (without labels) of WSJ, part of clean Librispeech (about 80h) and full Librispeech as well as a combination of all datasets (§ SECREF7 ). For the pre-training experiments we feed the output of the context network to the acoustic model, instead of log-mel filterbank features.", "Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. On the TIMIT task, pre-training enables us to match the best reported result in the literature. In a simulated low-resource setup with only eight hours of transcriped audio data, reduces WER by up to 32% compared to a baseline model that relies on labeled data only (§ SECREF3 & § SECREF4 )." ], "highlighted_evidence": [ "We consider pre-training on the audio data (without labels) of WSJ, part of clean Librispeech (about 80h) and full Librispeech as well as a combination of all datasets (§ SECREF7 ). ", "Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. " ] } ], "annotation_id": [ "b5db6a885782bd0be2ae18fb5f4ee7b901f4899a" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "wav2vec has 12 convolutional layers", "evidence": [ "Given raw audio samples INLINEFORM0 , we apply the encoder network INLINEFORM1 which we parameterize as a five-layer convolutional network similar to BIBREF15 . Alternatively, one could use other architectures such as the trainable frontend of BIBREF24 amongst others. The encoder layers have kernel sizes INLINEFORM2 and strides INLINEFORM3 . The output of the encoder is a low frequency feature representation INLINEFORM4 which encodes about 30ms of 16KHz of audio and the striding results in representation INLINEFORM5 every 10ms.", "Next, we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 . The context network has seven layers and each layer has kernel size three and stride one. The total receptive field of the context network is about 180ms." 
], "highlighted_evidence": [ "Given raw audio samples INLINEFORM0 , we apply the encoder network INLINEFORM1 which we parameterize as a five-layer convolutional network similar to BIBREF15 .", "Next, we apply the context network INLINEFORM0 to the output of the encoder network to mix multiple latent representations INLINEFORM1 into a single contextualized tensor INLINEFORM2 for a receptive field size INLINEFORM3 . The context network has seven layers and each layer has kernel size three and stride one. " ] } ], "annotation_id": [ "8e62f7f6e7e443e1ab1df3d3c04d273a06ade07f" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance." ], "highlighted_evidence": [ "What is the impact of pre-trained representations with less transcribed data? In order to get a better understanding of this, we train acoustic models with different amounts of labeled training data and measure accuracy with and without pre-trained representations (log-mel filterbanks). The pre-trained representations are trained on the full Librispeech corpus and we measure accuracy in terms of WER when decoding with a 4-gram language model. Figure shows that pre-training reduces WER by 32% on nov93dev when only about eight hours of transcribed data is available. Pre-training only on the audio data of WSJ ( WSJ) performs worse compared to the much larger Librispeech ( Libri). This further confirms that pre-training on more data is crucial to good performance." ] } ], "annotation_id": [ "0633347cc1331b9aecb030e036503854b5167b2d" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: Illustration of pre-training from audio data X which is encoded with two convolutional neural networks that are stacked on top of each other. The model is optimized to solve a next time step prediction task.", "Table 1: Replacing log-mel filterbanks (Baseline) by pre-trained embeddings improves WSJ performance on test (nov92) and validation (nov93dev) in terms of both LER and WER. We evaluate pre-training on the acoustic data of part of clean and full Librispeech as well as the combination of all of them. † indicates results with phoneme-based models.", "Figure 2: Pre-training substanstially improves WER in simulated low-resource setups on the audio data of WSJ compared to wav2letter++ with log-mel filterbanks features (Baseline). Pre-training on the audio data of the full 960 h Librispeech dataset (wav2vec Libri) performs better than pre-training on the 81 h WSJ dataset (wav2vec WSJ).", "Table 2: Results for phoneme recognition on TIMIT in terms of PER. All our models use the CNN8L-PReLU-do0.7 architecture (Ravanelli et al., 2018).", "Table 3: Effect of different number of negative samples during pre-training for TIMIT on the development set.", "Table 5: Effect of different number of tasks K (cf. Table 3)." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png", "5-Figure2-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Table5-1.png" ] }
1708.09157
Cross-lingual, Character-Level Neural Morphological Tagging
Even for common NLP tasks, sufficient supervision is not available in many languages -- morphological tagging is no exception. In the work presented here, we explore a transfer learning scheme, whereby we train character-level recurrent neural taggers to predict morphological taggings for high-resource languages and low-resource languages together. Learning joint character representations among multiple related languages successfully enables knowledge transfer from the high-resource languages to the low-resource ones, improving accuracy by up to 30%
{ "section_name": [ "Introduction", "Morphological Tagging", "Character-Level Neural Transfer", "Character-Level Neural Networks", "Cross-Lingual Morphological Transfer as Multi-Task Learning", "Experiments", "Experimental Languages", "Datasets", "Baselines", "Experimental Details", "Results and Discussion", "Related Work", "Alignment-Based Distant Supervision.", "Character-level NLP.", "Neural Cross-lingual Transfer in NLP.", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "State-of-the-art morphological taggers require thousands of annotated sentences to train. For the majority of the world's languages, however, sufficient, large-scale annotation is not available and obtaining it would often be infeasible. Accordingly, an important road forward in low-resource NLP is the development of methods that allow for the training of high-quality tools from smaller amounts of data. In this work, we focus on transfer learning—we train a recurrent neural tagger for a low-resource language jointly with a tagger for a related high-resource language. Forcing the models to share character-level features among the languages allows large gains in accuracy when tagging the low-resource languages, while maintaining (or even improving) accuracy on the high-resource language.", "Recurrent neural networks constitute the state of the art for a myriad of tasks in NLP, e.g., multi-lingual part-of-speech tagging BIBREF0 , syntactic parsing BIBREF1 , BIBREF2 , morphological paradigm completion BIBREF3 , BIBREF4 and language modeling BIBREF5 , BIBREF6 ; recently, such models have also improved morphological tagging BIBREF7 , BIBREF8 . In addition to increased performance over classical approaches, neural networks also offer a second advantage: they admit a clean paradigm for multi-task learning. If the learned representations for all of the tasks are embedded jointly into a shared vector space, the various tasks reap benefits from each other and often performance improves for all BIBREF9 . We exploit this idea for language-to-language transfer to develop an approach for cross-lingual morphological tagging.", "We experiment on 18 languages taken from four different language families. Using the Universal Dependencies treebanks, we emulate a low-resource setting for our experiments, e.g., we attempt to train a morphological tagger for Catalan using primarily data from a related language like Spanish. Our results demonstrate the successful transfer of morphological knowledge from the high-resource languages to the low-resource languages without relying on an externally acquired bilingual lexicon or bitext. We consider both the single- and multi-source transfer case and explore how similar two languages must be in order to enable high-quality transfer of morphological taggers." ], [ "Many languages in the world exhibit rich inflectional morphology: the form of individual words mutates to reflect the syntactic function. For example, the Spanish verb soñar will appear as sueño in the first person present singular, but soñáis in the second person present plural, depending on the bundle of syntaco-semantic attributes associated with the given form (in a sentential context). For concreteness, we list a more complete table of Spanish verbal inflections in tab:paradigm. [author=Ryan,color=purple!40,size=,fancyline,caption=,]Notation in table is different. Note that some languages, e.g. 
the Northeastern Caucasian language Archi, display a veritable cornucopia of potential forms with the size of the verbal paradigm exceeding 10,000 BIBREF10 .", "Standard NLP annotation, e.g., the scheme in sylakglassman-EtAl:2015:ACL-IJCNLP, marks forms in terms of universal key–attribute pairs, e.g., the first person present singular is represented as $\\left[\\right.$ pos=V, per=1, num=sg, tns=pres $\\left.\\right]$ . This bundle of key–attributes pairs is typically termed a morphological tag and we may view the goal of morphological tagging to label each word in its sentential context with the appropriate tag BIBREF11 , BIBREF12 . As the part-of-speech (POS) is a component of the tag, we may view morphological tagging as a strict generalization of POS tagging, where we have significantly refined the set of available tags. All of the experiments in this paper make use of the universal morphological tag set available in the Universal Dependencies (UD) BIBREF13 . As an example, we have provided a Russian sentence with its UD tagging in fig:russian-sentence." ], [ "Our formulation of transfer learning builds on work in multi-task learning BIBREF15 , BIBREF9 . We treat each individual language as a task and train a joint model for all the tasks. We first discuss the current state of the art in morphological tagging: a character-level recurrent neural network. After that, we explore three augmentations to the architecture that allow for the transfer learning scenario. All of our proposals force the embedding of the characters for both the source and the target language to share the same vector space, but involve different mechanisms, by which the model may learn language-specific features." ], [ "Character-level neural networks currently constitute the state of the art in morphological tagging BIBREF8 . We draw on previous work in defining a conditional distribution over taggings ${t}$ for a sentence ${w}$ of length $|{w}| = N$ as ", "$$p_{{\\theta }}({{t}} \\mid {{w}}) = \\prod _{i=1}^N p_{{\\theta }}(t_i \\mid {{w}}), $$ (Eq. 12) ", "which may be seen as a $0^\\text{th}$ order conditional random field (CRF) BIBREF16 with parameter vector ${{\\theta }}$ . Importantly, this factorization of the distribution $p_{{\\theta }}({{t}} \\mid {{w}})$ also allows for efficient exact decoding and marginal inference in ${\\cal O}(N)$ -time, but at the cost of not admitting any explicit interactions in the output structure, i.e., between adjacent tags. We parameterize the distribution over tags at each time step as ", "$$p_{{\\theta }}(t_i \\mid {{w}}) = \\text{softmax}\\left(W {e}_i + {b}\\right), $$ (Eq. 15) ", "where $W \\in \\mathbb {R}^{|{\\cal T}| \\times n}$ is an embedding matrix, ${b}\\in \\mathbb {R}^{|{\\cal T}|}$ is a bias vector and positional embeddings ${e}_i$ are taken from a concatenation of the output of two long short-term memory recurrent neural networks (LSTMs) BIBREF18 , folded forward and backward, respectively, over a sequence of input vectors. This constitutes a bidirectional LSTM BIBREF19 . We define the positional embedding vector as follows ", "$${e}_i = \\left[{\\text{LSTM}}({v}_{1:i});\n{\\text{LSTM}}({v}_{i+1:N})\\right], $$ (Eq. 17) ", "where each ${v}_i \\in \\mathbb {R}^n$ is, itself, a word embedding. Note that the function $\\text{LSTM}$ returns the last final hidden state vector of the network. This architecture is the context bidirectional recurrent neural network of plank-sogaard-goldberg:2016:P16-2. 
Finally, we derive each word embedding vector ${v}_i$ from a character-level bidirectional LSTM embedder. Namely, we define each word embedding as the concatenation ", "$${v}_i = &\\left[ {\\text{LSTM}}\\left(\\langle c_{i_1}, \\ldots ,\nc_{i_{M_i}}\\rangle \\right); \\right. \\\\\n&\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\left. {\\text{LSTM}} \\left(\\langle c_{i_{M_i}}, \\ldots , c_{i_1}\\rangle \\right) \\right]. \\nonumber $$ (Eq. 18) ", " In other words, we run a bidirectional LSTM over the character stream. This bidirectional LSTM is the sequence bidirectional recurrent neural network of plank-sogaard-goldberg:2016:P16-2. Note a concatenation of the sequence of character symbols $\\langle c_{i_1}, \\ldots , c_{i_{M_i}} \\rangle $ results in the word string $w_i$ . Each of the $M_i$ characters $c_{i_k}$ is a member of the set $\\Sigma $ . We take $\\Sigma $ to be the union of sets of characters in the languages considered.", "We direct the reader to heigold2017 for a more in-depth discussion of this and various additional architectures for the computation of ${v}_i$ ; the architecture we have presented in eq:embedder-v is competitive with the best performing setting in Heigold et al.'s study." ], [ "Cross-lingual morphological tagging may be formulated as a multi-task learning problem. We seek to learn a set of shared character embeddings for taggers in both languages together through optimization of a joint loss function that combines the high-resource tagger and the low-resource one. The first loss function we consider is the following: ", "$${\\cal L}_{\\textit {multi}}({\\theta }) = -\\!\\!\\!\\sum _{({t}, {w}) \\in {\\cal D}_s} \\!\\!\\!\\! \\log &\\, p_{{\\theta }} ({t}\\mid {w}, \\ell _s ) \\\\[-5]\n\\nonumber & -\\!\\!\\!\\!\\sum _{({t}, {w}) \\in {\\cal D}_t} \\!\\!\n\\log p_{{\\theta }}\\left({t}\\mid {w}, \\ell _t \\right).$$ (Eq. 20) ", " Crucially, our cross-lingual objective forces both taggers to share part of the parameter vector ${\\theta }$ , which allows it to represent morphological regularities between the two languages in a common embedding space and, thus, enables transfer of knowledge. This is no different from monolingual multi-task settings, e.g., jointly training a chunker and a tagger for the transfer of syntactic information BIBREF9 . We point out that, in contrast to our approach, almost all multi-task transfer learning, e.g., for dependency parsing BIBREF20 , has shared word-level embeddings rather than character-level embeddings. See sec:related-work for a more complete discussion.", "We consider two parameterizations of this distribution $p_{{\\theta }}(t_i\n\\mid {w}, \\ell )$ . First, we modify the initial character-level LSTM embedding such that it also encodes the identity of the language. Second, we modify the softmax layer, creating a language-specific softmax.", "Our first architecture has one softmax, as in eq:tagger, over all morphological tags in ${\\cal T}$ (shared among all the languages). To allow the architecture to encode morphological features specific to one language, e.g., the third person present plural ending in Spanish is -an, but -ão in Portuguese, we modify the creation of the character-level embeddings. Specifically, we augment the character alphabet $\\Sigma $ with a distinguished symbol that indicates the language: $\\text{{\\tt id}}_\\ell $ . 
We then pre- and postpend this symbol to the character stream for every word before feeding the characters into the bidirectional LSTM Thus, we arrive at the new language-specific word embeddings, ", "$${v}^{\\ell }_i = &\\left[ {\\text{LSTM}}\\left(\\langle \\text{{\\tt id}}_\\ell , c_{i_1}, \\ldots ,\nc_{i_{M_i}}, \\text{{\\tt id}}_\\ell \\rangle \\right); \\right. \\\\\n&\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\left. {\\text{LSTM}} \\left(\\langle \\text{{\\tt id}}_\\ell , c_{i_{M_i}}, \\ldots , c_{i_1}, \\text{{\\tt id}}_\\ell \\rangle \\right) \\right]. \\nonumber $$ (Eq. 22) ", " This model creates a language-specific embedding vector ${v}_i$ , but the individual embeddings for a given character are shared among the languages jointly trained on. The remainder of the architecture is held constant.", "Next, inspired by the architecture of heigold2013multilingual, we consider a language-specific softmax layer, i.e., we define a new output layer for every language: ", "$$p_{{\\theta }}\\left(t_i \\mid {w}, \\ell \\right) = \\text{softmax}\\left(W_{\\ell } {e}_i + {b}_{\\ell }\\right),$$ (Eq. 24) ", "where $W_{\\ell } \\in \\mathbb {R}^{|{\\cal T}| \\times n}$ and ${b}_{\\ell } \\in \\mathbb {R}^{|{\\cal T}|}$ are now language-specific. In this architecture, the embeddings ${e}_i$ are the same for all languages—the model has to learn language-specific behavior exclusively through the output softmax of the tagging LSTM.", "The third model we exhibit is a joint architecture for tagging and language identification. We consider the following loss function: ", "$${\\cal L}_{\\textit {joint}} ({\\theta }) = -\\!\\!\\!\\sum _{({t}, {w}) \\in {\\cal D}_s} \\!\\!\\! \\log \\, & p_{{\\theta }}(\\ell _s, {t}\\mid {w}) \\\\[-5] \\nonumber &-\\!\\sum _{({t}, {w}) \\in {\\cal D}_t} \\!\\!\\!\\!\\! \\log p_{{\\theta }}\\left(\\ell _t, {t}\\mid {w}\\right),$$ (Eq. 26) ", " where we factor the joint distribution as ", "$$p_{{\\theta }}\\left(\\ell , {t}\\mid {w}\\right) &= p_{{\\theta }}\\left(\\ell \\mid {w}\\right) \\cdot p_{{\\theta }}\\left({t}\\mid {w}, \\ell \\right).$$ (Eq. 27) ", " Just as before, we define $p_{{\\theta }}\\left({t}\\mid {w}, \\ell \\right)$ above as in eq:lang-specific and we define ", "$$p_{{\\theta }}(\\ell \\mid {w}) = \\text{softmax}\\left(U\\tanh (V{e}_i)\\right),$$ (Eq. 28) ", "which is a multi-layer perceptron with a binary softmax (over the two languages) as an output layer; we have added the additional parameters $V \\in \\mathbb {R}^{2 \\times n}$ and $U \\in \\mathbb {R}^{2 \\times 2}$ . In the case of multi-source transfer, this is a softmax over the set of languages.", "The first two architectures discussed in par:arch1 represent two possibilities for a multi-task objective, where we condition on the language of the sentence. The first integrates this knowledge at a lower level and the second at a higher level. The third architecture discussed in sec:joint-arch takes a different tack—rather than conditioning on the language, it predicts it. The joint model offers one interesting advantage over the two architectures proposed. Namely, it allows us to perform a morphological analysis on a sentence where the language is unknown. 
This effectively removes the need for an early step in the NLP pipeline, where language identification is performed, and is useful in conditions where the language to be tagged may not be known a priori, e.g., when tagging social media data.", "While there are certainly more complex architectures one could engineer for the task, we believe we have found a relatively diverse sampling, enabling an interesting experimental comparison. Indeed, it is an important empirical question which architectures are most appropriate for transfer learning. Since transfer learning affords the opportunity to reduce the sample complexity of the “data-hungry” neural networks that currently dominate NLP research, finding a good solution for cross-lingual transfer in state-of-the-art neural models will likely be a boon for low-resource NLP in general." ], [ "Empirically, we ask three questions of our architectures. i) How well can we transfer morphological tagging models from high-resource languages to low-resource languages in each architecture? (Does one of the three outperform the others?) ii) How much annotated data in the low-resource language do we need? iii) How closely related do the languages need to be to get good transfer?" ], [ "We experiment with the language families: Romance (Indo-European), Northern Germanic (Indo-European), Slavic (Indo-European) and Uralic. In the Romance sub-grouping of the wider Indo-European family, we experiment on Catalan (ca), French (fr), Italian (it), Portuguese (pt), Romanian (ro) and Spanish (es). In the Northern Germanic family, we experiment on Danish (da), Norwegian (no) and Swedish (sv). In the Slavic family, we experiment on Bulgarian (bg), Czech (cs), Polish (pl), Russian (ru), Slovak (sk) and Ukrainian (uk). Finally, in the Uralic family we experiment on Estonian (et), Finnish (fi) and Hungarian (hu)." ], [ "We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\\text{th}$ and $6^\\text{th}$ columns of the file format) BIBREF13 . We list the size of the training, development and test splits of the UD treebanks we used in tab:lang-size. Also, we list the number of unique morphological tags in each language in tab:num-tags, which serves as an approximate measure of the morphological complexity each language exhibits. Crucially, the data are annotated in a cross-linguistically consistent manner, such that words in the different languages that have the same syntacto-semantic function have the same bundle of tags (see sec:morpho-tagging for a discussion). Potentially, further gains would be possible by using a more universal scheme, e.g., the UniMorph scheme." ], [ "We consider two baselines in our work. First, we consider the MarMoT tagger BIBREF17 , which is currently the best performing non-neural model. The source code for MarMoT is freely available online, which allows us to perform fully controlled experiments with this model. Second, we consider the alignment-based projection approach of buys-botha:2016:P16-1. We discuss each of the two baselines in turn.", "The MarMoT tagger is the leading non-neural approach to morphological tagging. This baseline is important since non-neural, feature-based approaches have been found empirically to be more efficient, in the sense that their learning curves tend to be steeper. Thus, in the low-resource setting we would be remiss not to consider a feature-based approach.
Note that this is not a transfer approach, but rather only uses the low-resource data.", "The projection approach of buys-botha:2016:P16-1 provides an alternative method for transfer learning. The idea is to construct pseudo-annotations for bitext given an alignment BIBREF21 . Then, one trains a standard tagger using the projected annotations. The specific tagger employed is the wsabie model of DBLP:conf/ijcai/WestonBU11, which—like our approach—is a $0^\\text{th}$ -order discriminative neural model. In contrast to ours, however, their network is shallow. We compare the two methods in more detail in sec:related-work.", "Additionally, we perform a thorough study of the neural transfer learner, considering all three architectures. A primary goal of our experiments is to determine which of our three proposed neural transfer techniques is superior. Even though our experiments focus on morphological tagging, these architectures are more general in that they may be easily applied to other tasks, e.g., parsing or machine translation. We additionally explore the viability of multi-source transfer, i.e., the case where we have multiple source languages. All of our architectures generalize to the multi-source case without any complications." ], [ "We train our models under the following conditions.", "We evaluate using average per token accuracy, as is standard for both POS tagging and morphological tagging, and per feature $F_1$ as employed in buys-botha:2016:P16-1. The per feature $F_1$ calculates a key-specific score $F^k_1$ for each key in the target language's tags by asking whether the key-attribute pair $k_i$ $=$ $v_i$ is in the predicted tag. Then, the key-specific $F^k_1$ values are averaged equally. Note that $F_1$ is a more flexible metric as it gives partial credit for getting some of the attributes in the bundle correct, whereas accuracy does not.", "Our networks are four layers deep (two LSTM layers for the character embedder, i.e., to compute ${v_i}$ , and two LSTM layers for the tagger, i.e., to compute ${e_i}$ ) and we use an embedding size of 128 for the character input vectors and hidden layers of 256 nodes in all other cases. All networks are trained with the stochastic gradient method RMSProp BIBREF22 , with a fixed initial learning rate and a learning rate decay that is adjusted for the other languages according to the amount of training data. The batch size is always 16. Furthermore, we use dropout BIBREF23 . The dropout probability is set to 0.2. We used Torch 7 BIBREF24 to configure the computation graphs implementing the network architectures." ], [ "We report our results in two tables. First, we report a detailed cross-lingual evaluation in tab:results. Second, we report a comparison against two baselines in tab:baseline-table1 (accuracy) and tab:baseline-table2 ( $F_1$ ). We see two general trends in the data. First, we find that genetically closer languages yield better source languages. Second, we find that the multi-softmax architecture is the best in terms of transfer ability, as evinced by the results in tab:results. We find a wider gap between our model and the baselines under accuracy than under $F_1$ . We attribute this to the fact that $F_1$ is a softer metric in that it assigns credit to partially correct guesses."
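The per-feature $F_1$ described in the experimental details above gives partial credit for partially correct tag bundles. The following is a minimal Python sketch of one way to compute it, assuming each tag is represented as a set of key=value strings; the exact bookkeeping of the original evaluation scripts is not reproduced in the paper text, so treat this as an illustration rather than the official metric code.

```python
from collections import defaultdict

def per_feature_f1(gold_tags, pred_tags):
    """Macro-averaged F1 over morphological key=value pairs.

    gold_tags / pred_tags: one tag per token, each tag a set of "key=value"
    strings, e.g. {"pos=V", "per=1", "num=sg", "tns=pres"}.
    """
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold, pred in zip(gold_tags, pred_tags):
        for pair in pred:
            key = pair.split("=", 1)[0]
            if pair in gold:
                tp[key] += 1          # predicted key=value also appears in the gold tag
            else:
                fp[key] += 1
        for pair in gold:
            key = pair.split("=", 1)[0]
            if pair not in pred:
                fn[key] += 1
    f1s = []
    for key in set(tp) | set(fp) | set(fn):
        p = tp[key] / (tp[key] + fp[key]) if tp[key] + fp[key] else 0.0
        r = tp[key] / (tp[key] + fn[key]) if tp[key] + fn[key] else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0

# Two-token toy example: the first prediction gets pos and per right but num wrong.
gold = [{"pos=V", "per=1", "num=sg"}, {"pos=N", "num=pl"}]
pred = [{"pos=V", "per=1", "num=pl"}, {"pos=N", "num=pl"}]
print(round(per_feature_f1(gold, pred), 3))   # 0.833
```

On the toy example, the first token's bundle is only partially correct: exact-match accuracy counts that token as wrong, while the per-feature score still credits the pos and per keys.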
], [ "We divide the discussion of related work topically into three parts for ease of intellectual digestion." ], [ "Most cross-lingual work in NLP—focusing on morphology or otherwise—has concentrated on indirect supervision, rather than transfer learning. The goal in such a regime is to provide noisy labels for training the tagger in the low-resource language through annotations projected over aligned bitext with a high-resource language. This method of projection was first introduced by DBLP:conf/naacl/YarowskyN01 for the projection of POS annotation. While follow-up work BIBREF26 , BIBREF27 , BIBREF28 has continually demonstrated the efficacy of projecting simple part-of-speech annotations, buys-botha:2016:P16-1 were the first to show the use of bitext-based projection for the training of a morphological tagger for low-resource languages.", "As we also discuss the training of a morphological tagger, our work is most closely related to buys-botha:2016:P16-1 in terms of the task itself. We contrast the approaches. The main difference lies therein, that our approach is not projection-based and, thus, does not require the construction of a bilingual lexicon for projection based on bitext. Rather, our method jointly learns multiple taggers and forces them to share features—a true transfer learning scenario. In contrast to projection-based methods, our procedure always requires a minimal amount of annotated data in the low-resource target language—in practice, however, this distinction is non-critical as projection-based methods without a small mount of seed target language data perform poorly BIBREF29 ." ], [ "Our work also follows a recent trend in NLP, whereby traditional word-level neural representations are being replaced by character-level representations for a myriad tasks, e.g., POS tagging DBLP:conf/icml/SantosZ14, parsing BIBREF30 , language modeling BIBREF31 , sentiment analysis BIBREF32 as well as the tagger of heigold2017, whose work we build upon. Our work is also related to recent work on character-level morphological generation using neural architectures BIBREF33 , BIBREF34 ." ], [ "In terms of methodology, however, our proposal bears similarity to recent work in speech and machine translation–we discuss each in turn. In speech recognition, heigold2013multilingual train a cross-lingual neural acoustic model on five Romance languages. The architecture bears similarity to our multi-language softmax approach. Dependency parsing benefits from cross-lingual learning in a similar fashion BIBREF35 , BIBREF20 .", "In neural machine translation BIBREF36 , BIBREF37 , recent work BIBREF38 , BIBREF39 , BIBREF40 has explored the possibility of jointly train translation models for a wide variety of languages. Our work addresses a different task, but the undergirding philosophical motivation is similar, i.e., attack low-resource NLP through multi-task transfer learning. kann-cotterell-schutze:2017:ACL2017 offer a similar method for cross-lingual transfer in morphological inflection generation." ], [ "We have presented three character-level recurrent neural network architectures for multi-task cross-lingual transfer of morphological taggers. We provided an empirical evaluation of the technique on 18 languages from four different language families, showing wide-spread applicability of the method. We found that the transfer of morphological taggers is an eminently viable endeavor among related language and, in general, the closer the languages, the easier the transfer of morphology becomes. 
Our technique outperforms two strong baselines proposed in previous work. Moreover, we define standard low-resource training splits in UD for future research in low-resource morphological tagging. Future work should focus on extending the neural morphological tagger to a joint lemmatizer BIBREF41 and evaluating its functionality in the low-resource setting." ], [ "RC acknowledges the support of an NDSEG fellowship. Also, we would like to thank Jan Buys and Jan Botha, who helped us compare to the numbers reported in their paper. We would also like to thank Hinrich Schütze for reading an early draft and Tim Vieira and Jason Naradowsky for helpful initial discussions." ] ] }
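To make the shared-representation idea concrete, the following PyTorch sketch reconstructs the character-level bi-LSTM word embedder shared across languages together with the language-specific softmax variant. This is a schematic illustration, not the authors' Torch 7 implementation; the caller is assumed to wrap each word's character ids in the language-id symbol before invoking the embedder, and the dimensions follow the experimental details (128-dimensional character embeddings, 256-node hidden layers).

```python
import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    """Character-level bi-LSTM shared across languages: maps a word, given as a
    sequence of character ids (optionally wrapped in a language-id symbol), to
    a word vector v_i built from the final forward and backward states."""
    def __init__(self, n_chars, char_dim=128, hidden=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)  # shared character embeddings
        self.lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, char_ids):                     # char_ids: (n_words, max_chars)
        _, (h_n, _) = self.lstm(self.char_emb(char_ids))
        return torch.cat([h_n[0], h_n[1]], dim=-1)   # (n_words, 2 * hidden)

class MultiLangTagger(nn.Module):
    """Sentence-level bi-LSTM tagger with one output layer per language
    (the language-specific softmax architecture)."""
    def __init__(self, n_chars, n_tags_per_lang, hidden=256):
        super().__init__()
        self.embedder = CharWordEmbedder(n_chars, hidden=hidden)
        self.context = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.out = nn.ModuleDict({lang: nn.Linear(2 * hidden, n_tags)   # W_l, b_l
                                  for lang, n_tags in n_tags_per_lang.items()})

    def forward(self, char_ids, lang):               # one sentence at a time
        v = self.embedder(char_ids).unsqueeze(0)     # (1, n_words, 2 * hidden)
        e, _ = self.context(v)                       # positional embeddings e_i
        return self.out[lang](e.squeeze(0))          # (n_words, |T|) tag logits

# Toy usage: a 4-word sentence, each word padded to 6 characters.
model = MultiLangTagger(n_chars=100, n_tags_per_lang={"es": 250, "ca": 180})
chars = torch.randint(0, 100, (4, 6))
print(model(chars, "ca").shape)   # torch.Size([4, 180])
```

Training then sums the per-token cross-entropy over the high-resource and the low-resource corpora, which corresponds to the joint multi-task loss with shared character embeddings described above.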
{ "question": [ "How are character representations from various languages joint?", "On which dataset is the experiment conducted?" ], "question_id": [ "a43c400ae37a8705ff2effb4828f4b0b177a74c4", "4056ee2fd7a0a0f444275e627bb881134a1c2a10" ], "nlp_background": [ "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "shared character embeddings for taggers in both languages together through optimization of a joint loss function" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our formulation of transfer learning builds on work in multi-task learning BIBREF15 , BIBREF9 . We treat each individual language as a task and train a joint model for all the tasks. We first discuss the current state of the art in morphological tagging: a character-level recurrent neural network. After that, we explore three augmentations to the architecture that allow for the transfer learning scenario. All of our proposals force the embedding of the characters for both the source and the target language to share the same vector space, but involve different mechanisms, by which the model may learn language-specific features.", "Cross-lingual morphological tagging may be formulated as a multi-task learning problem. We seek to learn a set of shared character embeddings for taggers in both languages together through optimization of a joint loss function that combines the high-resource tagger and the low-resource one. The first loss function we consider is the following:" ], "highlighted_evidence": [ "We treat each individual language as a task and train a joint model for all the tasks.", "We seek to learn a set of shared character embeddings for taggers in both languages together through optimization of a joint loss function that combines the high-resource tagger and the low-resource one." ] } ], "annotation_id": [ "0648da60ed78880e2d29b141c706cf0428136c86" ], "worker_id": [ "2910ac50801742c7b608b6289a49dffb14737474" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\\text{th}$ and $6^\\text{th}$ columns of the file format) BIBREF13 . " ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\\text{th}$ and $6^\\text{th}$ columns of the file format) BIBREF13 . We list the size of the training, development and test splits of the UD treebanks we used in tab:lang-size. Also, we list the number of unique morphological tags in each language in tab:num-tags, which serves as an approximate measure of the morphological complexity each language exhibits. Crucially, the data are annotated in a cross-linguistically consistent manner, such that words in the different languages that have the same syntacto-semantic function have the same bundle of tags (see sec:morpho-tagging for a discussion). Potentially, further gains would be possible by using a more universal scheme, e.g., the UniMorph scheme." 
], "highlighted_evidence": [ "We use the morphological tagging datasets provided by the Universal Dependencies (UD) treebanks (the concatenation of the $4^\\text{th}$ and $6^\\text{th}$ columns of the file format) BIBREF13 ." ] } ], "annotation_id": [ "c4d6300a72c04524b18132c85146f9752823563b" ], "worker_id": [ "01cb6148c645822f9a870d3ac20d496c05b6b217" ] } ] }
{ "caption": [ "Figure 1: Example of a morphologically tagged sentence in Russian using the annotation scheme provided in the UD dataset.", "Table 1: Partial inflection table for the Spanish verb soñar", "Figure 2: We depict four subarchitectures used in the models we develop in this work. Combining (a) with the character embeddings in (c) gives the vanilla morphological tagging architecture of Heigold et al. (2017). Combining (a) with (d) yields the language-universal softmax architecture and (b) and (c) yields our joint model for language identification and tagging.", "Table 3: Number of unique morphological tags for each of the experimental languages (organized by language family).", "Table 2: Number of tokens in each of the train, development and test splits (organized by language family).", "Table 4: Results for transfer learning with our joint model. The tables highlight that the best source languages are often genetically and typologically closest. Also, we see that multi-source often helps, albeit more often in the |Dt| = 100 case.", "Table 5: Comparison of our approach to various baselines for low-resource tagging under token-level accuracy. We compare on only those languages in Buys and Botha (2016). Note that tag-level accuracy was not reported in the original B&B paper, but was acquired through personal communication with the first author. All architectures presented in this work are used in their multi-source setting. The B&B and MARMOT models are single-source.", "Figure 3: Learning Curve for Spanish and Catalan comparing our monolingual model, our joint model and two MARMOT models. The first MARMOT model is identical to those trained in the rest of the paper and the second attempts a multi-task approach, which failed so no further experimentation was performed with this model.", "Table 6: Comparison of our approach to various baselines for low-resource tagging under F1 to allow for a more complete comparison to the model of Buys and Botha (2016). All architectures presented in this work are used in their multi-source setting. The B&B and MARMOT models are single-source. We only compare on those languages used in B&B." ], "file": [ "2-Figure1-1.png", "2-Table1-1.png", "4-Figure2-1.png", "5-Table3-1.png", "5-Table2-1.png", "7-Table4-1.png", "8-Table5-1.png", "8-Figure3-1.png", "9-Table6-1.png" ] }
1911.00069
Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping
Relation extraction (RE) seeks to detect and classify semantic relationships between entities, which provides useful information for many NLP applications. Since the state-of-the-art RE models require large amounts of manually annotated data and language-specific resources to achieve high accuracy, it is very challenging to transfer an RE model of a resource-rich language to a resource-poor language. In this paper, we propose a new approach for cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language, so that a well-trained source-language neural network RE model can be directly applied to the target language. Experiment results show that the proposed approach achieves very good performance for a number of target languages on both in-house and open datasets, using a small bilingual dictionary with only 1K word pairs.
{ "section_name": [ "Introduction", "Overview of the Approach", "Cross-Lingual Word Embeddings", "Cross-Lingual Word Embeddings ::: Monolingual Word Embeddings", "Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping", "Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Length Normalization and Orthogonal Transformation", "Cross-Lingual Word Embeddings ::: Bilingual Word Embedding Mapping ::: Semi-Supervised and Unsupervised Mappings", "Neural Network RE Models", "Neural Network RE Models ::: Embedding Layer", "Neural Network RE Models ::: Context Layer", "Neural Network RE Models ::: Context Layer ::: Bi-LSTM Context Layer", "Neural Network RE Models ::: Context Layer ::: CNN Context Layer", "Neural Network RE Models ::: Summarization Layer", "Neural Network RE Models ::: Output Layer", "Neural Network RE Models ::: Cross-Lingual RE Model Transfer", "Experiments", "Experiments ::: Datasets", "Experiments ::: Source (English) RE Model Performance", "Experiments ::: Cross-Lingual RE Performance", "Experiments ::: Cross-Lingual RE Performance ::: Dictionary Size", "Experiments ::: Cross-Lingual RE Performance ::: Comparison of Different Mappings", "Experiments ::: Cross-Lingual RE Performance ::: Performance on Test Data", "Experiments ::: Cross-Lingual RE Performance ::: Discussion", "Related Work", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?\"", "Traditional RE models (e.g., BIBREF0, BIBREF1, BIBREF2) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., BIBREF3, BIBREF4, BIBREF5, BIBREF6) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction.", "All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, annotating RE data by human is expensive and time-consuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model of a resource-rich language to a resource-poor language.", "There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice.", "In this paper, we make the following contributions to cross-lingual RE:", "We propose a new approach for direct cross-lingual RE model transfer based on bilingual word embedding mapping. 
It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language.", "We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. The English RE model achieves the-state-of-the-art performance without using language-specific resources.", "We conduct extensive experiments which show that the proposed approach achieves very good performance (up to $79\\%$ of the accuracy of the supervised target-language RE model) for a number of target languages on both in-house and the ACE05 datasets BIBREF11, using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems.", "We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English). In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7." ], [ "We summarize the main steps of our neural cross-lingual RE model transfer approach as follows.", "Build word embeddings for the source language and the target language separately using monolingual data.", "Learn a linear mapping that projects the target-language word embeddings into the source-language embedding space using a small bilingual dictionary.", "Build a neural network source-language RE model that uses word embeddings and generic language-independent features as the input.", "For a target-language sentence and any two entities in it, project the word embeddings of the words in the sentence to the source-language word embeddings using the linear mapping, and then apply the source-language RE model on the projected word embeddings to classify the relationship between the two entities. An example is shown in Figure FIGREF4, where the target language is Portuguese and the source language is English.", "We will describe each component of our approach in the subsequent sections." ], [ "In recent years, vector representations of words, known as word embeddings, become ubiquitous for many NLP applications BIBREF12, BIBREF13, BIBREF14.", "A monolingual word embedding model maps words in the vocabulary $\\mathcal {V}$ of a language to real-valued vectors in $\\mathbb {R}^{d\\times 1}$. The dimension of the vector space $d$ is normally much smaller than the size of the vocabulary $V=|\\mathcal {V}|$ for efficient representation. It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data.", "Cross-lingual word embedding models try to build word embeddings across multiple languages BIBREF15, BIBREF16. One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary BIBREF17, BIBREF18. 
Another approach builds multilingual word embeddings in a shared vector space simultaneously, by generating mixed language corpora using aligned sentences BIBREF19, BIBREF20.", "In this paper, we adopt the technique in BIBREF17 because it only requires a small bilingual dictionary of aligned word pairs, and does not require parallel corpora of aligned sentences which could be more difficult to obtain." ], [ "To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model BIBREF13.", "The standard CBOW model has two matrices, the input word matrix $\\tilde{\\mathbf {X}} \\in \\mathbb {R}^{d\\times V}$ and the output word matrix $\\mathbf {X} \\in \\mathbb {R}^{d\\times V}$. For the $i$th word $w_i$ in $\\mathcal {V}$, let $\\mathbf {e}(w_i) \\in \\mathbb {R}^{V \\times 1}$ be a one-hot vector with 1 at index $i$ and 0s at other indexes, so that $\\tilde{\\mathbf {x}}_i = \\tilde{\\mathbf {X}}\\mathbf {e}(w_i)$ (the $i$th column of $\\tilde{\\mathbf {X}}$) is the input vector representation of word $w_i$, and $\\mathbf {x}_i = \\mathbf {X}\\mathbf {e}(w_i)$ (the $i$th column of $\\mathbf {X}$) is the output vector representation (i.e., word embedding) of word $w_i$.", "Given a sequence of training words $w_1, w_2, ..., w_N$, the CBOW model seeks to predict a target word $w_t$ using a window of $2c$ context words surrounding $w_t$, by maximizing the following objective function:", "The conditional probability is calculated using a softmax function:", "where $\\mathbf {x}_t=\\mathbf {X}\\mathbf {e}(w_t)$ is the output vector representation of word $w_t$, and", "is the sum of the input vector representations of the context words.", "In our variant of the CBOW model, we use a separate input word matrix $\\tilde{\\mathbf {X}}_j$ for a context word at position $j, -c \\le j \\le c, j\\ne 0$. In addition, we employ weights that decay with the distances of the context words to the target word. Under these modifications, we have", "We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model BIBREF21." ], [ "BIBREF17 observed that word embeddings of different languages often have similar geometric arrangements, and suggested to learn a linear mapping between the vector spaces.", "Let $\\mathcal {D}$ be a bilingual dictionary with aligned word pairs ($w_i, v_i)_{i=1,...,D}$ between a source language $s$ and a target language $t$, where $w_i$ is a source-language word and $v_i$ is the translation of $w_i$ in the target language. Let $\\mathbf {x}_i \\in \\mathbb {R}^{d \\times 1}$ be the word embedding of the source-language word $w_i$, $\\mathbf {y}_i \\in \\mathbb {R}^{d \\times 1}$ be the word embedding of the target-language word $v_i$.", "We find a linear mapping (matrix) $\\mathbf {M}_{t\\rightarrow s}$ such that $\\mathbf {M}_{t\\rightarrow s}\\mathbf {y}_i$ approximates $\\mathbf {x}_i$, by solving the following least squares problem using the dictionary as the training set:", "Using $\\mathbf {M}_{t\\rightarrow s}$, for any target-language word $v$ with word embedding $\\mathbf {y}$, we can project it into the source-language embedding space as $\\mathbf {M}_{t\\rightarrow s}\\mathbf {y}$." 
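The linear mapping just described is an ordinary least-squares fit over the dictionary pairs. Below is a minimal NumPy sketch, assuming the monolingual embeddings have already been trained and that the dictionary embeddings are stacked as columns; the matrix and function names are illustrative, not taken from the paper.

```python
import numpy as np

def learn_mapping(X_src, Y_tgt):
    """Learn M such that M @ y_i approximates x_i in the least-squares sense.

    X_src: (d, D) source-language embeddings of the dictionary words (columns x_i).
    Y_tgt: (d, D) target-language embeddings of their translations (columns y_i).
    """
    # Solve min_M ||M Y - X||_F^2 by transposing to the standard lstsq form Y^T M^T = X^T.
    M_T, *_ = np.linalg.lstsq(Y_tgt.T, X_src.T, rcond=None)
    return M_T.T                                  # (d, d)

def project(M, Y):
    """Project target-language embeddings (columns of Y) into the source space."""
    return M @ Y

# Toy usage with random 300-dimensional embeddings and a 1K-pair dictionary.
d, D = 300, 1000
X = np.random.randn(d, D)
Y = np.random.randn(d, D)
M = learn_mapping(X, Y)
print(project(M, Y[:, :5]).shape)   # (300, 5)
```

Because the objective is unconstrained, this corresponds to the regular mapping that is later compared against the orthogonal variant.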
], [ "To ensure that all the training instances in the dictionary $\\mathcal {D}$ contribute equally to the optimization objective in (DISPLAY_FORM14) and to preserve vector norms after projection, we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in BIBREF22, BIBREF23, BIBREF24.", "First, we normalize the source-language and target-language word embeddings to be unit vectors: $\\mathbf {x}^{\\prime }=\\frac{\\mathbf {x}}{||\\mathbf {x}||}$ for each source-language word embedding $\\mathbf {x}$, and $\\mathbf {y}^{\\prime }= \\frac{\\mathbf {y}}{||\\mathbf {y}||}$ for each target-language word embedding $\\mathbf {y}$.", "Next, we add an orthogonality constraint to (DISPLAY_FORM14) such that $\\mathbf {M}$ is an orthogonal matrix, i.e., $\\mathbf {M}^\\mathrm {T}\\mathbf {M} = \\mathbf {I}$ where $\\mathbf {I}$ denotes the identity matrix:", "$\\mathbf {M}^{O} _{t\\rightarrow s}$ can be computed using singular-value decomposition (SVD)." ], [ "The mapping learned in (DISPLAY_FORM14) or (DISPLAY_FORM16) requires a seed dictionary. To relax this requirement, BIBREF25 proposed a self-learning procedure that can be combined with a dictionary-based mapping technique. Starting with a small seed dictionary, the procedure iteratively 1) learns a mapping using the current dictionary; and 2) computes a new dictionary using the learned mapping.", "BIBREF26 proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary. The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages, and then applies a robust self-learning procedure to iteratively improve the mapping. Another unsupervised method based on adversarial training was proposed in BIBREF27.", "We compare the performance of different mappings for cross-lingual RE model transfer in Section SECREF45." ], [ "For any two entities in a sentence, an RE model determines whether these two entities have a relationship, and if yes, classifies the relationship into one of the pre-defined relation types. We focus on neural network RE models since these models achieve the state-of-the-art performance for relation extraction. Most importantly, neural network RE models use word embeddings as the input, which are amenable to cross-lingual model transfer via cross-lingual word embeddings. In this paper, we use English as the source language.", "Our neural network architecture has four layers. The first layer is the embedding layer which maps input words in a sentence to word embeddings. The second layer is a context layer which transforms the word embeddings to context-aware vector representations using a recurrent or convolutional neural network layer. The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling. The final layer is the output layer which returns the classification label for the relation type." ], [ "For an English sentence with $n$ words $\\mathbf {s}=(w_1,w_2,...,w_n)$, the embedding layer maps each word $w_t$ to a real-valued vector (word embedding) $\\mathbf {x}_t\\in \\mathbb {R}^{d \\times 1}$ using the English word embedding model (Section SECREF9). In addition, for each entity $m$ in the sentence, the embedding layer maps its entity type to a real-valued vector (entity label embedding) $\\mathbf {l}_m \\in \\mathbb {R}^{d_m \\times 1}$ (initialized randomly). In our experiments we use $d=300$ and $d_m = 50$." 
], [ "Given the word embeddings $\\mathbf {x}_t$'s of the words in the sentence, the context layer tries to build a sentence-context-aware vector representation for each word. We consider two types of neural network layers that aim to achieve this." ], [ "The first type of context layer is based on Long Short-Term Memory (LSTM) type recurrent neural networks BIBREF28, BIBREF29. Recurrent neural networks (RNNs) are a class of neural networks that operate on sequential data such as sequences of words. LSTM networks are a type of RNNs that have been invented to better capture long-range dependencies in sequential data.", "We pass the word embeddings $\\mathbf {x}_t$'s to a forward and a backward LSTM layer. A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks. The memory block at the $t$-th word in the forward LSTM layer contains a memory cell $\\overrightarrow{\\mathbf {c}}_t$ and three gates: an input gate $\\overrightarrow{\\mathbf {i}}_t$, a forget gate $\\overrightarrow{\\mathbf {f}}_t$ and an output gate $\\overrightarrow{\\mathbf {o}}_t$ ($\\overrightarrow{\\cdot }$ indicates the forward direction), which are updated as follows:", "where $\\sigma $ is the element-wise sigmoid function and $\\odot $ is the element-wise multiplication.", "The hidden state vector $\\overrightarrow{\\mathbf {h}}_t$ in the forward LSTM layer incorporates information from the left (past) tokens of $w_t$ in the sentence. Similarly, we can compute the hidden state vector $\\overleftarrow{\\mathbf {h}}_t$ in the backward LSTM layer, which incorporates information from the right (future) tokens of $w_t$ in the sentence. The concatenation of the two vectors $\\mathbf {h}_t = [\\overrightarrow{\\mathbf {h}}_t, \\overleftarrow{\\mathbf {h}}_t]$ is a good representation of the word $w_t$ with both left and right contextual information in the sentence." ], [ "The second type of context layer is based on Convolutional Neural Networks (CNNs) BIBREF3, BIBREF4, which applies convolution-like operation on successive windows of size $k$ around each word in the sentence. Let $\\mathbf {z}_t = [\\mathbf {x}_{t-(k-1)/2},...,\\mathbf {x}_{t+(k-1)/2}]$ be the concatenation of $k$ word embeddings around $w_t$. The convolutional layer computes a hidden state vector", "for each word $w_t$, where $\\mathbf {W}$ is a weight matrix and $\\mathbf {b}$ is a bias vector, and $\\tanh (\\cdot )$ is the element-wise hyperbolic tangent function." ], [ "After the context layer, the sentence $(w_1,w_2,...,w_n)$ is represented by $(\\mathbf {h}_1,....,\\mathbf {h}_n)$. Suppose $m_1=(w_{b_1},..,w_{e_1})$ and $m_2=(w_{b_2},..,w_{e_2})$ are two entities in the sentence where $m_1$ is on the left of $m_2$ (i.e., $e_1 < b_2$). 
As different sentences and entities may have various lengths, the summarization layer tries to build a fixed-length vector that best summarizes the representations of the sentence and the two entities for relation type classification.", "We divide the hidden state vectors $\\mathbf {h}_t$'s into 5 groups:", "$G_1=\\lbrace \\mathbf {h}_{1},..,\\mathbf {h}_{b_1-1}\\rbrace $ includes vectors that are left to the first entity $m_1$.", "$G_2=\\lbrace \\mathbf {h}_{b_1},..,\\mathbf {h}_{e_1}\\rbrace $ includes vectors that are in the first entity $m_1$.", "$G_3=\\lbrace \\mathbf {h}_{e_1+1},..,\\mathbf {h}_{b_2-1}\\rbrace $ includes vectors that are between the two entities.", "$G_4=\\lbrace \\mathbf {h}_{b_2},..,\\mathbf {h}_{e_2}\\rbrace $ includes vectors that are in the second entity $m_2$.", "$G_5=\\lbrace \\mathbf {h}_{e_2+1},..,\\mathbf {h}_{n}\\rbrace $ includes vectors that are right to the second entity $m_2$.", "We perform element-wise max pooling among the vectors in each group:", "where $d_h$ is the dimension of the hidden state vectors. Concatenating the $\\mathbf {h}_{G_i}$'s we get a fixed-length vector $\\mathbf {h}_s=[\\mathbf {h}_{G_1},...,\\mathbf {h}_{G_5}]$." ], [ "The output layer receives inputs from the previous layers (the summarization vector $\\mathbf {h}_s$, the entity label embeddings $\\mathbf {l}_{m_1}$ and $\\mathbf {l}_{m_2}$ for the two entities under consideration) and returns a probability distribution over the relation type labels:" ], [ "Given the word embeddings of a sequence of words in a target language $t$, $(\\mathbf {y}_1,...,\\mathbf {y}_n)$, we project them into the English embedding space by applying the linear mapping $\\mathbf {M}_{t\\rightarrow s}$ learned in Section SECREF13: $(\\mathbf {M}_{t\\rightarrow s}\\mathbf {y}_1, \\mathbf {M}_{t\\rightarrow s}\\mathbf {y}_2,...,\\mathbf {M}_{t\\rightarrow s}\\mathbf {y}_n)$. The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings (which are language independent) to perform relationship classification.", "Note that our models do not use language-specific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language. Also our models do not use precise word position features since word positions in sentences can vary a lot across languages." ], [ "In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11." ], [ "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).", "For both datasets, we create a class label “O\" to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest." 
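The summarization layer described above reduces a variable-length sentence and an entity pair to a fixed-length vector by element-wise max pooling over five position-defined groups. The following PyTorch-style sketch illustrates that pooling; the paper does not specify how empty groups (e.g., an entity at the very beginning of the sentence) are handled, so zero vectors are used here as one plausible convention.

```python
import torch

def summarize(h, b1, e1, b2, e2):
    """Five-group element-wise max pooling over hidden states.

    h: (n, d_h) hidden state vectors h_1..h_n of one sentence.
    (b1, e1), (b2, e2): 0-based inclusive spans of the two entities, with e1 < b2.
    Returns a fixed-length (5 * d_h,) summary vector [h_G1; ...; h_G5].
    """
    n, d_h = h.shape
    groups = [h[:b1],            # G1: left of the first entity
              h[b1:e1 + 1],      # G2: inside the first entity
              h[e1 + 1:b2],      # G3: between the two entities
              h[b2:e2 + 1],      # G4: inside the second entity
              h[e2 + 1:]]        # G5: right of the second entity
    pooled = [g.max(dim=0).values if g.shape[0] > 0 else h.new_zeros(d_h)
              for g in groups]
    return torch.cat(pooled)     # (5 * d_h,)

# Usage on a 10-token sentence with entities at tokens 1-2 and 6-7.
h = torch.randn(10, 512)
print(summarize(h, 1, 2, 6, 7).shape)   # torch.Size([2560])
```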
], [ "We build 3 neural network English RE models under the architecture described in Section SECREF4:", "The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer. We call it Pass-Through for short.", "The second neural network RE model has a Bi-LSTM context layer. We call it Bi-LSTM for short.", "The third neural network model has a CNN context layer with a window size 3. We call it CNN for short.", "First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided to 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.", "We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.", "In Table TABREF40 we compare our models with the best models in BIBREF30 and BIBREF6. Our Bi-LSTM model outperforms the best model (single or ensemble) in BIBREF30 and the best single model in BIBREF6, without using any language-specific resources such as dependency parsers.", "While the data split in the previous works was motivated by domain adaptation, the focus of this paper is on cross-lingual model transfer, and hence we apply a random data split as follows. For the source language English and each target language, we randomly select $80\\%$ of the data as the training set, $10\\%$ as the development set, and keep the remaining $10\\%$ as the test set. The sizes of the sets are summarized in Table TABREF41.", "We report the Precision, Recall and $F_1$ score of the 3 neural network English RE models in Table TABREF42. Note that adding an additional context layer with either Bi-LSTM or CNN significantly improves the performance of our English RE model, compared with the simple Pass-Through model. Therefore, we will focus on the Bi-LSTM model and the CNN model in the subsequent experiments." ], [ "We apply the English RE models to the 7 target languages across a variety of language families." ], [ "The bilingual dictionary includes the most frequent target-language words and their translations in English. To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for cross-lingual RE, we first evaluate the performance ($F_1$ score) of our cross-lingual RE approach on the target-language development sets with an increasing dictionary size, as plotted in Figure FIGREF35.", "We found that for most target languages, once the dictionary size reaches 1K, further increasing the dictionary size may not improve the transfer performance. Therefore, we select the dictionary size to be 1K." 
], [ "We compare the performance of cross-lingual RE model transfer under the following bilingual word embedding mappings:", "Regular-1K: the regular mapping learned in (DISPLAY_FORM14) using 1K word pairs;", "Orthogonal-1K: the orthogonal mapping with length normalization learned in (DISPLAY_FORM16) using 1K word pairs (in this case we train the English RE models with the normalized English word embeddings);", "Semi-Supervised-1K: the mapping learned with 1K word pairs and improved by the self-learning method in BIBREF25;", "Unsupervised: the mapping learned by the unsupervised method in BIBREF26.", "The results are summarized in Table TABREF46. The regular mapping outperforms the orthogonal mapping consistently across the target languages. While the orthogonal mapping was shown to work better than the regular mapping for the word translation task BIBREF22, BIBREF23, BIBREF24, our cross-lingual RE approach directly maps target-language word embeddings to the English embedding space without conducting word translations. Moreover, the orthogonal mapping requires length normalization, but we observed that length normalization adversely affects the performance of the English RE models (about 2.0 $F_1$ points drop).", "We apply the vecmap toolkit to obtain the semi-supervised and unsupervised mappings. The unsupervised mapping has the lowest average accuracy over the target languages, but it does not require a seed dictionary. Among all the mappings, the regular mapping achieves the best average accuracy over the target languages using a dictionary with only 1K word pairs, and hence we adopt it for the cross-lingual RE task." ], [ "The cross-lingual RE model transfer results for the in-house test data are summarized in Table TABREF52 and the results for the ACE05 test data are summarized in Table TABREF53, using the regular mapping learned with a bilingual dictionary of size 1K. In the tables, we also provide the performance of the supervised RE model (Bi-LSTM) for each target language, which is trained with a few hundred thousand tokens of manually annotated RE data in the target-language, and may serve as an upper bound for the cross-lingual model transfer performance.", "Among the 2 neural network models, the Bi-LSTM model achieves a better cross-lingual RE performance than the CNN model for 6 out of the 7 target languages. In terms of absolute performance, the Bi-LSTM model achieves over $40.0$ $F_1$ scores for German, Spanish, Portuguese and Chinese. In terms of relative performance, it reaches over $75\\%$ of the accuracy of the supervised target-language RE model for German, Spanish, Italian and Portuguese. While Japanese and Arabic appear to be more difficult to transfer, it still achieves $55\\%$ and $52\\%$ of the accuracy of the supervised Japanese and Arabic RE model, respectively, without using any manually annotated RE data in Japanese/Arabic.", "We apply model ensemble to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This Ensemble approach improves the single model by 0.6-1.9 $F_1$ points, except for Arabic." 
], [ "Since our approach projects the target-language word embeddings to the source-language embedding space preserving the word order, it is expected to work better for a target language that has more similar word order as the source language. This has been verified by our experiments. The source language, English, belongs to the SVO (Subject, Verb, Object) language family where in a sentence the subject comes first, the verb second, and the object third. Spanish, Italian, Portuguese, German (in conventional typology) and Chinese also belong to the SVO language family, and our approach achieves over $70\\%$ relative accuracy for these languages. On the other hand, Japanese belongs to the SOV (Subject, Object, Verb) language family and Arabic belongs to the VSO (Verb, Subject, Object) language family, and our approach achieves lower relative accuracy for these two languages." ], [ "There are a few weakly supervised cross-lingual RE approaches. BIBREF7 and BIBREF8 project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora. BIBREF9 translates a target-language sentence into English, performs RE in English, and then projects the relation phrases back to the target-language sentence. BIBREF10 proposes an adversarial feature adaptation approach for cross-lingual relation classification, which uses a machine translation system to translate source-language sentences into target-language sentences. Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., BIBREF34, BIBREF35, BIBREF36, where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.", "Many cross-lingual word embedding models have been developed recently BIBREF15, BIBREF16. An important application of cross-lingual word embeddings is to enable cross-lingual model transfer. In this paper, we apply the bilingual word embedding mapping technique in BIBREF17 to cross-lingual RE model transfer. Similar approaches have been applied to other NLP tasks such as dependency parsing BIBREF37, POS tagging BIBREF38 and named entity recognition BIBREF21, BIBREF39." ], [ "In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families on both in-house and open datasets show that the proposed approach achieves very good performance (up to $79\\%$ of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources." ], [ "We thank Mo Yu for sharing their ACE05 English data split and the anonymous reviewers for their valuable comments." ] ] }
{ "question": [ "Do they train their own RE model?", "How big are the datasets?", "What languages do they experiment on?", "What datasets are used?" ], "question_id": [ "f6496b8d09911cdf3a9b72aec0b0be6232a6dba1", "5c90e1ed208911dbcae7e760a553e912f8c237a5", "3c3b4797e2b21e2c31cf117ad9e52f327721790f", "a7d72f308444616a0befc8db7ad388b3216e2143" ], "nlp_background": [ "two", "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided to 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.", "We learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.", "We apply model ensemble to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. This Ensemble approach improves the single model by 0.6-1.9 $F_1$ points, except for Arabic." ], "highlighted_evidence": [ "We apply the same data split in BIBREF31, BIBREF30, BIBREF6, which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.\n\nWe learn the model parameters using Adam BIBREF32. We apply dropout BIBREF33 to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.", "We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models." ] } ], "annotation_id": [ "0652ee6a3d11af5276f085ea7c4a098b4fd89508" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "In-house dataset consists of 3716 documents \nACE05 dataset consists of 1635 documents", "evidence": [ "FLOAT SELECTED: Table 2: Number of documents in the training/dev/test sets of the in-house and ACE05 datasets.", "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) 
and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Number of documents in the training/dev/test sets of the in-house and ACE05 datasets.", "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)." ] } ], "annotation_id": [ "cb2f231c00f9cabcf986a656a15aefc3fe0beeb0" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English, German, Spanish, Italian, Japanese and Portuguese", " English, Arabic and Chinese" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)." ], "highlighted_evidence": [ "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. ", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical)." ] } ], "annotation_id": [ "b0cb2a3723ff1ea75f6fdbfb4333f58603ace8c7" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "in-house dataset", "ACE05 dataset " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) 
and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).", "In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11." ], "highlighted_evidence": [ "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event_Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).", "the ACE (Automatic Content Extraction) 2005 multilingual dataset BIBREF11." ] } ], "annotation_id": [ "d1547b2e6fc9e3f4b029281744cb4e5e5e3ab697" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Figure 1: Neural cross-lingual relation extraction based on bilingual word embedding mapping - target language: Portuguese, source language: English.", "Table 1: Comparison with the state-of-the-art RE models on the ACE05 English data (S: Single Model; E: Ensemble Model).", "Table 2: Number of documents in the training/dev/test sets of the in-house and ACE05 datasets.", "Figure 2: Cross-lingual RE performance (F1 score) vs. dictionary size (number of bilingual word pairs for learning the mapping (4)) under the Bi-LSTM English RE model on the target-language development data.", "Table 3: Performance of the supervised English RE models on the in-house and ACE05 English test data.", "Table 4: Comparison of the performance (F1 score) using different mappings on the target-language development data under the Bi-LSTM model.", "Table 5: Performance of the cross-lingual RE approach on the in-house target-language test data.", "Table 6: Performance of the cross-lingual RE approach on the ACE05 target-language test data." ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "6-Table2-1.png", "7-Figure2-1.png", "7-Table3-1.png", "8-Table4-1.png", "9-Table5-1.png", "9-Table6-1.png" ] }
1910.04887
Visual Natural Language Query Auto-Completion for Estimating Instance Probabilities
We present a new task of query auto-completion for estimating instance probabilities. We complete a user query prefix conditioned upon an image. Given the complete query, we fine tune a BERT embedding for estimating probabilities of a broad set of instances. The resulting instance probabilities are used for selection while being agnostic to the segmentation or attention mechanism. Our results demonstrate that auto-completion using both language and vision performs better than using only language, and that fine tuning a BERT embedding allows us to efficiently rank instances in the image. In the spirit of reproducible research, we make our data, models, and code available.
{ "section_name": [ "Introduction", "Methods", "Methods ::: Modifying FactorCell LSTM for Image Query Auto-Completion", "Methods ::: Fine Tuning BERT for Instance Probability Estimation", "Methods ::: Data and Training Details", "Results", "Results ::: Conclusions" ], "paragraphs": [ [ "This work focuses on the problem of finding objects in an image based on natural language descriptions. Existing solutions take into account both the image and the query BIBREF0, BIBREF1, BIBREF2. In our problem formulation, rather than having the entire text, we are given only a prefix of the text which requires completing the text based on a language model and the image, and finding a relevant object in the image. We decompose the problem into three components: (i) completing the query from text prefix and an image; (ii) estimating probabilities of objects based on the completed text, and (iii) segmenting and classifying all instances in the image. We combine, extend, and modify state of the art components: (i) we extend a FactorCell LSTM BIBREF3, BIBREF4 which conditionally completes text to complete a query from both a text prefix and an image; (ii) we fine tune a BERT embedding to compute instance probabilities from a complete sentence, and (iii) we use Mask-RCNN BIBREF5 for instance segmentation.", "Recent natural language embeddings BIBREF6 have been trained with the objectives of predicting masked words and determining whether sentences follow each other, and are efficiently used across a dozen of natural language processing tasks. Sequence models have been conditioned to complete text from a prefix and index BIBREF3, however have not been extended to take into account an image. Deep neural networks have been trained to segment all instances in an image at very high quality BIBREF5, BIBREF7. We propose a novel method of natural language query auto-completion for estimating instance probabilities conditioned on the image and a user query prefix. Our system combines and modifies state of the art components used in query completion, language embedding, and masked instance segmentation. Estimating a broad set of instance probabilities enables selection which is agnostic to the segmentation procedure." ], [ "Figure FIGREF2 shows the architecture of our approach. First, we extract image features with a pre-trained CNN. We incorporate the image features into a modified FactorCell LSTM language model along with the user query prefix to complete the query. The completed query is then fed into a fine-tuned BERT embedding to estimate instance probabilities, which in turn are used for instance selection.", "We denote a set of objects $o_k \\in O$ where O is the entire set of recognizable object classes. The user inputs a prefix, $p$, an incomplete query on an image, $I$. Given $p$, we auto-complete the intended query $q$. We define the auto-completion query problem in equation DISPLAY_FORM3 as the maximization of the probability of a query conditioned on an image where $w_i \\in A$ is the word in position $i$.", "", "We pose our instance probability estimation problem given an auto-completed query $\\mathbf {q^*}$ as a multilabel problem where each class can independently exist. Let $O_{q*}$ be the set of instances referred to in $\\mathbf {q^*}$. 
Given that $\\hat{p}_k$ is our estimate of $P(o_k \\in O_{q*})$ and $y_k = \\mathbb {1}[o_k \\in O_{q*}]$, the instance selection model minimizes the sigmoid cross-entropy loss function:" ], [ "We utilize the FactorCell (FC) adaptation of an LSTM with coupled input and forget gates BIBREF4 to autocomplete queries. The FactorCell is an LSTM with a context-dependent weight matrix $\\mathbf {W^{\\prime }} = \\mathbf {W} + \\mathbf {A}$ in place of $\\mathbf {W}$. Given a character embedding $w_t \\in \\mathbb {R}^e$ and a previous hidden state $h_{t-1} \\in \\mathbb {R}^h$, the adaptation matrix $\\mathbf {A}$ is formed by taking the product of the context $c$ with two basis tensors $\\mathbf {Z_L} \\in \\mathbb {R}^{m\\times (e+h)\\times r}$ and $\\mathbf {Z_R} \\in \\mathbb {R}^{r\\times h \\times m}$.", "To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to modify each query completion to be personalized to a specific image representation. We extract features from an input image using a CNN pretrained on ImageNet, retraining only the last two fully connected layers. The image feature vector is fed into the FactorCell through the adaptation matrix. We perform beam search over the sequence of predicted characters to choose the optimal completion for the given prefix." ], [ "We fine tune a pre-trained BERT embedding to perform transfer learning for our instance selection task. We use a 12-layer implementation, which has been shown to generalize and perform well when fine-tuned for new tasks such as question answering, text classification, and named entity recognition. To apply the model to our task, we add an additional dense layer to the BERT architecture with 10% dropout, mapping the last pooled layer to the object classes in our data." ], [ "We use the Visual Genome (VG) BIBREF8 and ReferIt BIBREF9 datasets, which are suitable for our purposes. The VG data contains images, region descriptions, relationships, question-answers, attributes, and object instances. The region descriptions provide a replacement for queries since they mention various objects in different regions of each image. However, while some region descriptions are referring phrases, others are more similar to descriptions (see examples in Table TABREF10). The large number of examples makes the Visual Genome dataset particularly useful for our task. The smaller ReferIt dataset consists of referring expressions attached to images, which more closely resemble potential user queries of images. We train separate models using both datasets.", "For training, we aggregated (query, image) pairs using the region descriptions from the VG dataset and referring expressions from the ReferIt dataset. Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions. The ReferIt training data consists of 9k images and 54k referring expressions.", "The query completion models are trained using a 128-dimensional image representation, a rank $r=64$ personalized matrix, 24-dimensional character embeddings, 512-dimensional LSTM hidden units, and a max length of 50 characters per query, with Adam at a 5e-4 learning rate, and a batch size of 32 for 80K iterations. The instance selection model is trained using (region description, object set) pairs from the VG dataset, resulting in a training set of approximately 1.73M samples. The remaining 300K samples are split into validation and testing. 
Our training procedure for the instance selection model fine tunes all 12 layers of BERT with a batch size of 32 samples for 250K iterations, using Adam and performing learning rate warm-up for the first 10% of iterations with a target 5e-5 learning rate. The entire training process takes around a day on an NVIDIA Tesla P100 GPU." ], [ "Figure 3 shows example results. We evaluate query completion by language perplexity and mean reciprocal rank (MRR) and evaluate instance selection by F1-score. We compare the perplexity on both sets of test queries using corresponding images vs. random noise as context. Table TABREF11 shows perplexity on the VG and ReferIt test queries with both corresponding images and random noise. The VG and ReferIt datasets have character vocabulary sizes of 89 and 77, respectively.", "Given the rank $t_n$ at which the true query appears among the top 10 completions for test example $n$, we compute the MRR over $N$ test examples as $\\frac{1}{N}\\sum _{n}{\\frac{1}{t_n}}$, where the reciprocal rank is set to 0 if the true query does not appear in the top ten completions. We evaluate the VG and ReferIt test queries with varying prefix sizes and compare performance with the corresponding image and random noise as context. MRR is influenced by the length of the query, as longer queries are more difficult to match. Therefore, as expected, we observe better performance on the ReferIt dataset for all prefix lengths. Finally, our instance selection achieves an F1-score of 0.7618 over all 2,909 instance classes." ], [ "Our results demonstrate that auto-completion based on both language and vision performs better than using only language, and that fine tuning a BERT embedding allows us to efficiently rank instances in the image. In future work, we would like to extract referring expressions using simple grammatical rules to differentiate between referring and non-referring region descriptions. We would also like to combine the VG and ReferIt datasets to train a single model and scale up our datasets to improve query completions." ] ] }
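A minimal sketch of the FactorCell-style weight adaptation described in the Methods section (W' = W + A, with A formed from the context and the two basis tensors). It assumes the context c is the m-dimensional image representation and that the contractions follow the stated tensor shapes; the variable names and the shape check are illustrative.

```python
import numpy as np

def factorcell_weight(W, Z_L, Z_R, c):
    """Context-adapted recurrent weight W' = W + A for a FactorCell-style LSTM.

    W:   (e + h, h)     base weight matrix of the LSTM with coupled input/forget gates
    Z_L: (m, e + h, r)  left basis tensor
    Z_R: (r, h, m)      right basis tensor
    c:   (m,)           context vector (here: the low-dimensional image representation)
    """
    left = np.einsum("m,mer->er", c, Z_L)    # (e + h, r): context-weighted left factors
    right = np.einsum("rhm,m->rh", Z_R, c)   # (r, h):     context-weighted right factors
    A = left @ right                         # (e + h, h): low-rank, context-dependent adaptation
    return W + A

# Shape check with the dimensions reported above (m = 128 image features, rank r = 64,
# e = 24 character-embedding size, h = 512 LSTM hidden size).
e, h, r, m = 24, 512, 64, 128
rng = np.random.default_rng(0)
W = rng.normal(size=(e + h, h))
Z_L = rng.normal(size=(m, e + h, r))
Z_R = rng.normal(size=(r, h, m))
c = rng.normal(size=(m,))
assert factorcell_weight(W, Z_L, Z_R, c).shape == (e + h, h)
```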
{ "question": [ "How better does auto-completion perform when using both language and vision than only language?", "How big is data provided by this research?", "How they complete a user query prefix conditioned upon an image?" ], "question_id": [ "dfb0351e8fa62ceb51ce77b0f607885523d1b8e8", "a130aa735de3b65c71f27018f20d3c068bafb826", "0c1663a7f7750b399f40ef7b4bf19d5c598890ff" ], "nlp_background": [ "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "computer vision", "computer vision", "computer vision" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "a8745e17938206e4d3da4ecdbbcd5b9082e7e265" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "16k images and 740k corresponding region descriptions" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For training, we aggregated (query, image) pairs using the region descriptions from the VG dataset and referring expressions from the ReferIt dataset. Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions. The Referit training data consists of 9k images and 54k referring expressions." ], "highlighted_evidence": [ "Our VG training set consists of 85% of the data: 16k images and 740k corresponding region descriptions." ] } ], "annotation_id": [ "0668f5c3c9566ed638ce7184c3f61e6a81ebb5d2" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "we replace user embeddings with a low-dimensional image representation" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to modify each query completion to be personalized to a specific image representation. We extract features from an input image using a CNN pretrained on ImageNet, retraining only the last two fully connected layers. The image feature vector is fed into the FactorCell through the adaptation matrix. We perform beam search over the sequence of predicted characters to chose the optimal completion for the given prefix." ], "highlighted_evidence": [ "To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to modify each query completion to be personalized to a specific image representation." ] } ], "annotation_id": [ "fe78d78d55c7c248074927f5be4c09058b36054d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Architecture: Image features are extracted from a pretrained CNN along with the user query prefix are input to an extended FactorCell LSTM which outputs a completed query. The completed query is fed into a fine-tuned BERT embedding which outputs instance probabilities used for instance selection.", "Table 1: Example region descriptions from VG dataset.", "Figure 2: Comparison of image query auto-completion MRR results for VG (left) and ReferIt (right) using the image vs. noise. The horizontal axis denotes varying prefix lengths as a percentage of total query length and context. The MRR improves when increasing query prefix length, and is better when using the image.", "Table 2: Comparison of image query auto-completion perplexity using an image vs. noise, for both datasets. As expected, using the image results in a lower (better) perplexity.", "Figure 3: Example results: (a) input query prefix and image; (b) estimated instance probabilities; (c) instance segmentation; (d) resulting selected instances and auto-completed query conditioned on query prefix and image." ], "file": [ "2-Figure1-1.png", "2-Table1-1.png", "3-Figure2-1.png", "3-Table2-1.png", "4-Figure3-1.png" ] }
1810.00663
Translating Navigation Instructions in Natural Language to a High-Level Plan for Behavioral Robot Navigation
We propose an end-to-end deep learning model for translating free-form natural language instructions to a high-level plan for behavioral robot navigation. We use attention models to connect information from both the user instructions and a topological representation of the environment. We evaluate our model's performance on a new dataset containing 10,050 pairs of navigation instructions. Our model significantly outperforms baseline approaches. Furthermore, our results suggest that it is possible to leverage the environment map as a relevant knowledge base to facilitate the translation of free-form navigational instruction.
{ "section_name": [ "Introduction", "Related work", "Problem Formulation", "The Behavioral Graph: A Knowledge Base For Navigation", "Approach", "Dataset", "Experiments", "Evaluation Metrics", "Models Used in the Evaluation", "Implementation Details", "Quantitative Evaluation", "Qualitative Evaluation", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 .", "Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a):", "Each fragment of a sentence within these instructions can be mapped to one or more than one navigation behaviors. For instance, assume that a robot counts with a number of primitive, navigation behaviors, such as “enter the room on the left (or on right)” , “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”.", "In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world.", "We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph.", "We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. 
Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans.", "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.", "We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings." ], [ "This section reviews relevant prior work on following navigation instructions. Readers interested in an in-depth review of methods to interpret spatial natural language for robotics are encouraged to refer to BIBREF11 .", "Typical approaches to follow navigation commands deal with the complexity of natural language by manually parsing commands, constraining language descriptions, or using statistical machine translation methods. While manually parsing commands is often impractical, the first type of approaches are foundational: they showed that it is possible to leverage the compositionality of semantic units to interpret spatial language BIBREF12 , BIBREF13 .", "Constraining language descriptions can reduce the size of the input space to facilitate the interpretation of user commands. For example, BIBREF14 explored using structured, symbolic language phrases for navigation. As in this earlier work, we are also interested in navigation with a topological map of the environment. However, we do not process symbolic phrases. Our aim is to translate free-form natural language instructions to a navigation plan using information from a high-level representation of the environment. This translation problem requires dealing with missing actions in navigation instructions and actions with preconditions, such as “at the end of the corridor, turn right” BIBREF15 .", "Statistical machine translation BIBREF16 is at the core of recent approaches to enable robots to follow navigation instructions. These methods aim to automatically discover translation rules from a corpus of data, and often leverage the fact that navigation directions are composed of sequential commands. For instance, BIBREF17 , BIBREF4 , BIBREF2 used statistical machine translation to map instructions to a formal language defined by a grammar. 
Likewise, BIBREF18 , BIBREF0 mapped commands to spatial description clauses based on the hierarchical structure of language in the navigation problem. Our approach to machine translation builds on insights from these prior efforts. In particular, we focus on end-to-end learning for statistical machine translation due to the recent success of Neural Networks in Natural Language Processing BIBREF19 .", "Our work is inspired by methods that reduce the task of interpreting user commands to a sequential prediction problem BIBREF20 , BIBREF21 , BIBREF22 . Similar to BIBREF21 and BIBREF22 , we use a sequence-to-sequence model to enable a mobile agent to follow routes. But instead of leveraging visual information to output low-level navigation commands, we focus on using a topological map of the environment to output a high-level navigation plan. This plan is a sequence of behaviors that can be executed by a robot to reach a desired destination BIBREF5 , BIBREF6 .", "We explore machine translation from the perspective of automatic question answering. Following BIBREF8 , BIBREF9 , our approach uses attention mechanisms to learn alignments between different input modalities. In our case, the inputs to our model are navigation instructions, a topological environment map, and the start location of the robot (Fig. FIGREF4 (c)). Our results show that the map can serve as an effective source of contextual information for the translation task. Additionally, it is possible to leverage this kind of information in an end-to-end fashion." ], [ "Our goal is to translate navigation instructions in text form into a sequence of behaviors that a robot can execute to reach a desired destination from a known start location. We frame this problem under a behavioral approach to indoor autonomous navigation BIBREF5 and assume that prior knowledge about the environment is available for the translation task. This prior knowledge is a topological map, in the form of a behavioral navigation graph (Fig. FIGREF4 (b)). The nodes of the graph correspond to semantically-meaningful locations for the navigation task, and its directed edges are visuo-motor behaviors that a robot can use to move between nodes. This formulation takes advantage of the rich semantic structure behind man-made environments, resulting in a compact route representation for robot navigation.", "Fig. FIGREF4 (c) provides a schematic view of the problem setting. The inputs are: (1) a navigation graph INLINEFORM0 , (2) the starting node INLINEFORM1 of the robot in INLINEFORM2 , and (3) a set of free-form navigation instructions INLINEFORM3 in natural language. The instructions describe a path in the graph to reach from INLINEFORM4 to a – potentially implicit – destination node INLINEFORM5 . Using this information, the objective is to predict a suitable sequence of robot behaviors INLINEFORM6 to navigate from INLINEFORM7 to INLINEFORM8 according to INLINEFORM9 . From a supervised learning perspective, the goal is then to estimate: DISPLAYFORM0 ", "based on a dataset of input-target pairs INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 , respectively. The sequential execution of the behaviors INLINEFORM3 should replicate the route intended by the instructions INLINEFORM4 . We assume no prior linguistic knowledge. Thus, translation approaches have to cope with the semantics and syntax of the language by discovering corresponding patterns in the data." 
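Read as a learning problem, the formulation above is standard conditional sequence prediction: the model is trained on (instruction, graph, start node, behavior sequence) tuples to maximize the likelihood of the ground-truth behaviors. A schematic sketch of one training step is shown below; the model interface and tensor shapes are illustrative placeholders, not the paper's implementation.

```python
import torch.nn.functional as F

def training_step(model, instructions, graph_triplets, start_node, target_behaviors):
    """One supervised step: maximize the likelihood of the ground-truth behavior
    sequence given the instruction, the behavioral graph, and the start node.
    `model` is assumed to return per-step logits over the behavior vocabulary.
    """
    logits = model(instructions, graph_triplets, start_node)  # (T, num_behaviors)
    # Minimizing the sequence cross-entropy is equivalent to maximizing
    # log P(behaviors | instruction, graph, start node) under a step-wise factorization.
    return F.cross_entropy(logits, target_behaviors)          # target_behaviors: (T,) ids
```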
], [ "We view the behavioral graph INLINEFORM0 as a knowledge base that encodes a set of navigational rules as triplets INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are adjacent nodes in the graph, and the edge INLINEFORM4 is an executable behavior to navigate from INLINEFORM5 to INLINEFORM6 . In general, each behaviors includes a list of relevant navigational attributes INLINEFORM7 that the robot might encounter when moving between nodes.", "We consider 7 types of semantic locations, 11 types of behaviors, and 20 different types of landmarks. A location in the navigation graph can be a room, a lab, an office, a kitchen, a hall, a corridor, or a bathroom. These places are labeled with unique tags, such as \"room-1\" or \"lab-2\", except for bathrooms and kitchens which people do not typically refer to by unique names when describing navigation routes.", "Table TABREF7 lists the navigation behaviors that we consider in this work. These behaviors can be described in reference to visual landmarks or objects, such as paintings, book shelfs, tables, etc. As in Fig. FIGREF4 , maps might contain multiple landmarks of the same type. Please see the supplementary material (Appendix A) for more details." ], [ "We leverage recent advances in deep learning to translate natural language instructions to a sequence of navigation behaviors in an end-to-end fashion. Our proposed model builds on the sequence-to-sequence translation model of BIBREF23 , which computes a soft-alignment between a source sequence (natural language instructions in our case) and the corresponding target sequence (navigation behaviors).", "As one of our main contributions, we augment the neural machine translation approach of BIBREF23 to take as input not only natural language instructions, but also the corresponding behavioral navigation graph INLINEFORM0 of the environment where navigation should take place. Specifically, at each step, the graph INLINEFORM1 operates as a knowledge base that the model can access to obtain information about path connectivity, facilitating the grounding of navigation commands.", "Figure FIGREF8 shows the structure of the proposed model for interpreting navigation instructions. The model consists of six layers:", "Embed layer: The model first encodes each word and symbol in the input sequences INLINEFORM0 and INLINEFORM1 into fixed-length representations. The instructions INLINEFORM2 are embedded into a 100-dimensional pre-trained GloVe vector BIBREF24 . Each of the triplet components, INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 of the graph INLINEFORM6 , are one-hot encoded into vectors of dimensionality INLINEFORM7 , where INLINEFORM8 and INLINEFORM9 are the number of nodes and edges in INLINEFORM10 , respectively.", "Encoder layer: The model then uses two bidirectional Gated Recurrent Units (GRUs) BIBREF25 to independently process the information from INLINEFORM0 and INLINEFORM1 , and incorporate contextual cues from the surrounding embeddings in each sequence. The outputs of the encoder layer are the matrix INLINEFORM2 for the navigational commands and the matrix INLINEFORM3 for the behavioral graph, where INLINEFORM4 is the hidden size of each GRU, INLINEFORM5 is the number of words in the instruction INLINEFORM6 , and INLINEFORM7 is the number of triplets in the graph INLINEFORM8 .", "Attention layer: Matrices INLINEFORM0 and INLINEFORM1 generated by the encoder layer are combined using an attention mechanism. 
We use one-way attention because the graph contains information about the whole environment, while the instruction has (potentially incomplete) local information about the route of interest. The use of attention provides our model with a two-step strategy to interpret commands. This resembles the way people find paths on a map: first, relevant parts on the map are selected according to their affinity to each of the words in the input instruction (attention layer); second, the selected parts are connected to assemble a valid path (decoder layer). More formally, let INLINEFORM2 ( INLINEFORM3 ) be the INLINEFORM4 -th row of INLINEFORM5 , and INLINEFORM6 ( INLINEFORM7 ) the INLINEFORM8 -th row of INLINEFORM9 . We use each encoded triplet INLINEFORM10 in INLINEFORM11 to calculate its associated attention distribution INLINEFORM12 over all the atomic instructions INLINEFORM13 : DISPLAYFORM0 ", "where the matrix INLINEFORM0 serves to combine the different sources of information INLINEFORM1 and INLINEFORM2 . Each component INLINEFORM3 of the attention distributions INLINEFORM4 quantifies the affinity between the INLINEFORM5 -th triplet in INLINEFORM6 and the INLINEFORM7 -th word in the corresponding input INLINEFORM8 .", "The model then uses each attention distribution INLINEFORM0 to obtain a weighted sum of the encodings of the words in INLINEFORM1 , according to their relevance to the corresponding triplet INLINEFORM2 . This results in L attention vectors INLINEFORM3 , INLINEFORM4 .", "The final step in the attention layer concatenates each INLINEFORM0 with INLINEFORM1 to generate the outputs INLINEFORM2 , INLINEFORM3 . Following BIBREF8 , we include the encoded triplet INLINEFORM4 in the output tensor INLINEFORM5 of this layer to prevent early summaries of relevant map information.", "FC layer: The model reduces the dimensionality of each individual vector INLINEFORM0 from INLINEFORM1 to INLINEFORM2 with a fully-connected (FC) layer. The resulting L vectors are output to the next layer as columns of a context matrix INLINEFORM3 .", "Decoder layer: After the FC layer, the model predicts likelihoods over the sequence of behaviors that correspond to the input instructions with a GRU network. Without loss of generality, consider the INLINEFORM0 -th recurrent cell in the GRU network. This cell takes two inputs: a hidden state vector INLINEFORM1 from the prior cell, and a one-hot embedding of the previous behavior INLINEFORM2 that was predicted by the model. Based on these inputs, the GRU cell outputs a new hidden state INLINEFORM3 to compute likelihoods for the next behavior. These likelihoods are estimated by combining the output state INLINEFORM4 with relevant information from the context INLINEFORM5 : DISPLAYFORM0 ", " where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. The attention vector INLINEFORM3 in Eq. () quantifies the affinity of INLINEFORM4 with respect to each of the columns INLINEFORM5 of INLINEFORM6 , where INLINEFORM7 . The attention vector also helps to estimate a dynamic contextual vector INLINEFORM8 that the INLINEFORM9 -th GRU cell uses to compute logits for the next behavior: DISPLAYFORM0 ", "with INLINEFORM0 trainable parameters. 
Note that INLINEFORM1 includes a value for each of the pre-defined behaviors in the graph INLINEFORM2 , as well as for a special “stop” symbol to identify the end of the output sequence.", "Output layer: The final layer of the model searches for a valid sequence of robot behaviors based on the robot's initial node, the connectivity of the graph INLINEFORM0 , and the output logits from the previous decoder layer. Again, without loss of generality, consider the INLINEFORM1 -th behavior INLINEFORM2 that is finally predicted by the model. The search for this behavior is implemented as: DISPLAYFORM0 ", "with INLINEFORM0 a masking function that takes as input the graph INLINEFORM1 and the node INLINEFORM2 that the robot reaches after following the sequence of behaviors INLINEFORM3 previously predicted by the model. The INLINEFORM4 function returns a vector of the same dimensionality as the logits INLINEFORM5 , but with zeros for the valid behaviors after the last location INLINEFORM6 and for the special stop symbol, and INLINEFORM7 for any invalid predictions according to the connectivity of the behavioral navigation graph." ], [ "We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.", "As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:", "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." ], [ "This section describes our evaluation of the proposed approach for interpreting navigation commands in natural language. We provide both quantitative and qualitative results." ], [ "While computing evaluation metrics, we only consider the behaviors present in the route because they are sufficient to recover the high-level navigation plan from the graph. Our metrics treat each behavior as a single token. For example, the sample plan “R-1 oor C-1 cf C-1 lt C-0 cf C-0 iol O-3\" is considered to have 5 tokens, each corresponding to one of its behaviors (“oor\", “cf\", “lt\", “cf\", “iol\"). 
In this plan, “R-1\",“C-1\", “C-0\", and “O-3\" are symbols for locations (nodes) in the graph.", "We compare the performance of translation approaches based on four metrics:", "[align=left,leftmargin=0em,labelsep=0.4em,font=]", "As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.", "The harmonic average of the precision and recall over all the test set BIBREF26 .", "The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .", "GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0." ], [ "We compare the proposed approach for translating natural language instructions into a navigation plan against alternative deep-learning models:", "[align=left,leftmargin=0em,labelsep=0.4em,font=]", "The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path.", "To test the impact of using the behavioral graphs as an extra input to our translation model, we implemented a version of our approach that only takes natural language instructions as input. In this ablation model, the output of the bidirectional GRU that encodes the input instruction INLINEFORM0 is directly fed to the decoder layer. This model does not have the attention and FC layers described in Sec. SECREF4 , nor uses the masking function in the output layer.", "This model is the same as the previous Ablation model, but with the masking function in the output layer." ], [ "We pre-processed the inputs to the various models that are considered in our experiment. In particular, we lowercased, tokenized, spell-checked and lemmatized the input instructions in text-form using WordNet BIBREF28 . We also truncated the graphs to a maximum of 300 triplets, and the navigational instructions to a maximum of 150 words. Only 6.4% (5.4%) of the unique graphs in the training (validation) set had more than 300 triplets, and less than 0.15% of the natural language instructions in these sets had more than 150 tokens.", "The dimensionality of the hidden state of the GRU networks was set to 128 in all the experiments. In general, we used 12.5% of the training set as validation for choosing models' hyper-parameters. In particular, we used dropout after the encoder and the fully-connected layers of the proposed model to reduce overfitting. Best performance was achieved with a dropout rate of 0.5 and batch size equal to 256. We also used scheduled sampling BIBREF29 at training time for all models except the baseline.", "We input the triplets from the graph to our proposed model in alphabetical order, and consider a modification where the triplets that surround the start location of the robot are provided first in the input graph sequence. We hypothesized that such rearrangement would help identify the starting location (node) of the robot in the graph. 
In turn, this could facilitate the prediction of correct output sequences. In the remainder of the paper, we refer to models that were provided with a rearranged graph, beginning with the starting location of the robot, as models with “Ordered Triplets”." ], [ "Table TABREF28 shows the performance of the models considered in our evaluation on both test sets. The next two sections discuss the results in detail.", "First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.", "We can also observe from Table TABREF28 that the masking function of Eq. ( EQREF12 ) tends to increase performance in the Test-Repeated Set by constraining the output sequence to a valid set of navigation behaviors. For the Ablation model, using the masking function leads to about INLINEFORM0 increase in EM and GM accuracy. For the proposed model (with or without reordering the graph triplets), the increase in accuracy is around INLINEFORM1 . Note that the impact of the masking function is less evident in terms of the F1 score because this metric considers whether a predicted behavior exists in the ground truth navigation plan, irrespective of its specific position in the output sequence.", "The results in the last four rows of Table TABREF28 suggest that ordering the graph triplets can facilitate predicting correct navigation plans in previously seen environments. Providing the triplets that surround the starting location of the robot first to the model leads to a boost of INLINEFORM0 in EM and GM performance. The rearrangement of the graph triplets also helps to reduce ED and increase F1.", "Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models.", "The previous section evaluated model performance on new instructions (and corresponding navigation plans) for environments that were previously seen at training time. Here, we examine whether the trained models succeed on environments that are completely new.", "The evaluation on the Test-New Set helps understand the generalization capabilities of the models under consideration. This experiment is more challenging than the one in the previous section, as can be seen in the performance drops in Table TABREF28 for the new environments. Nonetheless, the insights from the previous section still hold: masking in the output layer and reordering the graph triplets tend to increase performance.", "Even though the results in Table TABREF28 suggest that there is room for future work on decoding natural language instructions, our model still outperforms the baselines by a clear margin in new environments. For instance, the difference between our model and the second-best model in the Test-New set is about INLINEFORM0 EM and GM. Note that the average number of actions in the ground truth output sequences is 7.07 for the Test-New set. 
Our model's predictions are just INLINEFORM1 edits away on average from the correct navigation plans." ], [ "This section discusses qualitative results to better understand how the proposed model uses the navigation graph.", "We analyze the evolution of the attention weights INLINEFORM0 in Eq. () to assess whether the decoder layer of the proposed model is attending to the correct parts of the behavioral graph when making predictions. Fig FIGREF33 (b) shows an example of the resulting attention map for the case of a correct prediction. In the Figure, the attention map is depicted as a scaled and normalized 2D array of color codes. Each column in the array shows the attention distribution INLINEFORM1 used to generate the predicted output at step INLINEFORM2 . Consequently, each row in the array represents a triplet in the corresponding behavioral graph. This graph consists of 72 triplets for Fig FIGREF33 (b).", "We observe a locality effect associated with the attention coefficients corresponding to high values (bright areas) in each column of Fig FIGREF33 (b). This suggests that the decoder is paying attention to graph triplets associated with particular neighborhoods of the environment in each prediction step. We include additional attention visualizations in the supplementary Appendix, including cases where the dynamics of the attention distribution are harder to interpret.", "All the routes in our dataset are the shortest paths from a start location to a given destination. Thus, we collected a few additional natural language instructions to check whether our model was able to follow navigation instructions describing sub-optimal paths. One such example is shown in Fig. FIGREF37 , where the blue route (shortest path) and the red route (alternative path) are described by:", "“Go out the office and make a left. Turn right at the corner and go down the hall. Make a right at the next corner and enter the kitchen in front of table.”", "“Exit the room 0 and turn right, go to the end of the corridor and turn left, go straight to the end of the corridor and turn left again. After passing bookshelf on your left and table on your right, Enter the kitchen on your right.”", "For both routes, the proposed model was able to predict the correct sequence of navigation behaviors. This result suggests that the model is indeed using the input instructions and is not just approximating shortest paths in the behavioral graph. Other examples of predictions for sub-optimal paths are described in the Appendix." ], [ "This work introduced behavioral navigation through free-form natural language instructions as a challenging and novel task that falls at the intersection of natural language processing and robotics. This problem has a range of interesting cross-domain applications, including information retrieval.", "We proposed an end-to-end system to translate user instructions to a high-level navigation plan. Our model utilized an attention mechanism to merge relevant information from the navigation instructions with a behavioral graph of the environment. The model then used a decoder to predict a sequence of navigation behaviors that matched the input commands.", "As part of this effort, we contributed a new dataset of 11,051 pairs of user instructions and navigation plans from 100 different environments. 
Our model achieved the best performance on this dataset in comparison to a two-step baseline approach for interpreting navigation instructions, and to a sequence-to-sequence model that does not consider the behavioral graph. Our quantitative and qualitative results suggest that attention mechanisms can help leverage the behavioral graph as a relevant knowledge base to facilitate the translation of free-form navigation instructions. Overall, our approach demonstrated a practical form of learning for a complex and useful task. In future work, we are interested in investigating mechanisms to improve generalization to new environments. For example, pointer and graph networks BIBREF30, BIBREF31 are a promising direction to help supervise translation models and predict motion behaviors." ], [ "The Toyota Research Institute (TRI) provided funds to assist with this research, but this paper solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. This work is also partially funded by Fondecyt grant 1181739, Conicyt, Chile. The authors would also like to thank Gabriel Sepúlveda for his assistance with parts of this project." ] ] }
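For reference, the metrics reported in the results above (EM, F1, ED, GM) operate on sequences of navigation behaviors. The sketch below is an illustrative implementation of exact match, edit distance, and a position-insensitive F1 over behavior tokens; it is not the authors' evaluation code, and goal match is omitted because it requires executing the plan on the graph. The behavior codes in the example are borrowed from the figure captions of this paper.

```python
def exact_match(pred, gold):
    # EM is 1 only if the predicted plan matches the ground truth exactly.
    return int(pred == gold)

def edit_distance(pred, gold):
    # Levenshtein distance between two behavior sequences
    # (insertions, deletions, substitutions).
    m, n = len(pred), len(gold)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]

def f1_score(pred, gold):
    # Bag-of-behaviors F1: position-insensitive overlap, which is why the
    # masking function has less impact on F1 than on EM/GM in the discussion.
    overlap = sum(min(pred.count(b), gold.count(b)) for b in set(pred))
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: one plan with a single wrong behavior.
gold = ["oo-left", "cf", "right-io"]
pred = ["oo-left", "cf", "left-io"]
print(exact_match(pred, gold), edit_distance(pred, gold), round(f1_score(pred, gold), 2))
# -> 0 1 0.67
```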
{ "question": [ "Did the collection process use a WoZ method?", "By how much did their model outperform the baseline?", "What baselines did they compare their model with?", "What was the performance of their model?", "What evaluation metrics are used?", "Did the authors use a crowdsourcing platform?", "How were the navigation instructions collected?", "What language is the experiment done in?" ], "question_id": [ "aa800b424db77e634e82680f804894bfa37f2a34", "fbd47705262bfa0a2ba1440a2589152def64cbbd", "51aaec4c511d96ef5f5c8bae3d5d856d8bc288d3", "3aee5c856e0ee608a7664289ffdd11455d153234", "f42d470384ca63a8e106c7caf1cb59c7b92dbc27", "29bdd1fb20d013b23b3962a065de3a564b14f0fb", "25b2ae2d86b74ea69b09c140a41593c00c47a82b", "fd7f13b63f6ba674f5d5447b6114a201fe3137cb" ], "nlp_background": [ "", "", "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "", "", "" ], "search_query": [ "", "", "", "", "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.", "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." ], "highlighted_evidence": [ "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.", "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. 
For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." ] } ], "annotation_id": [ "a38c1c344ccb96f3ff31ef6c371b2260c3d8db43" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively", "over INLINEFORM0 increase in EM and GM between our model and the next best two models" ], "yes_no": null, "free_form_answer": "", "evidence": [ "First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.", "Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models." ], "highlighted_evidence": [ "First, we can observe that the final model “Ours with Mask and Ordered Triplets” outperforms the Baseline and Ablation models on all metrics in previously seen environments. The difference in performance is particularly evident for the Exact Match and Goal Match metrics, with our model increasing accuracy by 35% and 25% in comparison to the Baseline and Ablation models, respectively. These results suggest that providing the behavioral navigation graph to the model and allowing it to process this information as a knowledge base in an end-to-end fashion is beneficial.", "Lastly, it is worth noting that our proposed model (last row of Table TABREF28 ) outperforms all other models in previously seen environments. In particular, we obtain over INLINEFORM0 increase in EM and GM between our model and the next best two models." ] } ], "annotation_id": [ "a6c1cfab37b756275380368b1d9f8cdb8929f57e" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "the baseline where path generation uses a standard sequence-to-sequence model augmented with attention mechanism and path verification uses depth-first search", "evidence": [ "The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. 
If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path." ], "highlighted_evidence": [ "The baseline approach is based on BIBREF20 . It divides the task of interpreting commands for behavioral navigation into two steps: path generation, and path verification. For path generation, this baseline uses a standard sequence-to-sequence model augmented with an attention mechanism, similar to BIBREF23 , BIBREF6 . For path verification, the baseline uses depth-first search to find a route in the graph that matches the sequence of predicted behaviors. If no route matches perfectly, the baseline changes up to three behaviors in the predicted sequence to try to turn it into a valid path." ] } ], "annotation_id": [ "3f26b75051da0d7d675aa8f3a519f596e587b5a1" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "For test-repeated set, EM score of 61.17, F1 of 93.54, ED of 0.75 and GM of 61.36. For test-new set, EM score of 41.71, F1 of 91.02, ED of 1.22 and GM of 41.81", "evidence": [ "FLOAT SELECTED: Table 3: Performance of different models on the test datasets. EM and GM report percentages, and ED corresponds to average edit distance. The symbol ↑ indicates that higher results are better in the corresponding column; ↓ indicates that lower is better." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Performance of different models on the test datasets. EM and GM report percentages, and ED corresponds to average edit distance. The symbol ↑ indicates that higher results are better in the corresponding column; ↓ indicates that lower is better." ] } ], "annotation_id": [ "b61847a85ff71d95db307804edaf69a7e8fbd569" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "exact match, f1 score, edit distance and goal match", "evidence": [ "We compare the performance of translation approaches based on four metrics:", "[align=left,leftmargin=0em,labelsep=0.4em,font=]", "As in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.", "The harmonic average of the precision and recall over all the test set BIBREF26 .", "The minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .", "GM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0." ], "highlighted_evidence": [ "We compare the performance of translation approaches based on four metrics:\n\n[align=left,leftmargin=0em,labelsep=0.4em,font=]\n\nAs in BIBREF20 , EM is 1 if a predicted plan matches exactly the ground truth; otherwise it is 0.\n\nThe harmonic average of the precision and recall over all the test set BIBREF26 .\n\nThe minimum number of insertions, deletions or swap operations required to transform a predicted sequence of behaviors into the ground truth sequence BIBREF27 .\n\nGM is 1 if a predicted plan reaches the ground truth destination (even if the full sequence of behaviors does not match exactly the ground truth). Otherwise, GM is 0." 
] } ], "annotation_id": [ "2392fbdb4eb273ea6706198fcfecc097f50785c9" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans." ], "highlighted_evidence": [ "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. " ] } ], "annotation_id": [ "16c3f79289f6601abd20ee058392d5dd7d0f0485" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "using Amazon Mechanical Turk using simulated environments with topological maps", "evidence": [ "This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans.", "We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.", "As shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:", "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." 
], "highlighted_evidence": [ "This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. ", "We created a new dataset for the problem of following navigation instructions under the behavioral navigation framework of BIBREF5 . This dataset was created using Amazon Mechanical Turk and 100 maps of simulated indoor environments, each with 6 to 65 rooms. To the best of our knowledge, this is the first benchmark for comparing translation models in the context of behavioral robot navigation.\n\nAs shown in Table TABREF16 , the dataset consists of 8066 pairs of free-form natural language instructions and navigation plans for training. This training data was collected from 88 unique simulated environments, totaling 6064 distinct navigation plans (2002 plans have two different navigation instructions each; the rest has one). The dataset contains two test set variants:\n\nWhile the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.\n\n" ] } ], "annotation_id": [ "069af51dc41d41489fd579ea994c1b247827b4e5" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "english language", "evidence": [ "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." ], "highlighted_evidence": [ "While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort." 
] } ], "annotation_id": [ "33d46a08d2e593401d4ecb1f77de6b81ad8a70d1" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: Map of an environment (a), its (partial) behavioral navigation graph (b), and the problem setting of interest (c). The red part of (b) corresponds to the representation of the route highlighted in blue in (a). The codes “oo-left”, “oo-right”, “cf”, “left-io”, and “right-io” correspond to the behaviors “go out and turn left”, “go out and turn right”, “follow the corridor”, “enter the room on left”, and “enter office on right”, respectively.", "Table 1: Behaviors (edges) of the navigation graphs considered in this work. The direction <d> can be left or right.", "Figure 2: Model overview. The model contains six layers, takes the input of behavioral graph representation, free-form instruction, and the start location (yellow block marked as START in the decoder layer) and outputs a sequence of behaviors.", "Table 2: Dataset statistics. “# Single” indicates the number of navigation plans with a single natural language instruction. “# Double” is the number of plans with two different instructions. The total number of plans is (# Single) × 2(# Double).", "Table 3: Performance of different models on the test datasets. EM and GM report percentages, and ED corresponds to average edit distance. The symbol ↑ indicates that higher results are better in the corresponding column; ↓ indicates that lower is better.", "Figure 3: Visualization of the attention weights of the decoder layer. The color-coded and numbered regions on the map (left) correspond to the triplets that are highlighted with the corresponding color in the attention map (right).", "Figure 4: An example of two different navigation paths between the same pair of start and goal locations." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png", "5-Figure2-1.png", "6-Table2-1.png", "8-Table3-1.png", "8-Figure3-1.png", "9-Figure4-1.png" ] }
1809.05752
Analysis of Risk Factor Domains in Psychosis Patient Health Records
Readmission after discharge from a hospital is disruptive and costly, regardless of the reason. However, it can be particularly problematic for psychiatric patients, so predicting which patients may be readmitted is critically important but also very difficult. Clinical narratives in psychiatric electronic health records (EHRs) span a wide range of topics and vocabulary; therefore, a psychiatric readmission prediction model must begin with a robust and interpretable topic extraction component. We created a data pipeline for using document vector similarity metrics to perform topic extraction on psychiatric EHR data in service of our long-term goal of creating a readmission risk classifier. We show initial results for our topic extraction model and identify additional features we will be incorporating in the future.
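The abstract above refers to document vector similarity metrics for topic extraction. As a minimal illustration (not the authors' pipeline), cosine similarity between the TF-IDF vector of a new paragraph and per-topic reference documents can be computed as follows; the example texts are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented example texts: one short reference document per topic, plus a
# new clinical-note paragraph to score against them.
topic_docs = {
    "Substance": "patient reports daily alcohol use and cannabis use",
    "Mood": "patient reports depressed mood and anxiety with poor sleep",
}
new_paragraph = "patient describes ongoing alcohol use on weekends"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(topic_docs.values()) + [new_paragraph])

# Cosine similarity between the new paragraph and each reference document.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for topic, score in zip(topic_docs, scores):
    print(topic, round(float(score), 3))
```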
{ "section_name": [ "Introduction", "Related Work", "Data", "Annotation Task", "Inter-Annotator Agreement", "Topic Extraction", "Results and Discussion", "Future Work and Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Psychotic disorders typically emerge in late adolescence or early adulthood BIBREF0 , BIBREF1 and affect approximately 2.5-4% of the population BIBREF2 , BIBREF3 , making them one of the leading causes of disability worldwide BIBREF4 . A substantial proportion of psychiatric inpatients are readmitted after discharge BIBREF5 . Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF6 , BIBREF7 . Reducing readmission risk is therefore a major unmet need of psychiatric care. Developing clinically implementable machine learning tools to enable accurate assessment of risk factors associated with readmission offers opportunities to inform the selection of treatment interventions and implement appropriate preventive measures.", "In psychiatry, traditional strategies to study readmission risk factors rely on clinical observation and manual retrospective chart review BIBREF8 , BIBREF9 . This approach, although benefitting from clinical expertise, does not scale well for large data sets, is effort-intensive, and lacks automation. An efficient, more robust, and cheaper NLP-based alternative approach has been developed and met with some success in other medical fields BIBREF10 . However, this approach has seldom been applied in psychiatry because of the unique characteristics of psychiatric medical record content.", "There are several challenges for topic extraction when dealing with clinical narratives in psychiatric EHRs. First, the vocabulary used is highly varied and context-sensitive. A patient may report “feeling `really great and excited'\" – symptoms of mania – without any explicit mention of keywords that differ from everyday vocabulary. Also, many technical terms in clinical narratives are multiword expressions (MWEs) such as `obsessive body image', `linear thinking', `short attention span', or `panic attack'. These phrasemes are comprised of words that in isolation do not impart much information in determining relatedness to a given topic but do in the context of the expression.", "Second, the narrative structure in psychiatric clinical narratives varies considerably in how the same phenomenon can be described. Hallucinations, for example, could be described as “the patient reports auditory hallucinations,\" or “the patient has been hearing voices for several months,\" amongst many other possibilities.", "Third, phenomena can be directly mentioned without necessarily being relevant to the patient specifically. Psychosis patient discharge summaries, for instance, can include future treatment plans (e.g. “Prevent relapse of a manic or major depressive episode.\", “Prevent recurrence of psychosis.\") containing vocabulary that at the word-level seem strongly correlated with readmission risk. Yet at the paragraph-level these do not indicate the presence of a readmission risk factor in the patient and in fact indicate the absence of a risk factor that was formerly present.", "Lastly, given the complexity of phenotypic assessment in psychiatric illnesses, patients with psychosis exhibit considerable differences in terms of illness and symptom presentation. 
The constellation of symptoms leads to various diagnoses and comorbidities that can change over time, including schizophrenia, schizoaffective disorder, bipolar disorder with psychosis, and substance use induced psychosis. Thus, the lexicon of words and phrases used in EHRs differs not only across diagnoses but also across patients and time.", "Taken together, these factors make topic extraction a difficult task that cannot be accomplished by keyword search or other simple text-mining techniques.", "To identify specific risk factors to focus on, we not only reviewed clinical literature of risk factors associated with readmission BIBREF11 , BIBREF12 , but also considered research related to functional remission BIBREF13 , forensic risk factors BIBREF14 , and consulted clinicians involved with this project. Seven risk factor domains – Appearance, Mood, Interpersonal, Occupation, Thought Content, Thought Process, and Substance – were chosen because they are clinically relevant, consistent with literature, replicable across data sets, explainable, and implementable in NLP algorithms.", "In our present study, we evaluate multiple approaches to automatically identify which risk factor domains are associated with which paragraphs in psychotic patient EHRs. We perform this study in support of our long-term goal of creating a readmission risk classifier that can aid clinicians in targeting individual treatment interventions and assessing patient risk of harm (e.g. suicide risk, homicidal risk). Unlike other contemporary approaches in machine learning, we intend to create a model that is clinically explainable and flexible across training data while maintaining consistent performance.", "To incorporate clinical expertise in the identification of risk factor domains, we undertake an annotation project, detailed in section 3.1. We identify a test set of over 1,600 EHR paragraphs which a team of three domain-expert clinicians annotate paragraph-by-paragraph for relevant risk factor domains. Section 3.2 describes the results of this annotation task. We then use the gold standard from the annotation project to assess the performance of multiple neural classification models trained exclusively on Term Frequency – Inverse Document Frequency (TF-IDF) vectorized EHR data, described in section 4. To further improve the performance of our model, we incorporate domain-relevant MWEs identified using all in-house data." ], [ "McCoy et al. mccoy2015clinical constructed a corpus of web data based on the Research Domain Criteria (RDoC) BIBREF15 , and used this corpus to create a vector space document similarity model for topic extraction. They found that the `negative valence' and `social' RDoC domains were associated with readmission. Using web data (in this case data retrieved from the Bing API) to train a similarity model for EHR texts is problematic since it differs from the target data in both structure and content. Based on reconstruction of the procedure, we conclude that many of the informative MWEs critical to understanding the topics of paragraphs in EHRs are not captured in the web data. Additionally, RDoC is by design a generalized research construct to describe the entire spectrum of mental disorders and does not include domains that are based on observation or causes of symptoms. Important indicators within EHRs of patient health, like appearance or occupation, are not included in the RDoC constructs.", "Rumshisky et al. 
rumshisky2016predicting used a corpus of EHRs from patients with a primary diagnosis of major depressive disorder to create a 75-topic LDA topic model that they then used in a readmission prediction classifier pipeline. Like with McCoy et al. mccoy2015clinical, the data used to train the LDA model was not ideal as the generalizability of the data was narrow, focusing on only one disorder. Their model achieved readmission prediction performance with an area under the curve of .784 compared to a baseline of .618. To perform clinical validation of the topics derived from the LDA model, they manually evaluated and annotated the topics, identifying the most informative vocabulary for the top ten topics. With their training data, they found the strongest coherence occurred in topics involving substance use, suicidality, and anxiety disorders. But given the unsupervised nature of the LDA clustering algorithm, the topic coherence they observed is not guaranteed across data sets." ], [ "[2]The vast majority of patients in our target cohort are", "dependents on a parental private health insurance plan.", "Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.", "These patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.", "We also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction.", "After using the RPDR query tool to extract EHR paragraphs from the RPDR database, we created a training corpus by categorizing the extracted paragraphs according to their risk factor domain using a lexicon of 120 keywords that were identified by the clinicians involved in this project. Certain domains – particularly those involving thoughts and other abstract concepts – are often identifiable by MWEs rather than single words. 
The same clinicians who identified the keywords manually examined the bigrams and trigrams with the highest TF-IDF scores for each domain in the categorized paragraphs, identifying those which are conceptually related to the given domain. We then used this lexicon of 775 keyphrases to identify more relevant training paragraphs in RPDR and treat them as (non-stemmed) unigrams when generating the matrix. By converting MWEs such as `shortened attention span', `unusual motor activity', `wide-ranging affect', or `linear thinking' to non-stemmed unigrams, the TF-IDF score (and therefore the predictive value) of these terms is magnified. In total, we constructed a corpus of roughly 100,000 paragraphs consisting of 7,000,000 tokens for training our model." ], [ "In order to evaluate our models, we annotated 1,654 paragraphs selected from the 240,000 paragraphs extracted from Meditech with the clinically relevant domains described in Table TABREF3 . The annotation task was completed by three licensed clinicians. All paragraphs were removed from the surrounding EHR context to ensure annotators were not influenced by the additional contextual information. Our domain classification models consider each paragraph independently and thus we designed the annotation task to mirror the information available to the models.", "The annotators were instructed to label each paragraph with one or more of the seven risk factor domains. In instances where more than one domain was applicable, annotators assigned the domains in order of prevalence within the paragraph. An eighth label, `Other', was included if a paragraph was ambiguous, uninterpretable, or about a domain not included in the seven risk factor domains (e.g. non-psychiatric medical concerns and lab results). The annotations were then reviewed by a team of two clinicians who adjudicated collaboratively to create a gold standard. The gold standard and the clinician-identified keywords and MWEs have received IRB approval for release to the community. They are available as supplementary data to this paper." ], [ "Inter-annotator agreement (IAA) was assessed using a combination of Fleiss's Kappa (a variant of Scott's Pi that measures pairwise agreement for annotation tasks involving more than two annotators) BIBREF16 and Cohen's Multi-Kappa as proposed by Davies and Fleiss davies1982measuring. Table TABREF6 shows IAA calculations for both overall agreement and agreement on the first (most important) domain only. Following adjudication, accuracy scores were calculated for each annotator by evaluating their annotations against the gold standard.", "Overall agreement was generally good and aligned almost exactly with the IAA on the first domain only. Out of the 1,654 annotated paragraphs, 671 (41%) had total agreement across all three annotators. We defined total agreement for the task as a set-theoretic complete intersection of domains for a paragraph identified by all annotators. 98% of paragraphs in total agreement involved one domain. Only 35 paragraphs had total disagreement, which we defined as a set-theoretic null intersection between the three annotators. An analysis of the 35 paragraphs with total disagreement showed that nearly 30% included the term “blunted/restricted\". In clinical terminology, these terms can be used to refer to appearance, affect, mood, or emotion. 
Because the paragraphs being annotated were extracted from larger clinical narratives and examined independently of any surrounding context, it was difficult for the annotators to determine the most appropriate domain. This lack of contextual information resulted in each annotator using a different `default' label: Appearance, Mood, and Other. During adjudication, Other was decided as the most appropriate label unless the paragraph contained additional content that encompassed other domains, as it avoids making unnecessary assumptions. [3]Suicidal ideation [4]Homicidal ideation [5]Ethyl alcohol and ethanol", "A Fleiss's Kappa of 0.575 lies on the boundary between `Moderate' and `Substantial' agreement as proposed by Landis and Koch landis1977measurement. This is a promising indication that our risk factor domains are adequately defined by our present guidelines and can be employed by clinicians involved in similar work at other institutions.", "The fourth column in Table TABREF6 , Mean Accuracy, was calculated by averaging the three annotator accuracies as evaluated against the gold standard. This provides us with an informative baseline of human parity on the domain classification task.", "[6]Rectified Linear Units, INLINEFORM0 BIBREF17 [7]Adaptive Moment Estimation BIBREF18 " ], [ "Figure FIGREF8 illustrates the data pipeline for generating our training and testing corpora, and applying them to our classification models.", "We use the TfidfVectorizer tool included in the scikit-learn machine learning toolkit BIBREF19 to generate our TF-IDF vector space models, stemming tokens with the Porter Stemmer tool provided by the NLTK library BIBREF20 , and calculating TF-IDF scores for unigrams, bigrams, and trigrams. Applying Singular Value Decomposition (SVD) to the TF-IDF matrix, we reduce the vector space to 100 dimensions, which Zhang et al. zhang2011comparative found to improve classifier performance.", "Starting with the approach taken by McCoy et al. mccoy2015clinical, who used aggregate cosine similarity scores to compute domain similarity directly from their TF-IDF vector space model, we extend this method by training a suite of three-layer multilayer perceptron (MLP) and radial basis function (RBF) neural networks using a variety of parameters to compare performance. We employ the Keras deep learning library BIBREF21 using a TensorFlow backend BIBREF22 for this task. The architectures of our highest performing MLP and RBF models are summarized in Table TABREF7 . Prototype vectors for the nodes in the hidden layer of our RBF model are selected via k-means clustering BIBREF23 on each domain paragraph megadocument individually. The RBF transfer function for each hidden layer node is assigned the same width, which is based off the maximum Euclidean distance between the centroids that were computed using k-means.", "To prevent overfitting to the training data, we utilize a dropout rate BIBREF24 of 0.2 on the input layer of all models and 0.5 on the MLP hidden layer.", "Since our classification problem is multiclass, multilabel, and open-world, we employ seven nodes with sigmoid activations in the output layer, one for each risk factor domain. This allows us to identify paragraphs that fall into more than one of the seven domains, as well as determine paragraphs that should be classified as Other. 
Unlike the traditionally used softmax activation function, which is ideal for single-label, closed-world classification tasks, sigmoid nodes output class likelihoods for each node independently, without the normalization across all classes that occurs in softmax.", "We find that the risk factor domains vary in the degree of homogeneity of language used, and as such certain domains produce higher similarity scores, on average, than others. To account for this, we calculate threshold similarity scores for each domain using the formula min = avg(sim) + c * std(sim), where std(sim) is the standard deviation of the similarity scores and c is a constant, which we set to 0.78 for our MLP model and 1.2 for our RBF model through trial-and-error. Employing a generalized formula, as opposed to manually identifying threshold similarity scores for each domain, has the advantage of flexibility with regard to the target data, which may vary in average similarity scores depending on its similarity to the training data. If a paragraph does not meet the threshold on any domain, it is classified as Other." ], [ "Table TABREF9 shows the performance of our models on classifying the paragraphs in our gold standard. To assess the relative performance of feature representations, we also include performance metrics of our models without MWEs. Because this is a multilabel classification task, we use macro-averaging to compute precision, recall, and F1 scores for each paragraph in the testing set. In identifying domains individually, our models achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5). We observe a consistency in per-domain performance rankings between our MLP and RBF models.", "The wide variance in per-domain performance is due to a number of factors. Most notably, the training examples we extracted from RPDR – while very comparable to our target OnTrackTM data – may not have an adequate variety of content and range of vocabulary. Although using keyword and MWE matching to create our training corpus has the advantage of being significantly less labor intensive than manually labeling every paragraph in the corpus, it is likely that the homogeneity of language used in the training paragraphs is higher than it would be otherwise. Additionally, all of the paragraphs in the training data are assigned exactly one risk factor domain even if they actually involve multiple risk factor domains, making the clustering behavior of the paragraphs more difficult to define. Figure FIGREF10 illustrates the distribution of paragraphs in vector space using 2-component Linear Discriminant Analysis (LDA) BIBREF26.", "Despite prior research indicating that similar classification tasks to ours are more effectively performed by RBF networks BIBREF27, BIBREF28, BIBREF29, we find that an MLP network performs marginally better with significantly less preprocessing (i.e. k-means and width calculations) involved. We can see in Figure FIGREF10 that Thought Process, Appearance, Substance, and – to a certain extent – Occupation clearly occupy specific regions, whereas Interpersonal, Mood, and Thought Content occupy the same noisy region where multiple domains overlap. Given that similarity is computed using Euclidean distance in an RBF network, it is difficult to accurately classify paragraphs that fall in regions occupied by multiple risk factor domain clusters, since prototype centroids from the risk factor domains will overlap and be less differentiable. 
This is confirmed by the results in Table TABREF9 , where the differences in performance between the RBF and MLP models are more pronounced in the three overlapping domains (0.496 vs 0.448 for Interpersonal, 0.530 vs 0.496 for Mood, and 0.721 vs 0.678 for Thought Content) compared to the non-overlapping domains (0.564 vs 0.566 for Appearance, 0.592 vs 0.598 for Occupation, 0.797 vs 0.792 for Substance, and 0.635 vs 0.624 for Thought Process). We also observe a similarity in the words and phrases with the highest TF-IDF scores across the overlapping domains: many of the Thought Content words and phrases with the highest TF-IDF scores involve interpersonal relations (e.g. `fear surrounding daughter', `father', `family history', `familial conflict') and there is a high degree of similarity between high-scoring words for Mood (e.g. `meets anxiety criteria', `cope with mania', `ocd'[8]) and Thought Content (e.g. `mania', `feels anxious', `feels exhausted').", "[8]Obsessive-compulsive disorder", "MWEs play a large role in correctly identifying risk factor domains. Factoring them into our models increased classification performance by 15%, a marked improvement over our baseline model. This aligns with our expectations that MWEs comprised of a quotidian vocabulary hold much more clinical significance than when the words in the expressions are treated independently.", "Threshold similarity scores also play a large role in determining the precision and recall of our models: higher thresholds lead to a smaller number of false positives and a greater number of false negatives for each risk factor domain. Conversely, more paragraphs are incorrectly classified as Other when thresholds are set higher. Since our classifier will be used in future work as an early step in a data analysis pipeline for determining readmission risk, misclassifying a paragraph with an incorrect risk factor domain at this stage can lead to greater inaccuracies at later stages. Paragraphs misclassified as Other, however, will be discarded from the data pipeline. Therefore, we intentionally set a conservative threshold where only the most confidently labeled paragraphs are assigned membership in a particular domain." ], [ "To achieve our goal of creating a framework for a readmission risk classifier, the present study performed necessary evaluation steps by updating and adding to our model iteratively. In the first stage of the project, we focused on collecting the data necessary for training and testing, and on the domain classification annotation task. At the same time, we began creating the tools necessary for automatically extracting domain relevance scores at the paragraph and document level from patient EHRs using several forms of vectorization and topic modeling. In future versions of our risk factor domain classification model we will explore increasing robustness through sequence modeling that considers more contextual information.", "Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. 
This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time.", "We will also take into account structured data that have been collected on the target cohort throughout the course of this study such as brain based electrophysiological (EEG) biomarkers, structural brain anatomy from MRI scans (gray matter volume, cortical thickness, cortical surface-area), social and role functioning assessments, personality assessment (NEO-FFI[9]), and various symptom scales (PANSS[10], MADRS[11], YMRS[12]). For each feature we consider adding, we will evaluate the performance of the classifier with and without the feature to determine its contribution as a predictor of readmission." ], [ "This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2018 Workshop reviewers for their constructive and helpful comments.", "[9]NEO Five-Factor Inventory BIBREF30 [10]Positive and Negative Syndrome Scale BIBREF31 [11]Montgomery-Asperg Depression Rating Scale BIBREF32 [12]Young Mania Rating Scale BIBREF33 " ] ] }
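The Topic Extraction section above describes TF-IDF vectors reduced to 100 dimensions with SVD, an MLP with sigmoid outputs over the seven domains, dropout of 0.2 on the input layer and 0.5 on the hidden layer, ReLU activations, the Adam optimizer, and per-domain thresholds of the form avg(sim) + c * std(sim). The heavily simplified sketch below reflects that setup; the hidden-layer size, the binary cross-entropy loss, the random placeholder data, and computing thresholds from training-set scores are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from tensorflow import keras

NUM_DOMAINS = 7   # Appearance, Mood, Interpersonal, Occupation,
                  # Thought Content, Thought Process, Substance

# Placeholder features: the paper feeds 100-dimensional SVD-reduced TF-IDF
# vectors; random numbers are used here only so the sketch runs end to end.
X_train = np.random.rand(200, 100)
y_train = (np.random.rand(200, NUM_DOMAINS) > 0.8).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(100,)),
    keras.layers.Dropout(0.2),                     # input-layer dropout (paper: 0.2)
    keras.layers.Dense(64, activation="relu"),     # hidden size is a placeholder guess
    keras.layers.Dropout(0.5),                     # hidden-layer dropout (paper: 0.5)
    keras.layers.Dense(NUM_DOMAINS, activation="sigmoid"),  # independent per-domain outputs
])
model.compile(optimizer="adam", loss="binary_crossentropy")  # loss is an assumption
model.fit(X_train, y_train, epochs=2, verbose=0)

# Per-domain thresholds of the form avg(score) + c * std(score); the paper
# sets c = 0.78 for its MLP. Scores below every threshold map to "Other".
scores = model.predict(X_train, verbose=0)
c = 0.78
thresholds = scores.mean(axis=0) + c * scores.std(axis=0)

def assign_domains(score_row):
    picked = [j for j, (s, t) in enumerate(zip(score_row, thresholds)) if s >= t]
    return picked if picked else ["Other"]

print(assign_domains(scores[0]))
```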
{ "question": [ "What additional features are proposed for future work?", "What are their initial results on this task?", "What datasets did the authors use?" ], "question_id": [ "c82e945b43b2e61c8ea567727e239662309e9508", "fbee81a9d90ff23603ee4f5986f9e8c0eb035b52", "39cf0b3974e8a19f3745ad0bcd1e916bf20eeab8" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "distinguishing between clinically positive and negative phenomena within each risk factor domain and accounting for structured data collected on the target cohort", "evidence": [ "Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time." ], "highlighted_evidence": [ "Our current feature set for training a machine learning classifier is relatively small, consisting of paragraph domain scores, bag-of-words, length of stay, and number of previous admissions, but we intend to factor in many additional features that extend beyond the scope of the present study. These include a deeper analysis of clinical narratives in EHRs: our next task will be to extend our EHR data pipeline by distinguishing between clinically positive and negative phenomena within each risk factor domain. This will involve a series of annotation tasks that will allow us to generate lexicon-based and corpus-based sentiment analysis tools. We can then use these clinical sentiment scores to generate a gradient of patient improvement or deterioration over time." ] } ], "annotation_id": [ "096ace95350d743436952360918474c6160465ba" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5), and show consistency in per-domain performance rankings between MLP and RBF models.", "evidence": [ "FLOAT SELECTED: Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs." 
] } ], "annotation_id": [ "06b60e5ec5adfa077523088275192cbf8e031661" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital", "an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.", "These patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.", "We also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction." ], "highlighted_evidence": [ "Our target data set consists of a corpus of discharge summaries, admission notes, individual encounter notes, and other clinical notes from 220 patients in the OnTrackTM program at McLean Hospital. OnTrackTM is an outpatient program, focusing on treating adults ages 18 to 30 who are experiencing their first episodes of psychosis. The length of time in the program varies depending on patient improvement and insurance coverage, with an average of two to three years. The program focuses primarily on early intervention via individual therapy, group therapy, medication evaluation, and medication management. 
See Table TABREF2 for a demographic breakdown of the 220 patients, for which we have so far extracted approximately 240,000 total EHR paragraphs spanning from 2011 to 2014 using Meditech, the software employed by McLean for storing and organizing EHR data.\n\nThese patients are part of a larger research cohort of approximately 1,800 psychosis patients, which will allow us to connect the results of this EHR study with other ongoing research studies incorporating genetic, cognitive, neurobiological, and functional outcome data from this cohort.\n\nWe also use an additional data set for training our vector space model, comprised of EHR texts queried from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. These records are highly comparable in style and vocabulary to our target data set. The corpus consists of discharge summaries, encounter notes, and visit notes from approximately 30,000 patients admitted to the system's hospitals with psychiatric diagnoses and symptoms. This breadth of data captures a wide range of clinical narratives, creating a comprehensive foundation for topic extraction." ] } ], "annotation_id": [ "c38bf256704127e0cac06bbceb4790090bb9063a" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Table 1: Demographic breakdown of the target cohort.", "Table 2: Annotation scheme for the domain classification task.", "Table 3: Inter-annotator agreement", "Table 4: Architectures of our highest-performing MLP and RBF networks.", "Figure 1: Data pipeline for training and evaluating our risk factor domain classifiers.", "Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs.", "Figure 2: 2-component linear discriminant analysis of the RPDR training data." ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Table4-1.png", "6-Figure1-1.png", "6-Table5-1.png", "7-Figure2-1.png" ] }
2001.01589
Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation
Neural machine translation (NMT) has achieved impressive performance on the machine translation task in recent years. However, in consideration of efficiency, a limited-size vocabulary that only contains the top-N highest frequency words is employed for model training, which leads to many rare and unknown words. Translation is especially difficult for low-resource and morphologically-rich agglutinative languages, which have complex morphology and large vocabularies. In this paper, we propose a morphological word segmentation method on the source side for NMT that incorporates morphology knowledge to preserve the linguistic and semantic information in the word structure while reducing the vocabulary size at training time. It can be utilized as a preprocessing tool to segment the words of agglutinative languages for other natural language processing (NLP) tasks. Experimental results show that our morphologically motivated word segmentation method is better suited to the NMT model, achieving significant improvements on Turkish-English and Uyghur-Chinese machine translation tasks by reducing data sparseness and language complexity.
{ "section_name": [ "Introduction", "Approach", "Approach ::: Morpheme Segmentation", "Approach ::: Morpheme Segmentation ::: Stem with Combined Suffix", "Approach ::: Morpheme Segmentation ::: Stem with Singular Suffix", "Approach ::: Byte Pair Encoding (BPE)", "Approach ::: Morphologically Motivated Segmentation", "Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Combined Suffix", "Approach ::: Morphologically Motivated Segmentation ::: BPE on Stem with Singular Suffix", "Experiments ::: Experimental Setup ::: Turkish-English Data :", "Experiments ::: Experimental Setup ::: Uyghur-Chinese Data :", "Experiments ::: Experimental Setup ::: Data Preprocessing :", "Experiments ::: Experimental Setup ::: Number of Merge Operations :", "Experiments ::: NMT Configuration", "Results", "Discussion", "Related Work", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Neural machine translation (NMT) has achieved impressive performance on machine translation task in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, in consideration of time cost and space capacity, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem following with inaccurate and terrible translation results. Research indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For the low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all the above issues are more serious due to the fact that the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information with too many rare and unknown words in the training corpus.", "Both the Turkish and Uyghur are agglutinative and highly-inflected languages in which the word is formed by suffixes attaching to a stem BIBREF4. The word consists of smaller morpheme units without any splitter between them and its structure can be denoted as “stem + suffix1 + suffix2 + ... + suffixN”. A stem is attached in the rear by zero to many suffixes that have many inflected and morphological variants depending on case, number, gender, and so on. The complex morpheme structure and relatively free constituent order can produce very large vocabulary because of the derivational morphology, so when translating from the agglutinative languages, many words are unseen at training time. Moreover, due to the semantic context, the same word generally has different segmentation forms in the training corpus.", "For the purpose of incorporating morphology knowledge of agglutinative languages into word segmentation for NMT, we propose a morphological word segmentation method on the source-side of Turkish-English and Uyghur-Chinese machine translation tasks, which segments the complex words into simple and effective morpheme units while reducing the vocabulary size for model training. In this paper, we investigate and compare the following segmentation strategies:", "Stem with combined suffix", "Stem with singular suffix", "Byte Pair Encoding (BPE)", "BPE on stem with combined suffix", "BPE on stem with singular suffix", "The latter two segmentation strategies are our newly proposed methods. 
Experimental results show that our morphologically motivated word segmentation method achieves significant improvements of up to 1.2 and 2.5 BLEU points on the Turkish-English and Uyghur-Chinese machine translation tasks, respectively, over a strong pure-BPE baseline, indicating that it can provide better translation performance for the NMT model." ], [ "In this section, we elaborate on two popular word segmentation methods and our newly proposed segmentation strategies. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add a specific symbol after each separated subword unit, which helps the NMT model identify morpheme boundaries and capture semantic information effectively. Sentence examples with the different segmentation strategies for the Turkish-English machine translation task are shown in Table 1." ], [ "Turkish and Uyghur words are formed by a stem followed by an unlimited number of suffixes. Both stems and suffixes are called morphemes, and they are the smallest functional units in agglutinative languages. Studies have indicated that modeling language based on morpheme units can provide better performance BIBREF6. Morpheme segmentation splits a complex word into its stem and suffix morpheme units. This representation maintains a full description of the morphological properties of subwords while minimizing the data sparseness caused by inflection and allomorphy in highly inflected languages." ], [ "In this segmentation strategy, each word is segmented into a stem unit and a combined suffix unit. We add “##” after the stem unit and “$$” after the combined suffix unit. We denote this method as SCS. The segmented word can be denoted as the two parts “stem##” and “suffix1suffix2...suffixN$$”. If the original word has no suffix unit, the word is treated as its stem unit. All the following segmentation strategies follow this rule." ], [ "In this segmentation strategy, each word is segmented into a stem unit and a sequence of suffix units. We add “##” after the stem unit and “$$” after each singular suffix unit. We denote this method as SSS. The segmented word can be denoted as a sequence of “stem##”, “suffix1$$”, “suffix2$$”, ..., “suffixN$$”." ], [ "BPE BIBREF7 is originally a data compression technique; it was adapted by BIBREF5 for word segmentation and vocabulary reduction by encoding rare and unknown words as sequences of subword units, in which the most frequent character sequences are merged iteratively. Frequent character n-grams are eventually merged into a single symbol. This is based on the intuition that various word classes are translatable via smaller units than words. This method makes the NMT model capable of open-vocabulary translation, as it can generalize to translate and produce new words on the basis of these subword units. The BPE algorithm can be run on the dictionary extracted from a training text, with each word being weighted by its frequency. In this segmentation strategy, we add “@@” after each non-final subword unit of the segmented word." ], [ "The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at training time. The problem with BPE is that it does not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information.
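To make the SCS and SSS marking schemes described above concrete, here is a minimal Python sketch. The (stem, suffixes) analysis is assumed to come from an external morphological analyzer such as Zemberek, and the example word is illustrative only; whether a suffix-less word also carries the “##” marker is not fully specified above, so the sketch leaves it unmarked.

```python
# Minimal sketch of the SCS and SSS marker schemes, assuming the (stem, suffixes)
# analysis is produced by an external morphological analyzer.

def segment_scs(stem, suffixes):
    """Stem with Combined Suffix: 'stem##' + 'suffix1...suffixN$$'."""
    if not suffixes:
        return [stem]  # a word without suffixes is treated as its stem unit
    return [stem + "##", "".join(suffixes) + "$$"]

def segment_sss(stem, suffixes):
    """Stem with Singular Suffix: 'stem##', 'suffix1$$', ..., 'suffixN$$'."""
    if not suffixes:
        return [stem]
    return [stem + "##"] + [s + "$$" for s in suffixes]

if __name__ == "__main__":
    # hypothetical analysis of the Turkish word "evlerde" (ev + ler + de)
    stem, suffixes = "ev", ["ler", "de"]
    print(segment_scs(stem, suffixes))  # ['ev##', 'lerde$$']
    print(segment_sss(stem, suffixes))  # ['ev##', 'ler$$', 'de$$']
```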
Hence, based on the analysis of the above popular word segmentation methods, we propose morphologically motivated segmentation strategies that combine morpheme segmentation and BPE to further improve the translation performance of NMT.", "Compared with a sentence of word surface forms, the corresponding sentence of stem units contains only the structural information, without the morphological information, which allows better generalization over inflectional variants of the same word and reduces data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than on the words, and then apply it to the stem unit of each word after morpheme segmentation." ], [ "In this segmentation strategy, we first segment each word into a stem unit and a combined suffix unit as in SCS. Second, we apply BPE to the stem unit. Third, we add “$$” after the combined suffix unit. If the stem unit is not further segmented, we add “##” after it. Otherwise, we add “@@” after each non-final subword of the segmented stem unit. We denote this method as BPE-SCS." ], [ "In this segmentation strategy, we first segment each word into a stem unit and a sequence of suffix units as in SSS. Second, we apply BPE to the stem unit. Third, we add “$$” after each singular suffix unit. If the stem unit is not further segmented, we add “##” after it. Otherwise, we add “@@” after each non-final subword of the segmented stem unit. We denote this method as BPE-SSS." ], [ "Following BIBREF9, we use the WIT corpus BIBREF10 and the SETimes corpus BIBREF11 for model training, and use newsdev2016 from the Workshop on Machine Translation 2016 (WMT2016) for validation. The test data are newstest2016 and newstest2017." ], [ "We use the news data from the China Workshop on Machine Translation 2017 (CWMT2017) for model training, validation, and testing." ], [ "We utilize Zemberek with a morphological disambiguation tool to segment Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment Uyghur words into morpheme units. We employ the Python toolkit jieba for Chinese word segmentation. We apply BPE to the target-side words, set the number of merge operations to 35K for Chinese and 30K for English, and set the maximum sentence length to 150 tokens. The training corpus statistics of the Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3, respectively." ], [ "We set the number of merge operations on the stem units so as to keep the vocabulary sizes of the BPE, BPE-SCS, and BPE-SSS segmentation strategies on the same scale. We elaborate on these settings for our proposed word segmentation strategies in this section.", "In the Turkish-English machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 35K, the number of merge operations on the stem units for the BPE-SCS strategy to 15K, and the number of merge operations on the stem units for the BPE-SSS strategy to 25K. In the Uyghur-Chinese machine translation task, for the pure BPE strategy, we set the number of merge operations on the words to 38K, the number of merge operations on the stem units for the BPE-SCS strategy to 10K, and the number of merge operations on the stem units for the BPE-SSS strategy to 35K.
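As a sketch of how the proposed BPE-SSS strategy could be wired up with the merge-operation settings above, the snippet below applies a BPE model learned on stem units to the stem and keeps the suffix markers. The `bpe_segment` callable is an assumed interface standing in for any trained subword model (for example, subword-nmt or sentencepiece run over the stem side of the corpus); the toy splitter and example words are illustrative only. BPE-SCS differs only in that the suffixes are first joined into a single combined unit.

```python
# Sketch of BPE-SSS: BPE (trained on stems only) is applied to the stem unit,
# suffix units keep their '$$' marker. `bpe_segment` is an assumed callable.

def bpe_sss(stem, suffixes, bpe_segment):
    pieces = bpe_segment(stem)
    if len(pieces) == 1:
        stem_units = [pieces[0] + "##"]              # unsegmented stem keeps '##'
    else:
        # non-final stem subwords get '@@'; the final one is left unmarked
        stem_units = [p + "@@" for p in pieces[:-1]] + [pieces[-1]]
    return stem_units + [s + "$$" for s in suffixes]

def toy_bpe(stem):
    # stand-in for a real BPE model trained on stem units
    return [stem[:3], stem[3:]] if len(stem) > 5 else [stem]

if __name__ == "__main__":
    print(bpe_sss("okul", ["lar", "da"], toy_bpe))   # ['okul##', 'lar$$', 'da$$']
    print(bpe_sss("bilgisayar", ["lar"], toy_bpe))   # ['bil@@', 'gisayar', 'lar$$']
```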
The detailed training corpus statistics with the different segmentation strategies of Turkish and Uyghur are shown in Table 4 and Table 5, respectively.", "According to Table 4 and Table 5, we find that both Turkish and Uyghur have a very large vocabulary even in the low-resource training corpus. We therefore propose the morphological word segmentation strategies BPE-SCS and BPE-SSS, which additionally apply BPE to the stem units after morpheme segmentation and thus not only consider the morphological properties but also eliminate rare and unknown words." ], [ "We employ the self-attention-based Transformer model BIBREF13 implemented in the Sockeye toolkit BIBREF14. Both the encoder and decoder have 6 layers. We set the number of hidden units to 512, the number of heads for self-attention to 8, the source and target word embedding size to 512, and the number of hidden units in feed-forward layers to 2048. We train the NMT model using the Adam optimizer BIBREF15 with a batch size of 128 sentences, and we shuffle all the training data at each epoch. The label smoothing is set to 0.1. We report the result of averaging the parameters of the 4 best checkpoints on the validation perplexity. Decoding is performed by beam search with a beam size of 5. To effectively evaluate machine translation quality, we report the case-sensitive BLEU score with standard tokenization and the character n-gram ChrF3 score." ], [ "In this paper, we investigate and compare morpheme segmentation, BPE, and our proposed morphological segmentation strategies on low-resource, morphologically rich agglutinative languages. Experimental results for the Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 6 and Table 7, respectively." ], [ "According to Table 6 and Table 7, we find that both the BPE-SCS and BPE-SSS strategies outperform morpheme segmentation and the strong pure-BPE baseline. In particular, the BPE-SSS strategy performs best, achieving significant improvements of up to 1.2 BLEU points on the Turkish-English machine translation task and 2.5 BLEU points on the Uyghur-Chinese machine translation task. Furthermore, we find that the improvement of our proposed segmentation strategies is less pronounced on the Turkish-English task than on the Uyghur-Chinese task. A probable reason is that the Turkish-English training corpus consists of talk and news data, and most of the talk data are short, informal sentences that provide less language information for the NMT model than the news data. Moreover, the test corpus consists of news data, so the domain mismatch limits the improvement in machine translation quality.", "In addition, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects machine translation quality. Experimental results are shown in Table 8 and Table 9. We find that 25K merge operations for Turkish, and 30K and 35K for Uyghur, maximize the translation performance. The probable reason is that these numbers of merge operations generate a more appropriate vocabulary, containing effective morpheme units and moderately sized subword units, which generalizes better over morphologically rich words." ], [ "The NMT system is typically trained with a limited vocabulary, which creates a bottleneck for translation accuracy and generalization capability.
Many word segmentation methods that consider the morphological properties of different languages have been proposed to cope with the above problems.", "Bradbury and Socher BIBREF16 employed a modified Morfessor to incorporate morphological knowledge into word segmentation, but they neglected the morphological variation between subword units, which might result in ambiguous translation results. Sanchez-Cartagena and Toral BIBREF17 proposed a rule-based morphological word segmentation method for Finnish, which applies BPE to all the morpheme units uniformly without distinguishing their inner morphological roles. Huck BIBREF18 explored a target-side segmentation method for German, showing that cascading suffix splitting and compound splitting with BPE can achieve better translation results. Ataman et al. BIBREF19 presented a linguistically motivated vocabulary reduction approach for Turkish, which optimizes the segmentation complexity with a constraint on the vocabulary, based on a category-based hidden Markov model (HMM). Our work is closely related to their idea, but our method is simpler and easier to implement. Tawfik et al. BIBREF20 confirmed that there is some advantage in using a high-accuracy dialectal segmenter jointly with a language-independent word segmentation method like BPE. The main difference is that their approach additionally needs sufficient monolingual data to train a segmentation model, whereas ours does not need any external resources, which is very convenient for word segmentation in low-resource, morphologically rich agglutinative languages." ], [ "In this paper, we investigate morphological segmentation strategies for the low-resource, morphologically rich languages Turkish and Uyghur. Experimental results show that our proposed morphologically motivated word segmentation method is better suited to NMT. The BPE-SSS strategy achieves the best machine translation performance, as it can better preserve the syntactic and semantic information of words with complex morphology while also reducing the vocabulary size for model training. Moreover, we estimate how the number of merge operations on the stem units for the BPE-SSS strategy affects translation quality, and we find that an appropriate vocabulary size is beneficial for the NMT model.", "In future work, we plan to incorporate more linguistic and morphological knowledge into the training process of NMT to enhance its capacity for capturing syntactic structure and semantic information in low-resource, morphologically rich languages." ], [ "This work is supported by the National Natural Science Foundation of China, the Open Project of Key Laboratory of Xinjiang Uygur Autonomous Region, the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the High-level Talents Introduction Project of Xinjiang Uyghur Autonomous Region." ] ] }
{ "question": [ "How many linguistic and semantic features are learned?", "How is morphology knowledge implemented in the method?", "How does the word segmentation method work?", "Is the word segmentation method independently evaluated?" ], "question_id": [ "1f6180bba0bc657c773bd3e4269f87540a520ead", "57388bf2693d71eb966d42fa58ab66d7f595e55f", "47796c7f0a7de76ccb97ccbd43dc851bb8a613d5", "9d5153a7553b7113716420a6ddceb59f877eb617" ], "nlp_background": [ "two", "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "word segmentation", "word segmentation", "word segmentation", "word segmentation" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "8282253adbf7ac7e6158ff0b754a6b9d59034db0" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "A BPE model is applied to the stem after morpheme segmentation.", "evidence": [ "The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at the training time. The problem with BPE is that it do not consider the morpheme boundaries inside words, which might cause a loss of morphological properties and semantic information. Hence, on the analyses of the above popular word segmentation methods, we propose the morphologically motivated segmentation strategy that combines the morpheme segmentation and BPE for further improving the translation performance of NMT.", "Compared with the sentence of word surface forms, the corresponding sentence of stem units only contains the structure information without considering morphological information, which can make better generalization over inflectional variants of the same word and reduce data sparseness BIBREF8. Therefore, we learn a BPE model on the stem units in the training corpus rather than the words, and then apply it on the stem unit of each word after morpheme segmentation." ], "highlighted_evidence": [ "The problem with morpheme segmentation is that the vocabulary of stem units is still very large, which leads to many rare and unknown words at the training time. ", "Therefore, we learn a BPE model on the stem units in the training corpus rather than the words, and then apply it on the stem unit of each word after morpheme segmentation." ] } ], "annotation_id": [ "a41011c056c976583dbf7ab2539065e7263beddf" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5", "Zemberek", "BIBREF12" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We will elaborate two popular word segmentation methods and our newly proposed segmentation strategies in this section. The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. 
After word segmentation, we additionally add an specific symbol behind each separated subword unit, which aims to assist the NMT model to identify the morpheme boundaries and capture the semantic information effectively. The sentence examples with different segmentation strategies for Turkish-English machine translation task are shown in Table 1.", "We utilize the Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. We employ the python toolkits of jieba for Chinese word segmentation. We apply BPE on the target-side words and we set the number of merge operations to 35K for Chinese and 30K for English and we set the maximum sentence length to 150 tokens. The training corpus statistics of Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 2 and Table 3 respectively." ], "highlighted_evidence": [ "The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add an specific symbol behind each separated subword unit, which aims to assist the NMT model to identify the morpheme boundaries and capture the semantic information effectively. ", "We utilize the Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units. " ] } ], "annotation_id": [ "b791a08714ae7a7ec762f5a4b6c5e062579a4f15" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "06be1d572fd7d71ab3d646c5f4a4f4ed57a31b52" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
{ "caption": [ "Table 1: The sentence examples with different segmentation strategies for Turkish-English.", "Table 2: The training corpus statistics of TurkishEnglish machine translation task.", "Table 3: The training corpus statistics of UyghurChinese machine translation task.", "Table 4: The training corpus statistics with different segmentation strategies of Turkish", "Table 5: The training corpus statistics with different segmentation strategies of Uyghur", "Table 6: Experimental results of Turkish-English machine translation task.", "Table 7: Experimental results of Uyghur-Chinese machine translation task.", "Table 8: Different numbers of merge operations for BPE-SSS strategy on Turkish-English.", "Table 9: Different numbers of merge operations for BPE-SSS strategy on Uyghur-Chinese." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "4-Table4-1.png", "4-Table5-1.png", "5-Table6-1.png", "5-Table7-1.png", "6-Table8-1.png", "6-Table9-1.png" ] }
1910.10324
Deja-vu: Double Feature Presentation and Iterated Loss in Deep Transformer Networks
Deep acoustic models typically receive features in the first layer of the network, and process increasingly abstract representations in the subsequent layers. Here, we propose to feed the input features at multiple depths in the acoustic model. As our motivation is to allow acoustic models to re-examine their input features in light of partial hypotheses, we introduce intermediate model heads and loss functions. We study this architecture in the context of deep Transformer networks, and we use an attention mechanism over both the previous layer activations and the input features. To train the model's intermediate output hypotheses, we apply the objective function at each layer right before feature re-use. We find that the use of such intermediate losses significantly improves performance by itself, as well as enabling input feature re-use. We present results on both Librispeech and a large-scale video dataset, with relative improvements of 10 - 20% for Librispeech and 3.2 - 13% for videos.
{ "section_name": [ "Introduction", "Transformer Modules", "Iterated Feature Presentation", "Iterated Feature Presentation ::: Feature Re-Presentation", "Iterated Feature Presentation ::: Iterated Loss", "Experimental results ::: Dataset", "Experimental results ::: Target Units", "Experimental results ::: Deep Transformer Acoustic Model", "Experimental results ::: Results", "Related Work", "Conclusion" ], "paragraphs": [ [ "In this paper, we propose the processing of features not only in the input layer of a deep network, but in the intermediate layers as well. We are motivated by a desire to enable a neural network acoustic model to adaptively process the features depending on partial hypotheses and noise conditions. Many previous methods for adaptation have operated by linearly transforming either input features or intermediate layers in a two pass process where the transform is learned to maximize the likelihood of some adaptation data BIBREF0, BIBREF1, BIBREF2. Other methods have involved characterizing the input via factor analysis or i-vectors BIBREF3, BIBREF4. Here, we suggest an alternative approach in which adaptation can be achieved by re-presenting the feature stream at an intermediate layer of the network that is constructed to be correlated with the ultimate graphemic or phonetic output of the system.", "We present this work in the context of Transformer networks BIBREF5. Transformers have become a popular deep learning architecture for modeling sequential datasets, showing improvements in many tasks such as machine translation BIBREF5, language modeling BIBREF6 and autoregressive image generation BIBREF7. In the speech recognition field, Transformers have been proposed to replace recurrent neural network (RNN) architectures such as LSTMs and GRUs BIBREF8. A recent survey of Transformers in many speech related applications may be found in BIBREF9. Compared to RNNs, Transformers have several advantages, specifically an ability to aggregate information across all the time-steps by using a self-attention mechanism. Unlike RNNs, the hidden representations do not need to be computed sequentially across time, thus enabling significant efficiency improvements via parallelization.", "In the context of Transformer module, secondary feature analysis is enabled through an additional mid-network transformer module that has access both to previous-layer activations and the raw features. To implement this model, we apply the objective function several times at the intermediate layers, to encourage the development of phonetically relevant hypotheses. Interestingly, we find that the iterated use of an auxiliary loss in the intermediate layers significantly improves performance by itself, as well as enabling the secondary feature analysis.", "This paper makes two main contributions:", "We present improvements in the basic training process of deep transformer networks, specifically the iterated use of CTC or CE in intermediate layers, and", "We show that an intermediate-layer attention model with access to both previous-layer activations and raw feature inputs can significantly improve performance.", "We evaluate our proposed model on Librispeech and a large-scale video dataset. From our experimental results, we observe 10-20% relative improvement on Librispeech and 3.2-11% on the video dataset." ], [ "A transformer network BIBREF5 is a powerful approach to learning and modeling sequential data. 
A transformer network is itself constructed with a series of transformer modules that each perform some processing. Each module has a self-attention mechanism and several feed-forward layers, enabling easy parallelization over time-steps compared to recurrent models such as RNNs or LSTMs BIBREF10. We use the architecture defined in BIBREF5, and provide only a brief summary below.", "Assume we have an input sequence that is of length $S$: $X = [x_1,...,x_S]$. Each $x_i$ is itself a vector of activations. A transformer layer encodes $X$ into a corresponding output representation $Z = [z_1,...,z_S]$ as described below.", "Transformers are built around the notion of a self-attention mechanism that is used to extract the relevant information for each time-step $s$ from all time-steps $[1..S]$ in the preceding layer. Self attention is defined in terms of a Query, Key, Value triplet $\\lbrace {Q}, {K}, {V}\\rbrace \\in \\mathbb {R}^{S \\times d_k}$. In self-attention, the queries, keys and values are the columns of the input itself, $[x_1,...,x_S]$. The output activations are computed as:", "Transformer modules deploy a multi-headed version of self-attention. As described in BIBREF5, this is done by linearly projecting the queries, keys and values $P$ times with different, learned linear projections. Self-attention is then applied to each of these projected versions of Queries, Keys and Values. These are concatenated and once again projected, resulting in the final values. We refer to the input projection matrices as $W_p^{Q}, W_p^{K}, W_p^{V}$, and to the output projection as $W_O$. Multihead attention is implemented as", "Here, $ W_p^Q, W_p^K, W_p^V \\in \\mathbb {R}^{d_{k} \\times d_m}$, $d_m = d_{k} / P$, and $W_O \\in \\mathbb {R}^{Pd_m \\times d_k}$.", "After self-attention, a transformer module applies a series of linear layer, RELU, layer-norm and dropout operations, as well as the application of residual connections. The full sequence of processing is illustrated in Figure FIGREF3." ], [ "In this section, we present our proposal for allowing the network to (re)-consider the input features in the light of intermediate processing. We do this by again deploying a self-attention mechanism to combine the information present in the original features with the information available in the activations of an intermediate layer. As described earlier, we calculate the output posteriors and auxiliary loss at the intermediate layer as well. The overall architecture is illustrated in Figure FIGREF6. Here, we have used a 24 layer network, with feature re-presentation after the 12th layer.", "In the following subsections, we provide detail on the feature re-presentation mechanism, and iterated loss calculation." 
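Before the detailed subsections, a schematic PyTorch sketch of the overall idea may be useful: a stack of Transformer layers with an auxiliary head and loss halfway up, after which the input features are re-presented by attending from the projected hidden states (the split-B choice discussed below) over the time-axis concatenation of projected features and projected hidden states. This is an illustrative reading of this section, not the authors' implementation: the VGG front end is replaced by a linear projection, the sinusoidal position encoding is omitted, frame-level cross-entropy stands in for the CTC and hybrid objectives, and all module names and demo sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DejaVuSketch(nn.Module):
    def __init__(self, feat_dim=80, d_model=512, n_heads=8, n_layers=24,
                 n_targets=5001, d_proj=768, aux_weight=0.3):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=2048, batch_first=True)
        half = n_layers // 2
        self.lower = nn.ModuleList([make_layer() for _ in range(half)])
        self.upper = nn.ModuleList([make_layer() for _ in range(n_layers - half)])
        self.in_proj = nn.Linear(feat_dim, d_model)        # stand-in for the VGG front end
        self.aux_head = nn.Sequential(nn.Linear(d_model, 256), nn.LeakyReLU(),
                                      nn.Linear(256, n_targets))
        self.final_head = nn.Linear(d_model, n_targets)
        self.proj_feat = nn.Sequential(nn.Linear(feat_dim, d_proj), nn.LayerNorm(d_proj))
        self.proj_hid = nn.Sequential(nn.Linear(d_model, d_proj), nn.LayerNorm(d_proj))
        self.merge_attn = nn.MultiheadAttention(d_proj, n_heads, batch_first=True)
        self.post = nn.Linear(d_proj, d_model)
        self.aux_weight = aux_weight

    def forward(self, feats, targets):
        z = self.in_proj(feats)
        for block in self.lower:                           # layers 1..M/2
            z = block(z)
        aux_logits = self.aux_head(z)                      # intermediate hypothesis

        # Feature re-presentation: keys/values are the time-axis concatenation of
        # projected features and projected hidden states; queries are the hidden
        # states (split B), so the output keeps the original sequence length S.
        f, h = self.proj_feat(feats), self.proj_hid(z)
        kv = torch.cat([f, h], dim=1)
        merged, _ = self.merge_attn(h, kv, kv)
        z = F.relu(self.post(merged))

        for block in self.upper:                           # layers M/2+1..M
            z = block(z)
        logits = self.final_head(z)

        frame_ce = lambda lg: F.cross_entropy(lg.transpose(1, 2), targets)
        return frame_ce(logits) + self.aux_weight * frame_ce(aux_logits)

# toy run: 2 utterances, 50 frames of 80-dim filterbanks, per-frame targets
model = DejaVuSketch(n_layers=4)                           # shallow depth for the demo
loss = model(torch.randn(2, 50, 80), torch.randint(0, 5001, (2, 50)))
print(loss.item())
```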
], [ "We process the features in the intermediate later by concatenating a projection of the original features with a projection of previous hidden layer activations, and then applying self-attention.", "First, we project both the the input and intermediate layer features $(Z_0 \\in \\mathbb {R}^{S \\times d_0}, Z_{k} \\in \\mathbb {R}^{S \\times d_{k}} )$, apply layer normalization and concatenate with position encoding:", "where $d_0$ is the input feature dimension, $d_k$ is the Transformer output dimension, $W_1 \\in \\mathbb {R}^{d_0 \\times d_c}, W_2 \\in \\mathbb {R}^{d_{k} \\times d_c}$ and $E \\in \\mathbb {R}^{S \\times d_{e}}$ is a sinusoidal position encoding BIBREF5.", "After we project both information sources to the same dimensionality, we merge the information by using time-axis concatenation:", "Then, we extract relevant features with extra Transformer layer and followed by linear projection and ReLU:", "where $W_3 \\in \\mathbb {R}^{d_{k+1}^{^{\\prime }} \\times d_{k+1}}$ is a linear projection. All biases in the formula above are omitted for simplicity.", "Note that in doing time-axis concatenation, our Key and Value sequences are twice as long as the original input. In the standard self-attention where the Query is the same as the Key and Value, the output preserves the sequence length. Therefore, in order to maintain the necessary sequence length $S$, we select either the first half (split A) or the second half (split B) to represent the combined information. The difference between these two is that the use of split A uses the projected input features as the Query set, while split B uses the projected higher level activations as the Query. In initial experiments, we found that the use of high-level features (split B) as queries is preferable. We illustrates this operation on Figure FIGREF11.", "Another way of combining information from the features with an intermediate layer is to concatenate the two along with the feature rather than the time axis. However, in initial experiments, we found that time axis concatenation produces better results, and focus on that in the experimental results." ], [ "We have found it beneficial to apply the loss function at several intermediate layers of the network. Suppose there are $M$ total layers, and define a subset of these layers at which to apply the loss function: $K = \\lbrace k_1, k_2, ..., k_L\\rbrace \\subseteq \\lbrace 1,..,M-1\\rbrace $. The total objective function is then defined as", "where $Z_{k_l}$ is the $k_l$-th Transformer layer activations, $Y$ is the ground-truth transcription for CTC and context dependent states for hybrid ASR, and $Loss(P, Y)$ can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR. The coefficient $\\lambda $ scales the auxiliary loss and we set $\\lambda = 0.3$ based on our preliminary experiments. We illustrate the auxiliary prediction and loss in Figure FIGREF6." ], [ "We evaluate our proposed module on both the Librispeech BIBREF12 dataset and a large-scale English video dataset. In the Librispeech training set, there are three splits, containing 100 and 360 hours sets of clean speech and 500 hours of other speech. We combined everything, resulting in 960 hours of training data. For the development set, there are also two splits: dev-clean and dev-other. For the test set, there is an analogous split.", "The video dataset is a collection of public and anonymized English videos. It consists of a 1000 hour training set, a 9 hour dev set, and a $46.1$ hour test set. 
The test set comprises an $8.5$ hour curated set of carefully selected very clean videos, a 19 hour clean set and a $18.6$ hour noisy set BIBREF13. For the hybrid ASR experiments on video dataset, alignments were generated with a production system trained with 14k hours.", "All speech features are extracted by using log Mel-filterbanks with 80 dimensions, a 25 ms window size and a 10 ms time step between two windows. Then we apply mean and variance normalization." ], [ "For CTC training, we use word-pieces as our target. During training, the reference is tokenized to 5000 sub-word units using sentencepiece with a uni-gram language model BIBREF14. Neural networks are thus used to produce a posterior distribution for 5001 symbols (5000 sub-word units plus blank symbol) every frame. For decoding, each sub-word is modeled by a HMM with two states where the last states share the same blank symbol probability; the best sub-word segmentation of each word is used to form a lexicon; these HMMs, lexicon are then combined with the standard $n$-gram via FST BIBREF15 to form a static decoding graph. Kaldi decoderBIBREF16 is used to produce the best hypothesis.", "We further present results with hybrid ASR systems. In this, we use the same HMM topology, GMM bootstrapping and decision tree building procedure as BIBREF13. Specifically, we use context-dependent (CD) graphemes as modeling units. On top of alignments from a GMM model, we build a decision tree to cluster CD graphemes. This results in 7248 context dependent units for Librispeech, and 6560 units for the video dataset. Training then proceeds with the CE loss function. We also apply SpecAugment BIBREF17 online during training, using the LD policy without time warping. For decoding, a standard Kaldi's WFST decoder BIBREF16 is used." ], [ "All neural networks are implemented with the in-house extension of the fairseq BIBREF18 toolkit. Our speech features are produced by processing the log Mel-spectrogram with two VGG BIBREF19 layers that have the following configurations: (1) two 2-D convolutions with 32 output filters, kernel=3, stride=1, ReLU activation, and max-pooling kernel=2, (2) two 2-D convolutions with 64 output filters, kernel=3, stride=1 and max-pooling kernel=2 for CTC or max-pooling kernel=1 for hybrid. After the VGG layers, the total number of frames are subsampled by (i) 4x for CTC, or (ii) 2x for hybrid, thus enabling us to reduce the run-time and memory usage significantly. After VGG processing, we use 24 Transformer layers with $d_k=512$ head dimensions (8 heads, each head has 64 dimensions), 2048 feedforward hidden dimensions (total parameters $\\pm $ 80 millions), and dropout $0.15$. For the proposed models, we utilized an auxiliary MLP with two linear layers with 256 hidden units, LeakyReLU activation and softmax (see Sec. SECREF3). We set our position encoding dimensions $d_e=256$ and pre-concatenation projection $d_c=768$ for the feature re-presentation layer. The loss function is either CTC loss or hybrid CE loss." ], [ "Table TABREF19 presents CTC based results for the Librispeech dataset, without data augmentation. Our baseline is a 24 layer Transformer network trained with CTC. For the proposed method, we varied the number and placement of iterated loss and the feature re-presentation. The next three results show the effect of using CTC multiple times. We see 12 and 8% relative improvements for test-clean and test-other. 
Adding feature re-presentation gives a further boost, with net 20 and 18% relative improvements over the baseline.", "Table TABREF20 shows results for Librispeech with SpecAugment. We test both CTC and CE/hybrid systems. There are consistent gains first from the iterated loss, and then from multiple feature presentation. We also run additional CTC experiments with a 36-layer Transformer (approximately 120 million parameters). The 36-layer baseline has the same performance as the 24-layer one, but with the proposed methods added, the 36-layer model improves to give the best results. This shows that our proposed methods can improve even very deep models.", "As shown in Table TABREF21, the proposed methods also provide large performance improvements on the curated video set, up to 13% with CTC, and up to 9% with the hybrid model. We also observe moderate gains of between 3.2 and 8% relative on the clean and noisy video sets." ], [ "In recent years, Transformer models have become an active research topic in speech processing. The key feature of Transformer networks is self-attention, which produces performance comparable to or better than LSTMs when used for encoder-decoder based ASR BIBREF23, as well as when trained with CTC BIBREF9. Speech-Transformers BIBREF24 also produce performance comparable to the LSTM-based attention model, but with higher training speed on a single GPU. Abdelrahman et al. BIBREF8 integrate a convolution layer to capture audio context and reduce the WER on Librispeech.", "The use of an objective function in intermediate layers has been found useful in several previous works such as image classification BIBREF25 and language modeling BIBREF26. In BIBREF27, the authors pre-trained an RNN-T based model by using a hierarchical CTC criterion with different target units. In this paper, we do not need additional types of target units; instead, we use the same tokenization and targets for both the intermediate and final losses.", "The application of the objective function to intermediate layers is also similar in spirit to the use of KL-divergence in BIBREF28, which estimates output posteriors at an intermediate layer and regularizes them towards the distributions at the final layer. In contrast to this approach, the direct application of the objective function does not require the network to have a good output distribution before the new gradient contribution is meaningful." ], [ "In this paper, we have proposed a method for re-processing the input features in light of the information available at an intermediate network layer. We do this in the context of deep Transformer networks, via a self-attention mechanism over both the features and the hidden state representations. To encourage meaningful partial results, we calculate the objective function at intermediate layers of the network as well as at the output layer. This improves performance in and of itself, and, when combined with feature re-presentation, we observe consistent relative improvements of 10 - 20% for Librispeech and 3.2 - 13% for videos." ] ] }
{ "question": [ "Do they normalize the calculated intermediate output hypotheses to compensate for the incompleteness?", "How many layers do they use in their best performing network?", "Do they just sum up all the loses the calculate to end up with one single loss?", "Does their model take more time to train than regular transformer models?" ], "question_id": [ "55c840a2f1f663ab2bff984ae71501b17429d0c0", "fa5357c56ea80a21a7ca88a80f21711c5431042c", "35915166ab2fd3d39c0297c427d4ac00e8083066", "e6c872fea474ea96ca2553f7e9d5875df4ef55cd" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "06c093783cd956f89b428df62843b8f6166d42a9" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "36" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Table TABREF20 shows results for Librispeech with SpecAugment. We test both CTC and CE/hybrid systems. There are consistent gains first from iterated loss, and then from multiple feature presentation. We also run additional CTC experiments with 36 layers Transformer (total parameters $\\pm $120 millions). The baseline with 36 layers has the same performance with 24 layers, but by adding the proposed methods, the 36 layer performance improved to give the best results. This shows that our proposed methods can improve even very deep models." ], "highlighted_evidence": [ "We also run additional CTC experiments with 36 layers Transformer (total parameters $\\pm $120 millions). The baseline with 36 layers has the same performance with 24 layers, but by adding the proposed methods, the 36 layer performance improved to give the best results. " ] } ], "annotation_id": [ "cb42b9892ca7565f4e0da438545453083ac4d2b4" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "We have found it beneficial to apply the loss function at several intermediate layers of the network. Suppose there are $M$ total layers, and define a subset of these layers at which to apply the loss function: $K = \\lbrace k_1, k_2, ..., k_L\\rbrace \\subseteq \\lbrace 1,..,M-1\\rbrace $. The total objective function is then defined as", "where $Z_{k_l}$ is the $k_l$-th Transformer layer activations, $Y$ is the ground-truth transcription for CTC and context dependent states for hybrid ASR, and $Loss(P, Y)$ can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR. The coefficient $\\lambda $ scales the auxiliary loss and we set $\\lambda = 0.3$ based on our preliminary experiments. We illustrate the auxiliary prediction and loss in Figure FIGREF6." ], "highlighted_evidence": [ "Suppose there are $M$ total layers, and define a subset of these layers at which to apply the loss function: $K = \\lbrace k_1, k_2, ..., k_L\\rbrace \\subseteq \\lbrace 1,..,M-1\\rbrace $. 
The total objective function is then defined as\n\nwhere $Z_{k_l}$ is the $k_l$-th Transformer layer activations, $Y$ is the ground-truth transcription for CTC and context dependent states for hybrid ASR, and $Loss(P, Y)$ can be defined as CTC objective BIBREF11 or cross entropy for hybrid ASR. T" ] } ], "annotation_id": [ "c84e6b5dd4cad675549fefc0b9eb2da817cb56b7" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "9d90dd89f90c61c800b7589ee0eb34b2e277fec1" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Fig. 2. A 24 layer transformer with one auxiliary loss and feature re-presentation in the 12-th layer. Z0 represents the input features. Orange boxes represent an additional MLP network and softmax. Green boxes represent linear projections and layer-norm.", "Fig. 3. Merging input features and intermediate layer activations with time axis concatenation for the Key and Value. Transformer layer finds relevant features based on the Query. Split A uses projected input features as the Query and Split B used projected intermediate layer activations as the Query.", "Table 1. Librispeech CTC experimental results without any data augmentation technique and decoded with FST based on 4-gram LM.", "Table 3. Video English dataset experimental results.", "Table 2. Librispeech experimental results. The baseline consists of VGG + 24 layers of Transformers trained with SpecAugment [18]. Trf is transformer. 4-gr LM is the official 4-gram word LM. S2S denotes sequence-to-sequence architecture." ], "file": [ "2-Figure2-1.png", "3-Figure3-1.png", "4-Table1-1.png", "4-Table3-1.png", "4-Table2-1.png" ] }
1910.05456
Acquisition of Inflectional Morphology in Artificial Neural Networks With Prior Knowledge
How does knowledge of one language's morphology influence learning of inflection rules in a second one? In order to investigate this question in artificial neural network models, we perform experiments with a sequence-to-sequence architecture, which we train on different combinations of eight source and three target languages. A detailed analysis of the model outputs suggests the following conclusions: (i) if source and target language are closely related, acquisition of the target language's inflectional morphology constitutes an easier task for the model; (ii) knowledge of a prefixing (resp. suffixing) language makes acquisition of a suffixing (resp. prefixing) language's morphology more challenging; and (iii) surprisingly, a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology, independent of their relatedness.
{ "section_name": [ "Introduction", "Task", "Task ::: Formal definition.", "Model ::: Pointer–Generator Network", "Model ::: Pointer–Generator Network ::: Encoders.", "Model ::: Pointer–Generator Network ::: Attention.", "Model ::: Pointer–Generator Network ::: Decoder.", "Model ::: Pretraining and Finetuning", "Experimental Design ::: Target Languages", "Experimental Design ::: Source Languages", "Experimental Design ::: Hyperparameters and Data", "Quantitative Results", "Qualitative Results", "Qualitative Results ::: Stem Errors", "Qualitative Results ::: Affix Errors", "Qualitative Results ::: Miscellaneous Errors", "Qualitative Results ::: Error Analysis: English", "Qualitative Results ::: Error Analysis: Spanish", "Qualitative Results ::: Error Analysis: Zulu", "Qualitative Results ::: Limitations", "Related Work ::: Neural network models for inflection.", "Related Work ::: Cross-lingual transfer in NLP.", "Related Work ::: Acquisition of morphological inflection.", "Conclusion and Future Work", "Acknowledgments" ], "paragraphs": [ [ "A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax.", "Within the area of natural language processing (NLP) research, experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3, BIBREF4, BIBREF5. Often, so-called probing tasks are used, which require a specific subset of linguistic knowledge and can, thus, be leveraged for qualitative evaluation. The goal is to answer the question: What do neural networks learn that helps them to succeed in a given task?", "Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the \"native language\", in neural network models.", "To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. 
Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology." ], [ "Many of the world's languages exhibit rich inflectional morphology: the surface form of an individual lexical entry changes in order to express properties such as person, grammatical gender, or case. The citation form of a lexical entry is referred to as the lemma. The set of all possible surface forms or inflections of a lemma is called its paradigm. Each inflection within a paradigm can be associated with a tag, i.e., 3rdSgPres is the morphological tag associated with the inflection dances of the English lemma dance. We display the paradigms of dance and eat in Table TABREF1.", "The presence of rich inflectional morphology is problematic for NLP systems as it increases word form sparsity. For instance, while English verbs can have up to 5 inflected forms, Archi verbs have thousands BIBREF7, even by a conservative count. Thus, an important task in the area of morphology is morphological inflection BIBREF8, BIBREF9, which consists of mapping a lemma to an indicated inflected form. An (irregular) English example would be", "with PAST being the target tag, denoting the past tense form. Additionally, a rich inflectional morphology is also challenging for L2 language learners, since both rules and their exceptions need to be memorized.", "In NLP, morphological inflection has recently frequently been cast as a sequence-to-sequence problem, where the sequence of target (sub-)tags together with the sequence of input characters constitute the input sequence, and the characters of the inflected word form the output. Neural models define the state of the art for the task and obtain high accuracy if an abundance of training data is available. Here, we focus on learning of inflection from limited data if information about another language's morphology is already known. We, thus, loosely simulate an L2 learning setting." ], [ "Let ${\\cal M}$ be the paradigm slots which are being expressed in a language, and $w$ a lemma in that language. We then define the paradigm $\\pi $ of $w$ as:", "$f_k[w]$ denotes an inflected form corresponding to tag $t_{k}$, and $w$ and $f_k[w]$ are strings consisting of letters from an alphabet $\\Sigma $.", "The task of morphological inflection consists of predicting a missing form $f_i[w]$ from a paradigm, given the lemma $w$ together with the tag $t_i$." ], [ "The models we experiment with are based on a pointer–generator network architecture BIBREF10, BIBREF11, i.e., a recurrent neural network (RNN)-based sequence-to-sequence network with attention and a copy mechanism. A standard sequence-to-sequence model BIBREF12 has been shown to perform well for morphological inflection BIBREF13 and has, thus, been subject to cognitively motivated experiments BIBREF14 before. Here, however, we choose the pointer–generator variant of sharma-katrapati-sharma:2018:K18-30, since it performs better in low-resource settings, which we will assume for our target languages. We will explain the model shortly in the following and refer the reader to the original paper for more details." 
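As an illustration of the task setup, the sketch below turns a (lemma, tag, form) triple into the three streams that the two-encoder sequence-to-sequence model described in the following subsections consumes: the subtag sequence, the lemma's character sequence, and the target character sequence. The UniMorph-style tag strings and the end-of-sequence symbol are assumptions for illustration, not the exact preprocessing used in the paper.

```python
# Minimal sketch of the data representation for the inflection task: one input
# stream of morphological subtags, one of lemma characters, and the inflected
# form's characters as the decoding target.

def make_example(lemma, tag, form, eos="</s>"):
    tag_seq = tag.split(";")          # subtags for the tag encoder, e.g. "V;PST"
    char_seq = list(lemma)            # lemma characters for the character encoder
    target = list(form) + [eos]       # characters the decoder should produce
    return tag_seq, char_seq, target

print(make_example("dance", "V;3;SG;PRS", "dances"))
# (['V', '3', 'SG', 'PRS'], ['d', 'a', 'n', 'c', 'e'], ['d', 'a', 'n', 'c', 'e', 's', '</s>'])
print(make_example("eat", "V;PST", "ate"))   # an irregular paradigm cell
```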
], [ "Our architecture employs two separate encoders, which are both bi-directional long short-term memory (LSTM) networks BIBREF15: The first processes the morphological tags which describe the desired target form one by one. The second encodes the sequence of characters of the input word." ], [ "Two separate attention mechanisms are used: one per encoder LSTM. Taking all respective encoder hidden states as well as the current decoder hidden state as input, each of them outputs a so-called context vector, which is a weighted sum of all encoder hidden states. The concatenation of the two individual context vectors results in the final context vector $c_t$, which is the input to the decoder at time step $t$." ], [ "Our decoder consists of a uni-directional LSTM. Unlike a standard sequence-to-sequence model, a pointer–generator network is not limited to generating characters from the vocabulary to produce the output. Instead, the model gives certain probability to copying elements from the input over to the output. The probability of a character $y_t$ at time step $t$ is computed as a sum of the probability of $y_t$ given by the decoder and the probability of copying $y_t$, weighted by the probabilities of generating and copying:", "$p_{\\textrm {dec}}(y_t)$ is calculated as an LSTM update and a projection of the decoder state to the vocabulary, followed by a softmax function. $p_{\\textrm {copy}}(y_t)$ corresponds to the attention weights for each input character. The model computes the probability $\\alpha $ with which it generates a new output character as", "for context vector $c_t$, decoder state $s_t$, embedding of the last output $y_{t-1}$, weights $w_c$, $w_s$, $w_y$, and bias vector $b$. It has been shown empirically that the copy mechanism of the pointer–generator network architecture is beneficial for morphological generation in the low-resource setting BIBREF16." ], [ "Pretraining and successive fine-tuning of neural network models is a common approach for handling of low-resource settings in NLP. The idea is that certain properties of language can be learned either from raw text, related tasks, or related languages. Technically, pretraining consists of estimating some or all model parameters on examples which do not necessarily belong to the final target task. Fine-tuning refers to continuing training of such a model on a target task, whose data is often limited. While the sizes of the pretrained model parameters usually remain the same between the two phases, the learning rate or other details of the training regime, e.g., dropout, might differ. Pretraining can be seen as finding a suitable initialization of model parameters, before training on limited amounts of task- or language-specific examples.", "In the context of morphological generation, pretraining in combination with fine-tuning has been used by kann-schutze-2018-neural, which proposes to pretrain a model on general inflection data and fine-tune on examples from a specific paradigm whose remaining forms should be automatically generated. Famous examples for pretraining in the wider area of NLP include BERT BIBREF17 or GPT-2 BIBREF18: there, general properties of language are learned using large unlabeled corpora.", "Here, we are interested in pretraining as a simulation of familiarity with a native language. By investigating a fine-tuned model we ask the question: How does extensive knowledge of one language influence the acquisition of another?" 
], [ "We choose three target languages.", "English (ENG) is a morphologically impoverished language, as far as inflectional morphology is concerned. Its verbal paradigm only consists of up to 5 different forms and its nominal paradigm of only up to 2. However, it is one of the most frequently spoken and taught languages in the world, making its acquisition a crucial research topic.", "Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\\rightarrow $ ue).", "Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing." ], [ "For pretraining, we choose languages with different degrees of relatedness and varying morphological similarity to English, Spanish, and Zulu. We limit our experiments to languages which are written in Latin script.", "As an estimate for morphological similarity we look at the features from the Morphology category mentioned in The World Atlas of Language Structures (WALS). An overview of the available features as well as the respective values for our set of languages is shown in Table TABREF13.", "We decide on Basque (EUS), French (FRA), German (DEU), Hungarian (HUN), Italian (ITA), Navajo (NAV), Turkish (TUR), and Quechua (QVH) as source languages.", "Basque is a language isolate. Its inflectional morphology makes similarly frequent use of prefixes and suffixes, with suffixes mostly being attached to nouns, while prefixes and suffixes can both be employed for verbal inflection.", "French and Italian are Romance languages, and thus belong to the same family as the target language Spanish. Both are suffixing and fusional languages.", "German, like English, belongs to the Germanic language family. It is a fusional, predominantly suffixing language and, similarly to Spanish, makes use of stem changes.", "Hungarian, a Finno-Ugric language, and Turkish, a Turkic language, both exhibit an agglutinative morphology, and are predominantly suffixing. They further have vowel harmony systems.", "Navajo is an Athabaskan language and the only source language which is strongly prefixing. It further exhibits consonant harmony among its sibilants BIBREF19, BIBREF20.", "Finally, Quechua, a Quechuan language spoken in South America, is again predominantly suffixing and unrelated to all of our target languages." ], [ "We mostly use the default hyperparameters by sharma-katrapati-sharma:2018:K18-30. In particular, all RNNs have one hidden layer of size 100, and all input and output embeddings are 300-dimensional.", "For optimization, we use ADAM BIBREF21. Pretraining on the source language is done for exactly 50 epochs. To obtain our final models, we then fine-tune different copies of each pretrained model for 300 additional epochs for each target language. We employ dropout BIBREF22 with a coefficient of 0.3 for pretraining and, since that dataset is smaller, with a coefficient of 0.5 for fine-tuning.", "We make use of the datasets from the CoNLL–SIGMORPHON 2018 shared task BIBREF9. The organizers provided a low, medium, and high setting for each language, with 100, 1000, and 10000 examples, respectively. For all L1 languages, we train our models on the high-resource datasets with 10000 examples. For fine-tuning, we use the low-resource datasets." 
], [ "In Table TABREF18, we show the final test accuracy for all models and languages. Pretraining on EUS and NAV results in the weakest target language inflection models for ENG, which might be explained by those two languages being unrelated to ENG and making at least partial use of prefixing, while ENG is a suffixing language (cf. Table TABREF13). In contrast, HUN and ITA yield the best final models for ENG. This is surprising, since DEU is the language in our experiments which is closest related to ENG.", "For SPA, again HUN performs best, followed closely by ITA. While the good performance of HUN as a source language is still unexpected, ITA is closely related to SPA, which could explain the high accuracy of the final model. As for ENG, pretraining on EUS and NAV yields the worst final models – importantly, accuracy is over $15\\%$ lower than for QVH, which is also an unrelated language. This again suggests that the prefixing morphology of EUS and NAV might play a role.", "Lastly, for ZUL, all models perform rather poorly, with a minimum accuracy of 10.7 and 10.8 for the source languages QVH and EUS, respectively, and a maximum accuracy of 24.9 for a model pretrained on Turkish. The latter result hints at the fact that a regular and agglutinative morphology might be beneficial in a source language – something which could also account for the performance of models pretrained on HUN." ], [ "For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories." ], [ "SUB(X): This error consists of a wrong substitution of one character with another. SUB(V) and SUB(C) denote this happening with a vowel or a consonant, respectively. Letters that differ from each other by an accent count as different vowels.", "Example: decultared instead of decultured", "DEL(X): This happens when the system ommits a letter from the output. DEL(V) and DEL(C) refer to a missing vowel or consonant, respectively.", "Example: firte instead of firtle", "NO_CHG(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (NO_CHG(V)) or a consonant (NO_CHG(C)), but this is missing in the predicted form.", "Example: verto instead of vierto", "MULT: This describes cases where two or more errors occur in the stem. Errors concerning the affix are counted for separately.", "Example: aconcoonaste instead of acondicionaste", "ADD(X): This error occurs when a letter is mistakenly added to the inflected form. ADD(V) refers to an unnecessary vowel, ADD(C) refers to an unnecessary consonant.", "Example: compillan instead of compilan", "CHG2E(X): This error occurs when inflecting the lemma to the gold form requires a change of either a vowel (CHG2E(V)) or a consonant (CHG2E(C)), and this is done, but the resulting vowel or consonant is incorrect.", "Example: propace instead of propague" ], [ "AFF: This error refers to a wrong affix. 
This can be either a prefix or a suffix, depending on the correct target form.", "Example: ezoJulayi instead of esikaJulayi", "CUT: This consists of cutting too much of the lemma's prefix or suffix before attaching the inflected form's prefix or suffix, respectively.", "Example: irradiseis instead of irradiaseis" ], [ "REFL: This happens when a reflective pronoun is missing in the generated form.", "Example: doliéramos instead of nos doliéramos", "REFL_LOC: This error occurs if the reflective pronouns appears at an unexpected position within the generated form.", "Example: taparsebais instead of os tapabais", "OVERREG: Overregularization errors occur when the model predicts a form which would be correct if the lemma's inflections were regular but they are not.", "Example: underteach instead of undertaught" ], [ "Table TABREF35 displays the errors found in the 75 first ENG development examples, for each source language. From Table TABREF19, we know that HUN $>$ ITA $>$ TUR $>$ DEU $>$ FRA $>$ QVH $>$ NAV $>$ EUS, and we get a similar picture when analyzing the first examples. Thus, especially keeping HUN and TUR in mind, we cautiously propose a first conclusion: familiarity with languages which exhibit an agglutinative morphology simplifies learning of a new language's morphology.", "Looking at the types of errors, we find that EUS and NAV make the most stem errors. For QVH we find less, but still over 10 more than for the remaining languages. This makes it seem that models pretrained on prefixing or partly prefixing languages indeed have a harder time to learn ENG inflectional morphology, and, in particular, to copy the stem correctly. Thus, our second hypotheses is that familiarity with a prefixing language might lead to suspicion of needed changes to the part of the stem which should remain unaltered in a suffixing language. DEL(X) and ADD(X) errors are particularly frequent for EUS and NAV, which further suggests this conclusion.", "Next, the relatively large amount of stem errors for QVH leads to our second hypothesis: language relatedness does play a role when trying to produce a correct stem of an inflected form. This is also implied by the number of MULT errors for EUS, NAV and QVH, as compared to the other languages.", "Considering errors related to the affixes which have to be generated, we find that DEU, HUN and ITA make the fewest. This further suggests the conclusion that, especially since DEU is the language which is closest related to ENG, language relatedness plays a role for producing suffixes of inflected forms as well.", "Our last observation is that many errors are not found at all in our data sample, e.g., CHG2E(X) or NO_CHG(C). This can be explained by ENG having a relatively poor inflectional morphology, which does not leave much room for mistakes." ], [ "The errors committed for SPA are shown in Table TABREF37, again listed by source language. Together with Table TABREF19 it gets clear that SPA inflectional morphology is more complex than that of ENG: systems for all source languages perform worse.", "Similarly to ENG, however, we find that most stem errors happen for the source languages EUS and NAV, which is further evidence for our previous hypothesis that familiarity with prefixing languages impedes acquisition of a suffixing one. Especially MULT errors are much more frequent for EUS and NAV than for all other languages. ADD(X) happens a lot for EUS, while ADD(C) is also frequent for NAV. 
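As a purely illustrative companion to the stem-error categories defined above (the paper's annotation was done manually), the toy function below tallies SUB, DEL, and ADD errors from a character alignment between a predicted and a gold form. The vowel set and the restriction to these three categories are assumptions of the sketch, not part of the paper's procedure.

```python
# Toy tally of SUB/DEL/ADD errors, split into vowel vs. consonant, based on a
# character alignment between the predicted and the gold form.
from collections import Counter
from difflib import SequenceMatcher

VOWELS = set("aeiouáéíóúü")  # assumed, Spanish-style vowel inventory


def kind(ch):
    return "V" if ch.lower() in VOWELS else "C"


def tally_errors(predicted, gold):
    counts = Counter()
    for op, i1, i2, j1, j2 in SequenceMatcher(None, predicted, gold).get_opcodes():
        if op == "replace":              # wrong substitution of characters
            for ch in gold[j1:j2]:
                counts[f"SUB({kind(ch)})"] += 1
        elif op == "insert":             # present in gold, missing from the prediction
            for ch in gold[j1:j2]:
                counts[f"DEL({kind(ch)})"] += 1
        elif op == "delete":             # extra character in the prediction
            for ch in predicted[i1:i2]:
                counts[f"ADD({kind(ch)})"] += 1
    return counts


print(tally_errors("firte", "firtle"))        # Counter({'DEL(C)': 1})
print(tally_errors("compillan", "compilan"))  # Counter({'ADD(C)': 1})
```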
Models pretrained on either language have difficulties with vowel changes, which reflects in NO_CHG(V). Thus, we conclude that this phenomenon is generally hard to learn.", "Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be benficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well." ], [ "In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language.", "Besides that, results differ from those for ENG and SPA. First of all, more mistakes are made for all source languages. However, there are also several finer differences. For ZUL, the model pretrained on QVH makes the most stem errors, in particular 4 more than the EUS model, which comes second. Given that ZUL is a prefixing language and QVH is suffixing, this relative order seems important. QVH also committs the highest number of MULT errors.", "The next big difference between the results for ZUL and those for ENG and SPA is that DEL(X) and ADD(X) errors, which previously have mostly been found for the prefixing or partially prefixing languages EUS and NAV, are now most present in the outputs of suffixing languages. Namely, DEL(C) occurs most for FRA and ITA, DEL(V) for FRA and QVH, and ADD(C) and ADD(V) for HUN. While some deletion and insertion errors are subsumed in MULT, this does not fully explain this difference. For instance, QVH has both the second most DEL(V) and the most MULT errors.", "The overall number of errors related to the affix seems comparable between models with different source languages. This weakly supports the hypothesis that relatedness reduces affix-related errors, since none of the pretraining languages in our experiments is particularly close to ZUL. However, we do find more CUT errors for HUN and TUR: again, these are suffixing, while CUT for the target language SPA mostly happened for the prefixing languages EUS and NAV." ], [ "A limitation of our work is that we only include languages that are written in Latin script. An interesting question for future work might, thus, regard the effect of disjoint L1 and L2 alphabets.", "Furthermore, none of the languages included in our study exhibits a templatic morphology. We make this choice because data for templatic languages is currently mostly available in non-Latin alphabets. Future work could investigate languages with templatic morphology as source or target languages, if needed by mapping the language's alphabet to Latin characters.", "Finally, while we intend to choose a diverse set of languages for this study, our overall number of languages is still rather small. This affects the generalizability of the results, and future work might want to look at larger samples of languages." 
], [ "Most research on inflectional morphology in NLP within the last years has been related to the SIGMORPHON and CoNLL–SIGMORPHON shared tasks on morphological inflection, which have been organized yearly since 2016 BIBREF6. Traditionally being focused on individual languages, the 2019 edition BIBREF23 contained a task which asked for transfer learning from a high-resource to a low-resource language. However, source–target pairs were predefined, and the question of how the source language influences learning besides the final accuracy score was not considered. Similarly to us, kyle performed a manual error analysis of morphological inflection systems for multiple languages. However, they did not investigate transfer learning, but focused on monolingual models.", "Outside the scope of the shared tasks, kann-etal-2017-one investigated cross-lingual transfer for morphological inflection, but was limited to a quantitative analysis. Furthermore, that work experimented with a standard sequence-to-sequence model BIBREF12 in a multi-task training fashion BIBREF24, while we pretrain and fine-tune pointer–generator networks. jin-kann-2017-exploring also investigated cross-lingual transfer in neural sequence-to-sequence models for morphological inflection. However, their experimental setup mimicked kann-etal-2017-one, and the main research questions were different: While jin-kann-2017-exploring asked how cross-lingual knowledge transfer works during multi-task training of neural sequence-to-sequence models on two languages, we investigate if neural inflection models demonstrate interesting differences in production errors depending on the pretraining language. Besides that, we differ in the artificial neural network architecture and language pairs we investigate." ], [ "Cross-lingual transfer learning has been used for a large variety NLP of tasks, e.g., automatic speech recognition BIBREF25, entity recognition BIBREF26, language modeling BIBREF27, or parsing BIBREF28, BIBREF29, BIBREF30. Machine translation has been no exception BIBREF31, BIBREF32, BIBREF33. Recent research asked how to automatically select a suitable source language for a given target language BIBREF34. This is similar to our work in that our findings could potentially be leveraged to find good source languages." ], [ "Finally, a lot of research has focused on human L1 and L2 acquisition of inflectional morphology BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39, BIBREF40.", "To name some specific examples, marques2011study investigated the effect of a stay abroad on Spanish L2 acquisition, including learning of its verbal morphology in English speakers. jia2003acquisition studied how Mandarin Chinese-speaking children learned the English plural morpheme. nicoladis2012young studied the English past tense acquisition in Chinese–English and French–English bilingual children. They found that, while both groups showed similar production accuracy, they differed slightly in the type of errors they made. Also considering the effect of the native language explicitly, yang2004impact investigated the acquisition of the tense-aspect system in an L2 for speakers of a native language which does not mark tense explicitly.", "Finally, our work has been weakly motivated by bliss2006l2. There, the author asked a question for human subjects which is similar to the one we ask for neural models: How does the native language influence L2 acquisition of inflectional morphology?" 
], [ "Motivated by the fact that, in humans, learning of a second language is influenced by a learner's native language, we investigated a similar question in artificial neural network models for morphological inflection: How does pretraining on different languages influence a model's learning of inflection in a target language?", "We performed experiments on eight different source languages and three different target languages. An extensive error analysis of all final models showed that (i) for closely related source and target languages, acquisition of target language inflection gets easier; (ii) knowledge of a prefixing language makes learning of inflection in a suffixing language more challenging, as well as the other way around; and (iii) languages which exhibit an agglutinative morphology facilitate learning of inflection in a second language.", "Future work might leverage those findings to improve neural network models for morphological inflection in low-resource languages, by choosing suitable source languages for pretraining.", "Another interesting next step would be to investigate how the errors made by our models compare to those by human L2 learners with different native languages. If the exhibited patterns resemble each other, computational models could be used to predict errors a person will make, which, in turn, could be leveraged for further research or the development of educational material." ], [ "I would like to thank Samuel R. Bowman and Kyle Gorman for helpful discussions and suggestions. This work has benefited from the support of Samsung Research under the project Improving Deep Learning using Latent Structure and from the donation of a Titan V GPU by NVIDIA Corporation." ] ] }
{ "question": [ "Are agglutinative languages used in the prediction of both prefixing and suffixing languages?", "What is an example of a prefixing language?", "How is the performance on the task evaluated?", "What are the tree target languages studied in the paper?" ], "question_id": [ "fc29bb14f251f18862c100e0d3cd1396e8f2c3a1", "f3e96c5487d87557a661a65395b0162033dc05b3", "74db8301d42c7e7936eb09b2171cd857744c52eb", "587885bc86543b8f8b134c20e2c62f6251195571" ], "nlp_background": [ "two", "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "morphology", "morphology", "morphology", "morphology" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\\rightarrow $ ue).", "Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.", "Analyzing next the errors concerning affixes, we find that models pretrained on HUN, ITA, DEU, and FRA (in that order) commit the fewest errors. This supports two of our previous hypotheses: First, given that ITA and FRA are both from the same language family as SPA, relatedness seems to be benficial for learning of the second language. Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.", "In Table TABREF39, the errors for Zulu are shown, and Table TABREF19 reveals the relative performance for different source languages: TUR $>$ HUN $>$ DEU $>$ ITA $>$ FRA $>$ NAV $>$ EUS $>$ QVH. Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language." ], "highlighted_evidence": [ "Spanish (SPA), in contrast, is morphologically rich, and disposes of much larger verbal paradigms than English. Like English, it is a suffixing language, and it additionally makes use of internal stem changes (e.g., o $\\rightarrow $ ue).", "We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing.", "Second, the system pretrained on HUN performing well suggests again that a source language with an agglutinative, as opposed to a fusional, morphology seems to be beneficial as well.", "Again, TUR and HUN obtain high accuracy, which is an additional indicator for our hypothesis that a source language with an agglutinative morphology facilitates learning of inflection in another language." 
] } ], "annotation_id": [ "a38dc2ad92ff5c2cda31f3be4f22daba2e001e98" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Zulu" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Since English and Spanish are both Indo-European languages, and, thus, relatively similar, we further add a third, unrelated target language. We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing." ], "highlighted_evidence": [ "We choose Zulu (ZUL), a Bantoid language. In contrast to the first two, it is strongly prefixing." ] } ], "annotation_id": [ "266852dc68f118fe7f769bd3dbfcb6c1db052e63" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Comparison of test accuracies of neural network models on an inflection task and qualitative analysis of the errors", "evidence": [ "Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the \"native language\", in neural network models.", "For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison. As we can see, the results are similar to the test set results for all language combinations. We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories." ], "highlighted_evidence": [ "We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages.", "For our qualitative analysis, we make use of the validation set. Therefore, we show validation set accuracies in Table TABREF19 for comparison.", "We manually annotate the outputs for the first 75 development examples for each source–target language combination. All found errors are categorized as belonging to one of the following categories." ] } ], "annotation_id": [ "06c8cd73539b38eaffa4705ef799087a155fc99d" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English, Spanish and Zulu" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. 
We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology." ], "highlighted_evidence": [ "To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. " ] } ], "annotation_id": [ "d48c287a47f6b52d11af7fb02494192a5b5e04cb" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
{ "caption": [ "Table 1: Paradigms of the English lemmas dance and eat. dance has 4 distinct inflected forms; eat has 5.", "Table 2: WALS features from the Morphology category. 20A: 0=Exclusively concatenative, 1=N/A. 21A: 0=No case, 1=Monoexponential case, 2=Case+number, 3=N/A. 21B: 0=monoexponential TAM, 1=TAM+agreement, 2=N/A. 22A: 0=2-3 categories per word, 1=4-5 categories per word, 2=N/A, 3=6-7 categories per word, 4=8-9 categories per word. 23A: 0=Dependent marking, 1=Double marking, 2=Head marking, 3=No marking, 4=N/A. 24A: 0=Dependent marking, 1=N/A, 2=Double marking. 25A: 0=Dependent-marking, 1=Inconsistent or other, 2=N/A. 25B: 0=Non-zero marking, 1=N/A. 26A: 0=Strongly suffixing, 1=Strong prefixing, 2=Equal prefixing and suffixing. 27A: 0=No productive reduplication, 1=Full reduplication only, 2=Productive full and partial reduplication. 28A: 0=Core cases only, 1=Core and non-core, 2=No case marking, 3=No syncretism, 4=N/A. 29A: 0=Syncretic, 1=Not syncretic, 2=N/A.", "Table 3: Test accuracy.", "Table 4: Validation accuracy.", "Table 5: Error analysis for ENG as the model’s L2.", "Table 7: Error analysis for ZUL as the model’s L2.", "Table 6: Error analysis for SPA as the model’s L2." ], "file": [ "1-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Table4-1.png", "6-Table5-1.png", "7-Table7-1.png", "7-Table6-1.png" ] }
1910.05154
How Does Language Influence Documentation Workflow? Unsupervised Word Discovery Using Translations in Multiple Languages
For language documentation initiatives, transcription is an expensive resource: one minute of audio is estimated to take, on average, one and a half hours of a linguist's work (Austin and Sallabank, 2013). Recently, collecting aligned translations in well-resourced languages has become a popular way of ensuring the posterior interpretability of the recordings (Adda et al. 2016). In this paper we investigate the impact of the chosen language on automatic approaches for computational language documentation. We translate the bilingual Mboshi-French parallel corpus (Godard et al. 2017) into four other languages and perform bilingual-rooted unsupervised word discovery. Our results hint at an impact of the well-resourced language on the quality of the output. However, by combining the information learned by the different bilingual models, we are only able to marginally increase the quality of the segmentation.
{ "section_name": [ "Introduction", "Methodology ::: The Multilingual Mboshi Parallel Corpus:", "Methodology ::: Bilingual Unsupervised Word Segmentation/Discovery Approach:", "Methodology ::: Multilingual Leveraging:", "Experiments", "Conclusion" ], "paragraphs": [ [ "The Cambridge Handbook of Endangered Languages BIBREF3 estimates that at least half of the 7,000 languages currently spoken worldwide will no longer exist by the end of this century. For these endangered languages, data collection campaigns have to accommodate the challenge that many of them are from oral tradition, and producing transcriptions is costly. This transcription bottleneck problem can be handled by translating into a widely spoken language to ensure subsequent interpretability of the collected recordings, and such parallel corpora have been recently created by aligning the collected audio with translations in a well-resourced language BIBREF1, BIBREF2, BIBREF4. Moreover, some linguists suggested that more than one translation should be collected to capture deeper layers of meaning BIBREF5.", "This work is a contribution to the Computational Language Documentation (CLD) research field, that aims to replace part of the manual steps performed by linguists during language documentation initiatives by automatic approaches. Here we investigate the unsupervised word discovery and segmentation task, using the bilingual-rooted approach from BIBREF6. There, words in the well-resourced language are aligned to unsegmented phonemes in the endangered language in order to identify group of phonemes, and to cluster them into word-like units. We experiment with the Mboshi-French parallel corpus, translating the French text into four other well-resourced languages in order to investigate language impact in this CLD approach. Our results hint that this language impact exists, and that models based on different languages will output different word-like units." ], [ "In this work we extend the bilingual Mboshi-French parallel corpus BIBREF2, fruit of the documentation process of Mboshi (Bantu C25), an endangered language spoken in Congo-Brazzaville. The corpus contains 5,130 utterances, for which it provides audio, transcriptions and translations in French. We translate the French into four other well-resourced languages through the use of the $DeepL$ translator. The languages added to the dataset are: English, German, Portuguese and Spanish. Table shows some statistics for the produced Multilingual Mboshi parallel corpus." ], [ "We use the bilingual neural-based Unsupervised Word Segmentation (UWS) approach from BIBREF6 to discover words in Mboshi. In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence). Due to the attention mechanism present in these networks BIBREF7, posterior to training, it is possible to retrieve soft-alignment probability matrices between source and target sequences. These matrices give us sentence-level source-to-target alignment information, and by using it for clustering neighbor phonemes aligned to the same translation word, we are able to create segmentation in the target side. The product of this approach is a set of (discovered-units, translation words) pairs." ], [ "In this work we apply two simple methods for including multilingual information into the bilingual models from BIBREF6. 
The first one, Multilingual Voting, consists of merging the information learned by models trained with different language pairs by performing a voting over the final discovered boundaries. The voting is performed by applying an agreement threshold $T$ over the output boundaries. This threshold balances between accepting all boundaries from all the bilingual models (zero agreement) and accepting only input boundaries discovered by all these models (total agreement). The second method is ANE Selection. For every language pair and aligned sentence in the dataset, a soft-alignment probability matrix is generated. We use Average Normalized Entropy (ANE) BIBREF8 computed over these matrices for selecting the most confident one for segmenting each phoneme sequence. This exploits the idea that models trained on different language pairs will have language-related behavior, thus differing on the resulting alignment and segmentation over the same phoneme sequence." ], [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can impact greatly the system's capacity to language-model, and consequently the final quality of the produced alignments. Even in high-resource settings, it was already attested that some languages are more difficult to model than others BIBREF9.", "For the multilingual selection experiments, we experimented combining the languages from top to bottom as they appear Table (ranked by performance; e.g. 1-3 means the combination of FR(1), EN(2) and PT(3)). We observe that the performance improvement is smaller than the one observed in previous work BIBREF10, which we attribute to the fact that our dataset was artificially augmented. This could result in the available multilingual form of supervision not being as rich as in a manually generated dataset. Finally, the best boundary segmentation result is obtained by performing multilingual voting with all the languages and an agreement of 50%, which indicates that the information learned by different languages will provide additional complementary evidence.", "Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models. Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation oboá+ngá). However, the models still differ for most of their alignments in the table. This hints that while a portion of the lexicon might be captured independently of the language used, other structures might be more dependent of the chosen language. 
On this note, BIBREF11 suggests the notion of word cannot always be meaningfully defined cross-linguistically." ], [ "In this work we train bilingual UWS models using the endangered language Mboshi as target and different well-resourced languages as aligned information. Results show that similar languages rank better in terms of segmentation performance, and that by combining the information learned by different models, segmentation is further improved. This might be due to the different language-dependent structures that are captured by using more than one language. Lastly, we extend the bilingual Mboshi-French parallel corpus, creating a multilingual corpus for the endangered language Mboshi that we make available to the community." ] ] }
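A minimal sketch of the Multilingual Voting combination described in the Methodology section above: each bilingual model proposes a set of boundary positions, and a boundary is kept when at least a fraction T of the models agree on it. The function name and data layout are assumptions, not the authors' code.

```python
# Boundary voting across bilingual models with an agreement threshold T.
def vote_boundaries(boundary_sets, threshold):
    """boundary_sets: list of sets of integer boundary positions, one per bilingual model.
    threshold: agreement level T in [0, 1]; T=0 keeps any proposed boundary,
    T=1 keeps only boundaries proposed by every model."""
    n_models = len(boundary_sets)
    candidates = set().union(*boundary_sets)
    return sorted(b for b in candidates
                  if sum(b in s for s in boundary_sets) / n_models >= threshold)


# Example with three bilingual models and 50% agreement:
fr = {3, 7, 12}
en = {3, 8, 12}
pt = {3, 7, 13}
print(vote_boundaries([fr, en, pt], threshold=0.5))  # [3, 7, 12]
```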
{ "question": [ "Is the model evaluated against any baseline?", "Does the paper report the accuracy of the model?", "How is the performance of the model evaluated?", "What are the different bilingual models employed?", "How does the well-resourced language impact the quality of the output?" ], "question_id": [ "b72264a73eea36c828e7de778a8b4599a5d02b39", "24cc1586e5411a7f8574796d3c576b7c677d6e21", "db291d734524fa51fb314628b64ebe1bac7f7e1e", "85abd60094c92eb16f39f861c6de8c2064807d02", "50f09a044f0c0795cc636c3e25a4f7c3231fb08d" ], "nlp_background": [ "two", "two", "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "word segmentation", "word segmentation", "word segmentation", "word segmentation", "word segmentation" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can impact greatly the system's capacity to language-model, and consequently the final quality of the produced alignments. Even in high-resource settings, it was already attested that some languages are more difficult to model than others BIBREF9." ], "highlighted_evidence": [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can impact greatly the system's capacity to language-model, and consequently the final quality of the produced alignments. Even in high-resource settings, it was already attested that some languages are more difficult to model than others BIBREF9." 
] } ], "annotation_id": [ "d24322721753903d77adb5699245e6115e2fa2c1" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can impact greatly the system's capacity to language-model, and consequently the final quality of the produced alignments. Even in high-resource settings, it was already attested that some languages are more difficult to model than others BIBREF9." ], "highlighted_evidence": [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. " ] } ], "annotation_id": [ "06f6f7431cb73f90fb9447141fa82374b79b1ee1" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8.", "For the multilingual selection experiments, we experimented combining the languages from top to bottom as they appear Table (ranked by performance; e.g. 1-3 means the combination of FR(1), EN(2) and PT(3)). ", "Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models." ], "yes_no": null, "free_form_answer": "", "evidence": [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. Languages closely related to French (Spanish and Portuguese) ranked better, while our worst result used German. English also performs notably well in our experiments. We believe this is due to the statistics features of the resulting text. We observe in Table that the English portion of the dataset contains the smallest vocabulary among all languages. Since we train our systems in very low-resource settings, vocabulary-related features can impact greatly the system's capacity to language-model, and consequently the final quality of the produced alignments. 
Even in high-resource settings, it was already attested that some languages are more difficult to model than others BIBREF9.", "For the multilingual selection experiments, we experimented combining the languages from top to bottom as they appear Table (ranked by performance; e.g. 1-3 means the combination of FR(1), EN(2) and PT(3)). We observe that the performance improvement is smaller than the one observed in previous work BIBREF10, which we attribute to the fact that our dataset was artificially augmented. This could result in the available multilingual form of supervision not being as rich as in a manually generated dataset. Finally, the best boundary segmentation result is obtained by performing multilingual voting with all the languages and an agreement of 50%, which indicates that the information learned by different languages will provide additional complementary evidence.", "Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models. Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation oboá+ngá). However, the models still differ for most of their alignments in the table. This hints that while a portion of the lexicon might be captured independently of the language used, other structures might be more dependent of the chosen language. On this note, BIBREF11 suggests the notion of word cannot always be meaningfully defined cross-linguistically." ], "highlighted_evidence": [ "The experiment settings from this paper and evaluation protocol for the Mboshi corpus (Boundary F-scores using the ZRC speech reference) are the same from BIBREF8. Table presents the results for bilingual UWS and multilingual leveraging. For the former, we reach our best result by using as aligned information the French, the original aligned language for this dataset. ", "For the multilingual selection experiments, we experimented combining the languages from top to bottom as they appear Table (ranked by performance; e.g. 1-3 means the combination of FR(1), EN(2) and PT(3)). We observe that the performance improvement is smaller than the one observed in previous work BIBREF10, which we attribute to the fact that our dataset was artificially augmented.", "Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models. Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation oboá+ngá)." ] } ], "annotation_id": [ "4dcce66c1c575d23b0653f64261102753c095f08" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the bilingual neural-based Unsupervised Word Segmentation (UWS) approach from BIBREF6 to discover words in Mboshi. 
In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence). Due to the attention mechanism present in these networks BIBREF7, posterior to training, it is possible to retrieve soft-alignment probability matrices between source and target sequences. These matrices give us sentence-level source-to-target alignment information, and by using it for clustering neighbor phonemes aligned to the same translation word, we are able to create segmentation in the target side. The product of this approach is a set of (discovered-units, translation words) pairs.", "In this work we apply two simple methods for including multilingual information into the bilingual models from BIBREF6. The first one, Multilingual Voting, consists of merging the information learned by models trained with different language pairs by performing a voting over the final discovered boundaries. The voting is performed by applying an agreement threshold $T$ over the output boundaries. This threshold balances between accepting all boundaries from all the bilingual models (zero agreement) and accepting only input boundaries discovered by all these models (total agreement). The second method is ANE Selection. For every language pair and aligned sentence in the dataset, a soft-alignment probability matrix is generated. We use Average Normalized Entropy (ANE) BIBREF8 computed over these matrices for selecting the most confident one for segmenting each phoneme sequence. This exploits the idea that models trained on different language pairs will have language-related behavior, thus differing on the resulting alignment and segmentation over the same phoneme sequence.", "Lastly, following the methodology from BIBREF8, we extract the most confident alignments (in terms of ANE) discovered by the bilingual models. Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. Mboshi word itua, and the concatenation oboá+ngá). However, the models still differ for most of their alignments in the table. This hints that while a portion of the lexicon might be captured independently of the language used, other structures might be more dependent of the chosen language. On this note, BIBREF11 suggests the notion of word cannot always be meaningfully defined cross-linguistically.", "FLOAT SELECTED: Table 3: Top 10 confident (discovered type, translation) pairs for the five bilingual models. The “+” mark means the discovered type is a concatenation of two existing true types." ], "highlighted_evidence": [ "In this approach, Neural Machine Translation (NMT) models are trained between language pairs, using as source language the translation (word-level) and as target, the language to document (unsegmented phonemic sequence).", "The first one, Multilingual Voting, consists of merging the information learned by models trained with different language pairs by performing a voting over the final discovered boundaries. The voting is performed by applying an agreement threshold $T$ over the output boundaries. ", " Table presents the top 10 most confident (discovered type, translation) pairs. Looking at the pairs the bilingual models are most confident about, we observe there are some types discovered by all the bilingual models (e.g. 
Mboshi word itua, and the concatenation oboá+ngá). However, the models still differ for most of their alignments in the table. ", "FLOAT SELECTED: Table 3: Top 10 confident (discovered type, translation) pairs for the five bilingual models. The “+” mark means the discovered type is a concatenation of two existing true types." ] } ], "annotation_id": [ "802e0e3bd1a48834751cc638e635b059f1dc1f54" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Results show that similar languages rank better in terms of segmentation performance, and that by combining the information learned by different models, segmentation is further improved." ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this work we train bilingual UWS models using the endangered language Mboshi as target and different well-resourced languages as aligned information. Results show that similar languages rank better in terms of segmentation performance, and that by combining the information learned by different models, segmentation is further improved. This might be due to the different language-dependent structures that are captured by using more than one language. Lastly, we extend the bilingual Mboshi-French parallel corpus, creating a multilingual corpus for the endangered language Mboshi that we make available to the community." ], "highlighted_evidence": [ "In this work we train bilingual UWS models using the endangered language Mboshi as target and different well-resourced languages as aligned information. Results show that similar languages rank better in terms of segmentation performance, and that by combining the information learned by different models, segmentation is further improved. This might be due to the different language-dependent structures that are captured by using more than one language. " ] } ], "annotation_id": [ "ebd90c05f054ea84827d8b05d5808f1b8548be75" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Table 2: From left to right, results for: bilingual UWS, multilingual leveraging by voting, ANE selection.", "Table 1: Statistics for the Multilingual Mboshi parallel corpus. The French text is used for generating translation in the four other languages present in the right side of the table.", "Table 3: Top 10 confident (discovered type, translation) pairs for the five bilingual models. The “+” mark means the discovered type is a concatenation of two existing true types." ], "file": [ "3-Table2-1.png", "3-Table1-1.png", "4-Table3-1.png" ] }
1806.00722
Dense Information Flow for Neural Machine Translation
Recently, neural machine translation has achieved remarkable progress by introducing well-designed deep neural networks into its encoder-decoder framework. From the optimization perspective, most of these deep architectures adopt residual connections to improve learning for both the encoder and the decoder, and advanced attention connections are applied as well. Inspired by the success of the DenseNet model in computer vision, in this paper we propose a densely connected NMT architecture (DenseNMT) that trains more efficiently. The proposed DenseNMT not only uses dense connections to create new features for both the encoder and the decoder, but also uses a dense attention structure to improve attention quality. Our experiments on multiple datasets show that the DenseNMT structure is more competitive and efficient.
{ "section_name": [ "Introduction", "DenseNMT", "Dense encoder and decoder", "Dense attention", "Summary layers", "Analysis of information flow", "Datasets", "Model and architect design", "Training setting", "Training curve", "DenseNMT improves accuracy with similar architectures and model sizes", "DenseNMT with smaller embedding size", "DenseNMT compares with state-of-the-art results", "Conclusion" ], "paragraphs": [ [ "Neural machine translation (NMT) is a challenging task that attracts lots of attention in recent years. Starting from the encoder-decoder framework BIBREF0 , NMT starts to show promising results in many language pairs. The evolving structures of NMT models in recent years have made them achieve higher scores and become more favorable. The attention mechanism BIBREF1 added on top of encoder-decoder framework is shown to be very useful to automatically find alignment structure, and single-layer RNN-based structure has evolved into deeper models with more efficient transformation functions BIBREF2 , BIBREF3 , BIBREF4 .", "One major challenge of NMT is that its models are hard to train in general due to the complexity of both the deep models and languages. From the optimization perspective, deeper models are hard to efficiently back-propagate the gradients, and this phenomenon as well as its solution is better explored in the computer vision society. Residual networks (ResNet) BIBREF5 achieve great performance in a wide range of tasks, including image classification and image segmentation. Residual connections allow features from previous layers to be accumulated to the next layer easily, and make the optimization of the model efficiently focus on refining upper layer features.", "NMT is considered as a challenging problem due to its sequence-to-sequence generation framework, and the goal of comprehension and reorganizing from one language to the other. Apart from the encoder block that works as a feature generator, the decoder network combining with the attention mechanism bring new challenges to the optimization of the models. While nowadays best-performing NMT systems use residual connections, we question whether this is the most efficient way to propagate information through deep models. In this paper, inspired by the idea of using dense connections for training computer vision tasks BIBREF6 , we propose a densely connected NMT framework (DenseNMT) that efficiently propagates information from the encoder to the decoder through the attention component. Taking the CNN-based deep architecture as an example, we verify the efficiency of DenseNMT. Our contributions in this work include: (i) by comparing the loss curve, we show that DenseNMT allows the model to pass information more efficiently, and speeds up training; (ii) we show through ablation study that dense connections in all three blocks altogether help improve the performance, while not increasing the number of parameters; (iii) DenseNMT allows the models to achieve similar performance with much smaller embedding size; (iv) DenseNMT on IWSLT14 German-English and Turkish-English translation tasks achieves new benchmark BLEU scores, and the result on WMT14 English-German task is more competitive than the residual connections based baseline model." ], [ "In this section, we introduce our DenseNMT architecture. In general, compared with residual connected NMT models, DenseNMT allows each layer to provide its information to all subsequent layers directly. 
Figure FIGREF9 - FIGREF15 show the design of our model structure by parts.", "We start with the formulation of a regular NMT model. Given a set of sentence pairs INLINEFORM0 , an NMT model learns parameter INLINEFORM1 by maximizing the log-likelihood function: DISPLAYFORM0 ", "For every sentence pair INLINEFORM0 , INLINEFORM1 is calculated based on the decomposition: DISPLAYFORM0 ", "where INLINEFORM0 is the length of sentence INLINEFORM1 . Typically, NMT models use the encoder-attention-decoder framework BIBREF1 , and potentially use multi-layer structure for both encoder and decoder. Given a source sentence INLINEFORM2 with length INLINEFORM3 , the encoder calculates hidden representations by layer. We denote the representation in the INLINEFORM4 -th layer as INLINEFORM5 , with dimension INLINEFORM6 , where INLINEFORM7 is the dimension of features in layer INLINEFORM8 . The hidden representation at each position INLINEFORM9 is either calculated by: DISPLAYFORM0 ", "for recurrent transformation INLINEFORM0 such as LSTM and GRU, or by: DISPLAYFORM0 ", "for parallel transformation INLINEFORM0 . On the other hand, the decoder layers INLINEFORM1 follow similar structure, while getting extra representations from the encoder side. These extra representations are also called attention, and are especially useful for capturing alignment information.", "In our experiments, we use convolution based transformation for INLINEFORM0 due to both its efficiency and high performance, more formally, DISPLAYFORM0 ", " INLINEFORM0 is the gated linear unit proposed in BIBREF11 and the kernel size is INLINEFORM1 . DenseNMT is agnostic to the transformation function, and we expect it to also work well combining with other transformations, such as LSTM, self-attention and depthwise separable convolution." ], [ "Different from residual connections, later layers in the dense encoder are able to use features from all previous layers by concatenating them: DISPLAYFORM0 ", "Here, INLINEFORM0 is defined in Eq. ( EQREF10 ), INLINEFORM1 represents concatenation operation. Although this brings extra connections to the network, with smaller number of features per layer, the architecture encourages feature reuse, and can be more compact and expressive. As shown in Figure FIGREF9 , when designing the model, the hidden size in each layer is much smaller than the hidden size of the corresponding layer in the residual-connected model.", "While each encoder layer perceives information from its previous layers, each decoder layer INLINEFORM0 has two information sources: previous layers INLINEFORM1 , and attention values INLINEFORM2 . Therefore, in order to allow dense information flow, we redefine the generation of INLINEFORM3 -th layer as a nonlinear function over all its previous decoder layers and previous attentions. This can be written as: DISPLAYFORM0 ", "where INLINEFORM0 is the attention value using INLINEFORM1 -th decoder layer and information from encoder side, which will be specified later. Figure FIGREF13 shows the comparison of a dense decoder with a regular residual decoder. The dimensions of both attention values and hidden layers are chosen with smaller values, yet the perceived information for each layer consists of a higher dimension vector with more representation power. The output of the decoder is a linear transformation of the concatenation of all layers by default. To compromise to the increment of dimensions, we use summary layers, which will be introduced in Section 3.3. 
With summary layers, the output of the decoder is only a linear transformation of the concatenation of the upper few layers." ], [ "Prior works show a trend of designing more expressive attention mechanisms (as discussed in Section 2). However, most of them only use the last encoder layer. In order to pass more abundant information from the encoder side to the decoder side, the attention block needs to be more expressive. Following the recent development of designing attention architectures, we propose DenseAtt as the dense attention block, which serves for the dense connection between the encoder and the decoder side. More specifically, two options are proposed accordingly. For each decoding step in the corresponding decoder layer, the two options both calculate attention using multiple encoder layers. The first option is more compressed, while the second option is more expressive and flexible. We name them as DenseAtt-1 and DenseAtt-2 respectively. Figure FIGREF15 shows the architecture of (a) multi-step attention BIBREF2 , (b) DenseAtt-1, and (c) DenseAtt-2 in order. In general, a popular multiplicative attention module can be written as: DISPLAYFORM0 ", "where INLINEFORM0 represent query, key, value respectively. We will use this function INLINEFORM1 in the following descriptions.", "In the decoding phase, we use a layer-wise attention mechanism, such that each decoder layer absorbs different attention information to adjust its output. Instead of treating the last hidden layer as the encoder's output, we treat the concatenation of all hidden layers from encoder side as the output. The decoder layer multiplies with the encoder output to obtain the attention weights, which is then multiplied by a linear combination of the encoder output and the sentence embedding. The attention output of each layer INLINEFORM0 can be formally written as: DISPLAYFORM0 ", "where INLINEFORM0 is the multiplicative attention function, INLINEFORM1 is a concatenation operation that combines all features, and INLINEFORM2 is a linear transformation function that maps each variable to a fixed dimension in order to calculate the attention value. Notice that we explicitly write the INLINEFORM3 term in ( EQREF19 ) to keep consistent with the multi-step attention mechanism, as pictorially shown in Figure FIGREF15 (a).", "Notice that the transformation INLINEFORM0 in DenseAtt-1 forces the encoder layers to be mixed before doing attention. Since we use multiple hidden layers from the encoder side to get an attention value, we can alternatively calculate multiple attention values before concatenating them. In another word, the decoder layer can get different attention values from different encoder layers. This can be formally expressed as: DISPLAYFORM0 ", "where the only difference from Eq. ( EQREF19 ) is that the concatenation operation is substituted by a summation operation, and is put after the attention function INLINEFORM0 . This method further increases the representation power in the attention block, while maintaining the same number of parameters in the model." ], [ "Since the number of features fed into nonlinear operation is accumulated along the path, the parameter size increases accordingly. For example, for the INLINEFORM0 -th encoder layer, the input dimension of features is INLINEFORM1 , where INLINEFORM2 is the feature dimension in previous layers, INLINEFORM3 is the embedding size. 
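Before turning to how summary layers control this growth, a rough sketch of the two DenseAtt variants above may be helpful (our illustration, not the original implementation; it assumes PyTorch, single-head dot-product attention, and hypothetical projection modules, and it omits the source-embedding term of the full formulation). DenseAtt-1 projects the concatenation of all encoder layers and attends once, whereas DenseAtt-2 attends to each encoder layer separately and sums the results:

```python
# Sketch contrasting DenseAtt-1 and DenseAtt-2 (illustrative only).
import torch
import torch.nn.functional as F

def attend(query, keys, values):
    # query: (batch, tgt_len, d); keys/values: (batch, src_len, d)
    weights = F.softmax(query @ keys.transpose(1, 2), dim=-1)
    return weights @ values

def dense_att_1(dec_layer, enc_layers, proj):
    # concatenate all encoder layers, project once, attend once
    enc_cat = proj(torch.cat(enc_layers, dim=-1))
    return attend(dec_layer, enc_cat, enc_cat)

def dense_att_2(dec_layer, enc_layers, projs):
    # one projection and one attention call per encoder layer, then sum
    out = 0
    for enc, proj in zip(enc_layers, projs):
        enc_p = proj(enc)
        out = out + attend(dec_layer, enc_p, enc_p)
    return out

d = 64
enc_layers = [torch.randn(2, 7, d) for _ in range(3)]
dec_layer = torch.randn(2, 5, d)
att1 = dense_att_1(dec_layer, enc_layers, torch.nn.Linear(3 * d, d))
att2 = dense_att_2(dec_layer, enc_layers, [torch.nn.Linear(d, d) for _ in range(3)])
```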
In order to avoid the calculation bottleneck for later layers due to large INLINEFORM4 , we introduce the summary layer for deeper models. It summarizes the features for all previous layers and projects back to the embedding size, so that later layers of both the encoder and the decoder side do not need to look back further. The summary layers can be considered as contextualized word vectors in a given sentence BIBREF12 . We add one summary layer after every INLINEFORM5 layers, where INLINEFORM6 is the hyperparameter we introduce. Accordingly, the input dimension of features is at most INLINEFORM7 for the last layer of the encoder. Moreover, combined with the summary layer setting, our DenseAtt mechanism allows each decoder layer to calculate the attention value focusing on the last few encoder layers, which consists of the last contextual embedding layer and several dense connected layers with low dimension. In practice, we set INLINEFORM8 as 5 or 6." ], [ "Figure FIGREF9 and Figure FIGREF13 show the difference of information flow compared with a residual-based encoder/decoder. For residual-based models, each layer can absorb a single high-dimensional vector from its previous layer as the only information, while for DenseNMT, each layer can utilize several low-dimensional vectors from its previous layers and a high-dimensional vector from the first layer (embedding layer) as its information. In DenseNMT, each layer directly provides information to its later layers. Therefore, the structure allows feature reuse, and encourages upper layers to focus on creating new features. Furthermore, the attention block allows the embedding vectors (as well as other hidden layers) to guide the decoder's generation more directly; therefore, during back-propagation, the gradient information can be passed directly to all encoder layers simultaneously." ], [ "We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German.", "We preprocess the IWSLT14 German-English dataset following byte-pair-encoding (BPE) method BIBREF13 . We learn 25k BPE codes using the joint corpus of source and target languages. We randomly select 7k from IWSLT14 German-English as the development set , and the test set is a concatenation of dev2010, tst2010, tst2011 and tst2012, which is widely used in prior works BIBREF14 , BIBREF15 , BIBREF16 .", "For the Turkish-English translation task, we use the data provided by IWSLT14 BIBREF17 and the SETimes corpus BIBREF17 following BIBREF18 . After removing sentence pairs with length ratio over 9, we obtain 360k sentence pairs. Since there is little commonality between the two languages, we learn 30k size BPE codes separately for Turkish and English. In addition to this, we give another preprocessing for Turkish sentences and use word-level English corpus. For Turkish sentences, following BIBREF19 , BIBREF18 , we use the morphology tool Zemberek with disambiguation by the morphological analysis BIBREF20 and removal of non-surface tokens. Following BIBREF18 , we concatenate tst2011, tst2012, tst2013, tst2014 as our test set. We concatenate dev2010 and tst2010 as the development set.", "We preprocess the WMT14 English-German dataset using a BPE code size of 40k. We use the concatenation of newstest2013 and newstest2012 as the development set." 
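As a small illustration of the length-ratio filtering step mentioned for the Turkish-English corpus (a sketch under our own assumptions: whitespace tokenization and hypothetical file names; the original preprocessing scripts are not reproduced here):

```python
# Drop sentence pairs whose source/target length ratio exceeds 9.
def keep_pair(src_line, tgt_line, max_ratio=9.0):
    src_len, tgt_len = len(src_line.split()), len(tgt_line.split())
    if src_len == 0 or tgt_len == 0:
        return False
    return max(src_len / tgt_len, tgt_len / src_len) <= max_ratio

with open("train.tr") as src, open("train.en") as tgt:
    pairs = [(s, t) for s, t in zip(src, tgt) if keep_pair(s, t)]
```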
], [ "As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256 by default. As a comparison, we design a densely connected model with same number of layers, but the hidden size is set as 128 in order to keep the model size consistent. The models adopting DenseAtt-1, DenseAtt-2 are named as DenseNMT-4L-1 and DenseNMT-4L-2 respectively. In order to check the effect of dense connections on deeper models, we also construct a series of 8-layer models. We set the hidden number to be 192, such that both 4-layer models and 8-layer models have similar number of parameters. For dense structured models, we set the dimension of hidden states to be 96.", "Since NMT model usually allocates a large proportion of its parameters to the source/target sentence embedding and softmax matrix, we explore in our experiments to what extent decreasing the dimensions of the three parts would harm the BLEU score. We change the dimensions of the source embedding, the target embedding as well as the softmax matrix simultaneously to smaller values, and then project each word back to the original embedding dimension through a linear transformation. This significantly reduces the number of total parameters, while not influencing the upper layer structure of the model.", "We also introduce three additional models we use for ablation study, all using 4-layer structure. Based on the residual connected BASE-4L model, (1) DenseENC-4L only makes encoder side dense, (2) DenseDEC-4L only makes decoder side dense, and (3) DenseAtt-4L only makes the attention dense using DenseAtt-2. There is no summary layer in the models, and both DenseENC-4L and DenseDEC-4L use hidden size 128. Again, by reducing the hidden size, we ensure that different 4-layer models have similar model sizes.", "Our design for the WMT14 English-German model follows the best performance model provided in BIBREF2 . The construction of our model is straightforward: our 15-layer model DenseNMT-En-De-15 uses dense connection with DenseAtt-2, INLINEFORM0 . The hidden number in each layer is INLINEFORM1 that of the original model, while the kernel size maintains the same." ], [ "We use Nesterov Accelerated Gradient (NAG) BIBREF21 as our optimizer, and the initial learning rate is set to INLINEFORM0 . For German-English and Turkish-English experiments, the learning rate will shrink by 10 every time the validation loss increases. For the English-German dataset, in consistent with BIBREF2 , the learning rate will shrink by 10 every epoch since the first increment of validation loss. The system stops training until the learning rate is less than INLINEFORM1 . All models are trained end-to-end without any warmstart techniques. We set our batch size for the WMT14 English-German dataset to be 48, and additionally tune the length penalty parameter, in consistent with BIBREF2 . For other datasets, we set batch size to be 32. During inference, we use a beam size of 5." ], [ "We first show that DenseNMT helps information flow more efficiently by presenting the training loss curve. All hyperparameters are fixed in each plot, only the models are different. In Figure FIGREF30 , the loss curves for both training and dev sets (before entering the finetuning period) are provided for De-En, Tr-En and Tr-En-morph. For clarity, we compare DenseNMT-4L-2 with BASE-4L. 
We observe that DenseNMT models are consistently better than residual-connected models, since their loss curves are always below those of the baseline models. The effect is more obvious on the WMT14 English-German dataset. We rerun the best model provided by BIBREF2 and compare it with our model. In Figure FIGREF33 , where train/test loss curves are provided, DenseNMT-En-De-15 reaches the same level of loss and starts finetuning (validation loss starts to increase) at epoch 13, which is 35% faster than the baseline.", "Adding dense connections changes the architecture and slightly influences training speed. For the WMT14 En-De experiments, the computing speed for DenseNMT and the baseline (with a similar number of parameters and the same batch size), tested on a single M40 GPU card, is 1571 and 1710 words/s, respectively. While adding dense connections slows per-iteration training slightly (an 8.1% reduction in speed), it uses many fewer epochs, and achieves a better BLEU score. In terms of training time, DenseNMT uses 29.3% (before finetuning) / 22.9% (total) less time than the baseline." ], [ "Table TABREF32 shows the results for De-En, Tr-En, Tr-En-morph datasets, where the best accuracy for models with the same depth and of similar sizes are marked in boldface. In almost all genres, DenseNMT models are significantly better than the baselines. With embedding size 256, where all models achieve their best scores, DenseNMT outperforms baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, 0.8-1.5 BLEU on Tr-En-morph. We observe significant gain using other embedding sizes as well.", "Furthermore, in Table TABREF36 , we investigate DenseNMT models through an ablation study. In order to make the comparison fair, the six models listed have roughly the same number of parameters. On De-En, Tr-En and Tr-En-morph, we see improvement by making the encoder dense, making the decoder dense, and making the attention dense. The fully dense-connected model DenseNMT-4L-1 further improves the translation accuracy. By allowing more flexibility in dense attention, DenseNMT-4L-2 provides the highest BLEU scores for all three experiments.", "From the experiments, we have seen that enlarging the information flow in the attention block benefits the models. The dense attention block provides multi-layer information transmission from the encoder to the decoder, and to the output as well. Meanwhile, as shown by the ablation study, the dense-connected encoder and decoder both give more powerful representations than their residual-connected counterparts. As a result, the integration of the three parts improves the accuracy significantly." ], [ "From Table TABREF32 , we also observe that DenseNMT performs better with small embedding sizes compared to residual-connected models with regular embedding size. For example, on the Tr-En task, the 8-layer DenseNMT-8L-2 model with embedding size 64 matches the BLEU score of the 8-layer BASE model with embedding size 256, while the number of parameters of the former is only INLINEFORM0 that of the latter. In all genres, the DenseNMT model with embedding size 128 is comparable to or even better than the baseline model with embedding size 256.", "While overlarge embedding sizes hurt accuracy because of overfitting issues, smaller sizes are not preferable because of insufficient representation power. However, our dense models show that with better model design, the embedding information can be well concentrated on fewer dimensions, e.g., 64.
This is extremely helpful when building models on mobile and small devices where the model size is critical. While there are other works that address the efficiency issue by using techniques such as separable convolution BIBREF3 , and shared embedding BIBREF4 , our DenseNMT framework is orthogonal to those approaches. We believe that such techniques would produce even more efficient models when combined with our DenseNMT framework." ], [ "For the IWSLT14 German-English dataset, we compare with the best results reported in the literature. To be consistent with prior works, we also provide results using our model directly on the dataset without BPE preprocessing. As shown in Table TABREF39 , DenseNMT outperforms the phrase-structure-based network NPMT BIBREF16 (with beam size 10) by 1.2 BLEU, using a smaller beam size, and outperforms the actor-critic-based algorithm BIBREF15 by 2.8 BLEU. For reference, our model trained on the BPE-preprocessed dataset achieves 32.26 BLEU, which is 1.93 BLEU higher than our word-based model. For the Turkish-English task, we compare with BIBREF19 , which uses the same morphology preprocessing as our Tr-En-morph. As shown in Table TABREF37 , our baseline is higher than the previous result, and we further achieve a new benchmark result with an average score of 24.36 BLEU. For WMT14 English-German, from Table TABREF41 , we can see that DenseNMT outperforms the ConvS2S model by 0.36 BLEU using 35% fewer training iterations and 20% fewer parameters. We also compare with another convolution-based NMT model: SliceNet BIBREF3 , which explores depthwise separable convolution architectures. SliceNet-Full matches our result, and SliceNet-Super outperforms ours by 0.58 BLEU. However, both models have 2.2x more parameters than our model. We expect that the DenseNMT structure could help improve their performance as well." ], [ "In this work, we have proposed DenseNMT as a dense-connection framework for translation tasks, which uses the information from embeddings more efficiently, and passes abundant information from the encoder side to the decoder side. Our experiments have shown that DenseNMT is able to speed up the information flow and improve translation accuracy. For future work, we will combine dense connections with other deep architectures, such as RNNs BIBREF7 and self-attention networks BIBREF4 ." ] ] }
{ "question": [ "what are the baselines?", "did they outperform previous methods?", "what language pairs are explored?", "what datasets were used?" ], "question_id": [ "26b5c090f72f6d51e5d9af2e470d06b2d7fc4a98", "8c0621016e96d86a7063cb0c9ec20c76a2dba678", "f1214a05cc0e6d870c789aed24a8d4c768e1db2f", "41d3ab045ef8e52e4bbe5418096551a22c5e9c43" ], "nlp_background": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "search_query": [ "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ " 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256 by default. As a comparison, we design a densely connected model with same number of layers, but the hidden size is set as 128 in order to keep the model size consistent. The models adopting DenseAtt-1, DenseAtt-2 are named as DenseNMT-4L-1 and DenseNMT-4L-2 respectively. In order to check the effect of dense connections on deeper models, we also construct a series of 8-layer models. We set the hidden number to be 192, such that both 4-layer models and 8-layer models have similar number of parameters. For dense structured models, we set the dimension of hidden states to be 96." ], "highlighted_evidence": [ "As the baseline model (BASE-4L) for IWSLT14 German-English and Turkish-English, we use a 4-layer encoder, 4-layer decoder, residual-connected model, with embedding and hidden size set as 256 by default." ] } ], "annotation_id": [ "99949e192d00f333149953b64edf7e6a9477fb4a" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Table TABREF32 shows the results for De-En, Tr-En, Tr-En-morph datasets, where the best accuracy for models with the same depth and of similar sizes are marked in boldface. In almost all genres, DenseNMT models are significantly better than the baselines. With embedding size 256, where all models achieve their best scores, DenseNMT outperforms baselines by 0.7-1.0 BLEU on De-En, 0.5-1.3 BLEU on Tr-En, 0.8-1.5 BLEU on Tr-En-morph. We observe significant gain using other embedding sizes as well." ], "highlighted_evidence": [ " In almost all genres, DenseNMT models are significantly better than the baselines." ] } ], "annotation_id": [ "8d4cbe2a29b96fd4828148a9dcbc3eda632727fc" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "German-English", "Turkish-English", "English-German" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German." ], "highlighted_evidence": [ "We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German." 
] } ], "annotation_id": [ "f082601cbeac77ac91a9ffc5f67f60793490f945" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "IWSLT14 German-English, IWSLT14 Turkish-English, WMT14 English-German", "evidence": [ "We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German." ], "highlighted_evidence": [ "We use three datasets for our experiments: IWSLT14 German-English, Turkish-English, and WMT14 English-German." ] } ], "annotation_id": [ "0713fba151dd43c9169a7711fbe85a986e201788" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ] }
{ "caption": [ "Figure 1: Comparison of dense-connected encoder and residual-connected encoder. Left: regular residual-connected encoder. Right: dense-connected encoder. Information is directly passed from blue blocks to the green block.", "Figure 2: Comparison of dense-connected decoder and residual-connected decoder. Left: regular residual-connected decoder. Right: dense-connected decoder. Ellipsoid stands for attention block. Information is directly passed from blue blocks to the green block.", "Figure 3: Illustration of DenseAtt mechanisms. For clarity, We only plot the attention block for a single decoder layer. (a): multi-step attention (Gehring et al., 2017), (b): DenseAtt-1, (c): DenseAtt-2. L(·) is the linear projection function. The ellipsoid stands for the core attention operation as shown in Eq. (8).", "Figure 4: Training curve (T) and validation curve (V) comparison. Left: IWSLT14 German-English (De-En). Middle: Turkish-English, BPE encoding (Tr-En). Right: TurkishEnglish, morphology encoding (Tr-En-morph).", "Figure 5: Training curve and test curve comparison on WMT14 English-German translation task.", "Table 1: BLEU score on IWSLT German-English and Turkish-English translation tasks. We compare models using different embedding sizes, and keep the model size consistent within each column.", "Table 2: Ablation study for encoder block, decoder block, and attention block in DenseNMT.", "Table 3: Accuracy on Turkish-English translation task in terms of BLEU score.", "Table 4: Accuracy on IWSLT14 German-English translation task in terms of BLEU score.", "Table 5: Accuracy on WMT14 English-German translation task in terms of BLEU score." ], "file": [ "3-Figure1-1.png", "3-Figure2-1.png", "5-Figure3-1.png", "6-Figure4-1.png", "6-Figure5-1.png", "7-Table1-1.png", "7-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png", "8-Table5-1.png" ] }
2003.03612
Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text
There is inherent information captured in the order in which we write words in a list. The orderings of binomials --- lists of two words separated by `and' or `or' --- have been studied for more than a century. These binomials are common across many areas of speech, in both formal and informal text. In the last century, numerous explanations have been given to describe what order people use for these binomials, from differences in semantics to differences in phonology. These rules primarily describe `frozen' binomials that exist in exactly one ordering, and they have lacked large-scale trials to determine their efficacy. Online text provides a unique opportunity to study these lists in the context of informal text at a very large scale. In this work, we expand the view of binomials to include a large-scale analysis of both frozen and non-frozen binomials in a quantitative way. Using this data, we then demonstrate that most previously proposed rules are ineffective at predicting binomial ordering. By tracking the order of these binomials across time and communities, we are able to establish additional, unexplored dimensions central to these predictions. Expanding beyond the question of individual binomials, we also explore the global structure of binomials in various communities, establishing a new model for these lists and analyzing this structure for non-frozen and frozen binomials. Additionally, novel analysis of trinomials --- lists of length three --- suggests that none of the binomial analysis applies in these cases. Finally, we demonstrate how large data sets gleaned from the web can be used in conjunction with older theories to expand and improve on old questions.
{ "section_name": [ "Introduction", "Introduction ::: Related Work", "Data", "Dimensions of Binomials", "Dimensions of Binomials ::: Definitions", "Dimensions of Binomials ::: Dimensions", "Models And Predictions", "Models And Predictions ::: Stability of Asymmetry", "Models And Predictions ::: Prediction Results", "Proper Nouns and the Proximity Principle", "Proper Nouns and the Proximity Principle ::: NBA Names", "Proper Nouns and the Proximity Principle ::: Subreddit and team names", "Proper Nouns and the Proximity Principle ::: Political Names", "Formal Text", "Formal Text ::: Wine", "Formal Text ::: News", "Global Structure", "Multinomials", "Discussion", "Acknowledgements" ], "paragraphs": [ [ "Lists are extremely common in text and speech, and the ordering of items in a list can often reveal information. For instance, orderings can denote relative importance, such as on a to-do list, or signal status, as is the case for author lists of scholarly publications. In other cases, orderings might come from cultural or historical conventions. For example, `red, white, and blue' is a specific ordering of colors that is recognizable to those familiar with American culture.", "The orderings of lists in text and speech is a subject that has been repeatedly touched upon for more than a century. By far the most frequently studied aspect of list ordering is the binomial, a list of two words usually separated by a conjunction such as `and' or `or', which is the focus of our paper. The academic treatment of binomial orderings dates back more than a century to Jespersen BIBREF0, who proposed in 1905 that the ordering of many common English binomials could be predicted by the rhythm of the words. In the case of a binomial consisting of a monosyllable and a disyllable, the prediction was that the monosyllable would appear first followed by the conjunction `and'. The idea was that this would give a much more standard and familiar syllable stress to the overall phrase, e.g., the binomial `bread and butter' would have the preferable rhythm compared to `butter and bread.'", "This type of analysis is meaningful when the two words in the binomial nearly always appear in the same ordering. Binomials like this that appear in strictly one order (perhaps within the confines of some text corpus), are commonly termed frozen binomials BIBREF1, BIBREF2. Examples of frozen binomials include `salt and pepper' and `pros and cons', and explanations for their ordering in English and other languages have become increasingly complex. Early work focused almost exclusively on common frozen binomials, often drawn from everyday speech. More recent work has expanded this view to include nearly frozen binomials, binomials from large data sets such as books, and binomials of particular types such as food, names, and descriptors BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. Additionally, explanations have increasingly focused on meaning rather than just sound, implying value systems inherent to the speaker or the culture of the language's speakers (one such example is that men are usually listed before women in English BIBREF9). The fact that purely phonetic explanations have been insufficient suggests that list orderings rely at least partially on semantics, and it has previously been suggested that these semantics could be revealing about the culture in which the speech takes place BIBREF3. 
Thus, it is possible that understanding these orderings could reveal biases or values held by the speaker.", "Overall, this prior research has largely been confined to pristine examples, often relying on small samples of lists to form conclusions. Many early studies simply drew a small sample of what the author(s) considered some of the more representative or prominent binomials in whatever language they were studying BIBREF10, BIBREF1, BIBREF11, BIBREF0, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF3. Other researchers have used books or news articles BIBREF2, BIBREF4, or small samples from the Web (web search results and Google books) BIBREF5. Many of these have lacked a large-scale text corpus and have relied on a focused set of statistics about word orderings.", "Thus, despite the long history of this line of inquiry, there is an opportunity to extend it significantly by examining a broad range of questions about binomials coming from a large corpus of online text data produced organically by many people. Such an analysis could produce at least two types of benefits. First, such a study could help us learn about cultural phenomena embedded in word orderings and how they vary across communities and over time. Second, such an analysis could become a case study for the extension of theories developed at small scales in this domain to a much larger context.", "The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means.", "We begin our analysis by introducing several new key measures for the study of binomials, including a quantity we call asymmetry that measures how frequently a given binomial appears in some ordering. By looking at the distribution of asymmetries across a wide range of binomials, we find that most binomials are not frozen, barring a few strong exceptions. At the same time, there may still be an ordering preference. For example, `10 and 20' is not a frozen binomial; instead, the binomial ordering `10 and 20' appears 60% of the time and `20 and 10' appears 40% of time.", "We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. 
For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time.", "While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable.", "Given this stability, one might expect that the dominant ordinality of a given binomial is still predictable, even if the binomial is not frozen. For example, one might expect that the global frequency of a single word or the number of syllables in a word would predict ordering in many cases. However, we find that these simple predictors are quite poor at determining binomial ordering.", "On the other hand, we find that a notion of `proximity' is robust at predicting ordering in some cases. Here, the idea is that the person producing the text will list the word that is conceptually “closer” to them first — a phenomenon related to a “Me First” principle of binomial orderings suggested by Cooper and Ross BIBREF3. One way in which we study this notion of proximity is through sports team subreddits. For example, we find that when two NBA team names form a binomial on a specific team's subreddit, the team that is the subject of the subreddit tends to appear first.", "The other source of improved predictions comes from using word embeddings BIBREF16: we find that a model based on the positions of words in a standard pre-trained word embedding can be a remarkably reliable predictor of binomial orderings. While not applicable to all words, such as names, this type of model is strongly predictive in most cases.", "Since binomial orderings are in general difficult to predict individually, we explore a new way of representing the global binomial ordering structure, we form a directed graph where an edge from $i$ to $j$ means that $i$ tends to come before $j$ in binomials. These graphs show tendencies across the English language and also reveal peculiarities in the language of particular communities. For instance, in a graph formed from the binomials in a sports community, the names of sports teams and cities are closely clustered, showing that they are often used together in binomials. Similarly, we identify clusters of names, numbers, and years. The presence of cycles in these graphs are also informative. For example, cycles are rare in graphs formed from proper names in politics, suggesting a possible hierarchy of names, and at the same time very common for other binomials. This suggests that no such hierarchy exists for most of the English language, further complicating attempts to predict binomial order.", "Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. 
For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale." ], [ "Interest in list orderings spans the last century BIBREF10, BIBREF1, with a focus almost exclusively on binomials. This research has primarily investigated frozen binomials, also called irreversible binomials, fixed coordinates, and fixed conjuncts BIBREF11, although some work has also looked at non-coordinate freezes where the individual words are nonsensical by themselves (e.g., `dribs and drabs') BIBREF11. One study has directly addressed mostly frozen binomials BIBREF5, and we expand the scope of this paper by exploring the general question of how frequently binomials appear in a particular order. Early research investigated languages other than English BIBREF1, BIBREF10, but most recent research has worked almost exclusively with English. Overall, this prior research can be separated into three basic categories — phonological rules, semantic rules, and metadata rules.", "Phonology. The earliest research on binomial orderings proposed mostly phonological explanations, particularly rhythm BIBREF0, BIBREF12. Another highly supported proposal is Panini's Law, which claims that words with fewer syllables come first BIBREF17; we find only very mild preference for this type of ordering. Cooper and Ross's work expands these to a large list of rules, many overlapping, and suggests that they can compound BIBREF3; a number of subsequent papers have expanded on their work BIBREF11, BIBREF15, BIBREF9, BIBREF17.", "Semantics. There have also been a number of semantic explanations, mostly in the form of categorical tendencies (such as `desirable before undesirable') that may have cultural differences BIBREF10, BIBREF1. The most influential of these may be the `Me First' principle codified by Cooper and Ross. This suggests that the first word of a binomial tends to follow a hierarchy that favors `here', `now', present generation, adult, male, and positive. Additional hierarchies also include a hierarchy of food, plants vs. animals, etc. BIBREF3.", "Frequency. More recently, it has been proposed that the more cognitively accessible word might come first, which often means the word the author sees or uses most frequently BIBREF18. There has also been debate on whether frequency may encompass most phonological and semantic rules that have been previously proposed BIBREF13, BIBREF4. We find that frequency is in general a poor predictor of word ordering.", "Combinations. Given the number of theories, there have also been attempts to give a hierarchy of rules and study their interactions BIBREF4, BIBREF5. This research has complemented the proposals of Cooper and Ross BIBREF3. These types of hierarchies are also presented as explanations for the likelihood of a binomial becoming frozen BIBREF5.", "Names. Work on the orderings of names has been dominated by a single phenomenon: men's names usually come before women's names. 
Explanations range from a power differential, to men being more `agentic' within `Me First', to men's names being more common or even exhibiting more of the phonological features of words that usually come first BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF19, BIBREF6. However, it has also been demonstrated that this preference may be affected by the author's own gender and relationship with the people named BIBREF6, BIBREF19, as well as context more generally BIBREF20.", "Orderings on the Web. List orderings have also been explored in other Web data, specifically on the ordering of tags applied to images BIBREF21. There is evidence that these tags are ordered intentionally by users, and that a bias to order tag A before tag B may be influenced by historical precedent in that environment but also by the relative importance of A and B BIBREF21. Further work also demonstrates that exploiting the order of tags on images can improve models that rank those images BIBREF22." ], [ "We take our data mostly from Reddit, a large social media website divided into subcommunities called `subreddits' or `subs'. Each subreddit has a theme (usually clearly expressed in its name), and we have focused our study on subreddits primarily in sports and politics, in part because of the richness of proper names in these domains: r/nba, r/nfl, r/politics, r/Conservative, r/Libertarian, r/The_Donald, r/food, along with a variety of NBA team subreddits (e.g., r/rockets for the Houston Rockets). Apart from the team-specific and food subreddits, these are among the largest and most heavily used subreddits BIBREF23. We gather text data from comments made by users in discussion threads. In all cases, we have data from when the subreddit started until mid-2018. (Data was contributed by Cristian Danescu-Niculescu-Mizil.) Reddit in general, and the subreddits we examined in particular, are rapidly growing, both in terms of number of users and number of comments.", "Some of the subreddits we looked at (particularly sports subreddits) exhibited very distinctive `seasons', where commenting spikes (Fig. FIGREF2). These align with, e.g., the season of the given sport. When studying data across time, our convention is to bin the data by year, but we adjust the starting point of a year based on these seasons. Specifically, a year starts in May for r/nfl, August for r/nba, and February for all politics subreddits.", "We use two methods to identify lists from user comments: `All Words' and `Names Only', with the latter focusing on proper names. In both cases, we collect a number of lists and discard lists for any pair of words that appear fewer than 30 times within the time frame that we examined (see Table TABREF3 for summary statistics).", "The All Words method simply searches for two words $A$ and $B$ separated by `and' or `or', where a word is merely a series of characters separated by a space or punctuation. This process only captures lists of length two, or binomials. We then filter out lists containing words from a collection of stop-words that, by their grammatical role or formatting structure, are almost exclusively involved in false positive lists. No metadata is captured for these lists beyond the month and year of posting.", "The Names Only method uses a curated list of full names relevant to the subreddit, focusing on sports and politics. For sports, we collected names of all NBA and NFL player active during 1980–2019 from basketball-reference.com and pro-football-reference.com. 
For politics, we collected the names of congresspeople from the @unitedstates project BIBREF24. To form lists, we search for any combination of any part of these names such that at least two partial names are separated by `and', `or', `v.s.', `vs', or `/' and the rest are separated by `,'. While we included a variety of separators, about 83% of lists include only `and', about 17% include `or' and the rest of the separators are negligible. Most lists that we retrieve in this way are of length 2, but we also found lists up to length 40 (Fig. FIGREF5). Finally, we also captured full metadata for these lists, including a timestamp, the user, any flairs attributed to the user (short custom text that appears next to the username), and other information.", "We additionally used wine reviews and a variety of news paper articles for additional analysis. The wine data gives reviews of wine from WineEnthusiast and is hosted on Kaggle BIBREF25. While not specifically dated, the reviews were scraped between June and November of 2017. There are 20 different reviewers included, but the amount of reviews each has ranges from tens to thousands. The news data consists of news articles pulled from a variety of sources, including (in random order) the New York Times, Breitbart, CNN, the Atlantic, Buzzfeed News, National Review, New York Post, NPR, Reuters, and the Washington Post. The articles are primarily from 2016 and early 2017 with a few from 2015. The articles are scraped from home-page headline and RSS feeds BIBREF26. Metadata was limited for both of these data sets." ], [ "In this paper we introduce a new framework to interpret binomials, based on three properties: asymmetry (how frozen a binomial is), movement (how binomial orderings change over time), and agreement (how consistent binomial orderings are between communities), which we will visualize as a cube with three dimensions. Again, prior work has focused essentially entirely on asymmetry, and we argue that this can only really be understood in the context of the other two dimensions.", "For this paper we will use the convention {A,B} to refer to an unordered pair of words, and [A,B] to refer to an ordered pair where A comes before B. We say that [A,B] and [B,A] are the two possible orientations of {A,B}." ], [ "Previous work has one main measure of binomials — their `frozen-ness'. A binomial is `frozen' if it always appears with a particular order. For example, if the pair {`arrow', `bow'} always occurs as [`bow', `arrow'] and never as [`arrow', `bow'], then it is frozen. This leaves open the question of how describe the large number of binomials that are not frozen. To address this point, we instead consider the ordinality of a list, or how often the list is `in order' according to some arbitrary underlying reference order. Unless otherwise specified, the underlying order is assumed to be alphabetical. If the list [`cat', `dog'] appears 40 times and the list [`dog', `cat'] 10 times, then the list {`cat', `dog'} would have an ordinality of 0.8.", "Let $n_{x,y}$ be the number of times the ordered list $[x,y]$ appears, and let $f_{x,y} = n_{x,y} / (n_{x,y} + n_{y,x})$ be the fraction of times that the unordered version of the list appears in that order. We formalize ordinality as follows. 
[Ordinality] Given an ordering $<$ on words (by default, we assume alphabetical ordering), the ordinality $o_{x,y}$ of the pair $\\lbrace x,y\\rbrace $ is equal to $f_{x,y}$ if $x < y$ and $f_{y,x}$ otherwise.", "Similarly, we introduce the concept of asymmetry in the context of binomials, which is how often the word appears in its dominant order. In our framework, a `frozen' list is one with ordinality 0 or 1 and would be considered a high asymmetry list, with asymmetry of 1. A list that appears as [`A', `B'] half of the time and [`B', `A'] half of the time (or with ordinality 0.5) would be considered a low asymmetry list, with asymmetry of 0.", "[Asymmetry] The asymmetry of an unordered list $\\lbrace x,y\\rbrace $ is $A_{x,y} = 2 \\cdot \\vert o_{x,y} - 0.5 \\vert $.", "The Reddit data described above gives us access to new dimensions of binomials not previously addressed. We define movement as how the ordinality of a list changes over time [Movement] Let $o_{x,y,t}$ be the ordinality of an unordered list $\\lbrace x,y\\rbrace $ for data in year $t \\in T$. The movement of $\\lbrace x,y\\rbrace $ is $M_{x,y} = \\max _{t \\in T} o_{x,y,t} - \\min _{t \\in T} o_{x,y,t}$. And agreement describes how the ordinality of a list differs between different communities. [Agreement] Let $o_{x,y,c}$ be the ordinality of an unordered list ${x,y}$ for data in community (subreddit) $c \\in C$. The agreement of $\\lbrace x,y\\rbrace $ is $A_{x,y} = 1 - (\\max _{c \\in C} o_{x,y,c} - \\min _{c \\in C} o_{x,y,c})$." ], [ "Let the point $(A,M,G)_{x,y}$ be a vector of the asymmetry, movement, and agreement for some unordered list $\\lbrace x,y\\rbrace $. These vectors then define a 3-dimensional space in which each list occupies a point. Since our measures for asymmetry, agreement, and movement are all defined from 0 to 1, their domains form a unit cube (Fig. FIGREF8). The corners of this cube correspond to points with coordinates are entirely made up of 0s or 1s. By examining points near the corners of this cube, we can get a better understanding of the range of binomials. Some corners are natural — it is easy to imagine a high asymmetry, low movement, high agreement binomial — such as {`arrow', `bow'} from earlier. On the other hand, we have found no good examples of a high asymmetry, low movement, low agreement binomial. There are a few unusual examples, such as {10, 20}, which has 0.4 asymmetry, 0.2 movement, and 0.1 agreement and is clearly visible as an isolated point in Fig. FIGREF8.", "Asymmetry. While a majority of binomials have low asymmetry, almost all previous work has focused exclusively on high-asymmetry binomials. In fact, asymmetry is roughly normally distributed across binomials with an additional increase of highly asymmetric binomials (Fig. FIGREF9). This implies that previous work has overlooked the vast majority of binomials, and an investigation into whether rules proposed for highly asymmetric binomials also functions for other binomials is a core piece of our analysis.", "Movement. The vast majority of binomials have low movement. However, the exceptions to this can be very informative. Within r/nba a few of these pairs show clear change in linguistics and/or culture. The binomial [`rpm', `vorp'] (a pair of basketball statistics) started at 0.74 ordinality and within three years dropped to 0.32 ordinality, showing a potential change in users' representation of how these statistics relate to each other. 
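Before turning to further examples, the counting behind these measures can be made concrete with a small sketch (our simplification, not the actual pipeline: the regex only captures single lowercase words joined by `and' or `or', and no minimum-frequency threshold is applied):

```python
# Count orientations of "A and/or B" pairs, then compute ordinality and asymmetry.
import re
from collections import Counter

pair_re = re.compile(r"\b([a-z]+) (?:and|or) ([a-z]+)\b")

def count_orientations(comments):
    counts = Counter()                       # counts[(a, b)] = n_{a,b}
    for text in comments:
        for a, b in pair_re.findall(text.lower()):
            if a != b:
                counts[(a, b)] += 1
    return counts

def ordinality(counts, x, y):
    """Fraction of occurrences of {x, y} in alphabetical order."""
    a, b = sorted((x, y))
    n_ab, n_ba = counts[(a, b)], counts[(b, a)]
    return n_ab / (n_ab + n_ba) if n_ab + n_ba else None

def asymmetry(counts, x, y):
    o = ordinality(counts, x, y)
    return None if o is None else 2 * abs(o - 0.5)

comments = ["bread and butter", "butter and bread", "bread and butter on toast"]
c = count_orientations(comments)
print(ordinality(c, "butter", "bread"), asymmetry(c, "bread", "butter"))
# roughly 0.667 (ordinality w.r.t. alphabetical order) and 0.333 (asymmetry)
```

Movement and agreement are then obtained by computing the same ordinality separately per year or per community and taking the max-minus-min range, as in the definitions above.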
In r/politics, [`daughter', `son'] moved from 0.07 ordinality to 0.36 ordinality over ten years. This may represent a cultural shift in how users refer to children, or a shift in topics discussed relating to children. And in r/politics, ['dems', 'obama'] went from 0.75 ordinality to 0.43 ordinality from 2009–2018, potentially reflecting changes in Obama's role as a defining feature of the Democratic Party. Meanwhile the ratio of unigram frequency of `dems' to `obama' actually increased from 10% to 20% from 2010 to 2017. Similarly, [`fdr', `lincoln'] moved from 0.49 ordinality to 0.17 ordinality from 2015–2018. This is particularly interesting, since in 2016 `fdr' had a unigram frequency 20% higher than `lincoln', but in 2017 they are almost the same. This suggests that movement could be unrelated to unigram frequency changes. Note also that the covariance for movement across subreddits is quite low TABREF10, and movement in one subreddit is not necessarily reflected by movement in another.", "Agreement. Most binomials have high agreement (Table TABREF11) but again the counterexamples are informative. For instance, [`score', `kick'] has ordinality of 0.921 in r/nba and 0.204 in r/nfl. This likely points to the fact that American football includes field goals. A less obvious example is the list [`ceiling', `floor']. In r/nba and r/nfl, it has ordinality 0.44, and in r/politics, it has ordinality 0.27.", "There are also differences among proper nouns. One example is [`france', `israel'], which has ordinality 0.6 in r/politics, 0.16 in r/Libertarian, and 0.51 in r/The_Donald (and the list does not appear in r/Conservative). And the list [`romney', `trump'] has ordinality 0.48 in r/poltics, 0.55 in r/The_Donald, and 0.73 in r/Conservative." ], [ "In this section, we establish a null model under which different communities or time slices have the same probability of ordering a binomial in a particular way. With this, we would expect to see variation in binomial asymmetry. We find that our data shows smaller variation than this null model predicts, suggesting that binomial orderings are extremely stable across communities and time. From this, we might also expect that orderings are predictable; but we find that standard predictors in fact have limited success." ], [ "Recall that the asymmetry of binomials with respect to alphabetic order (excluding frozen binomials) is roughly normal centered around $0.5$ (Fig. FIGREF9). One way of seeing this type of distribution would be if binomials are ordered randomly, with $p=0.5$ for each order. In this case, if each instance $l$ of a binomial $\\lbrace x,y\\rbrace $ takes value 0 (non-alphabetical ordering) or 1 (alphabetical ordering), then $l \\sim \\text{Bernoulli}(0.5)$. If $\\lbrace x,y\\rbrace $ appears $n$ times, then the number of instances of value 1 is distributed by $W \\sim \\text{Bin}(n, 0.5)$, and $W / n$ is approximately normally distributed with mean 0.5.", "One way to test this behavior is to first estimate $p$ for each list within each community. If the differences in these estimates are not normal, then the above model is incorrect. We first omit frozen binomials before any analysis. Let $L$ be a set of unordered lists and $C$ be a set of communities. We estimate $p$ for list $l \\in L$ in community $c \\in C$ by $\\hat{p}_{l,c} = o_{l,c}$, the ordinality of $l$ in $C$. Next, for all $l \\in L$ let $p^*_{l} = \\max _{c \\in C}(\\hat{p}_{l, c}) - \\min _{ c \\in C}(\\hat{p}_{l, c})$. 
The distribution of $p^*_{l}$ over $l \\in L$ has median 0, mean 0.0145, and standard deviation 0.0344. We can perform a similar analysis over time. Define $Y$ as our set of years, and $\\hat{p}_{l, y} = o_{l,y}$ for $y \\in Y$ our estimates. The distribution of $p^{\\prime }_{l} = \\max _{y \\in Y}(\\hat{p}_{l, y}) - \\min _{y \\in Y}(\\hat{p}_{l, y})$ over $l \\in L$ has median 0.0216, mean 0.0685, and standard deviation 0.0856. The fact that $p$ varies very little across both time and communities suggests that there is some $p_l$ for each $l \\in L$ that is consistent across time and communities, which is not the case in the null model, where these values would be normally distributed.", "We also used a bootstrapping technique to understand the mean variance in ordinality for lists over communities and years. Specifically, let $o_{l, c, y}$ be the ordinality of list $l$ in community $c$ and year $y$, $O_l$ be the set of $o_{l,c,y}$ for a given list $l$, and $s_l$ be the standard deviation of $O_l$. Finally, let $\\bar{s}$ be the average of the $s_l$. We re-sample data by randomizing the order of each binomial instance, sampling its orderings by a binomial random variable with success probability equal to its ordinality across all seasons and communities ($p_l$). We repeated this process to get samples estimates $\\lbrace \\bar{s}_1, \\ldots , \\bar{s}_{k}\\rbrace $, where $k$ is the size of the set of seasons and communities. These averages range from 0.0277 to 0.0278 and are approximately normally distributed (each is a mean over an approximately normal scaled Binomial random variable). However, $\\bar{s} = 0.0253$ for our non-randomized data. This is significantly smaller than the randomized data and implies that the true variation in $p_l$ across time and communities is even smaller than a binomial distribution would predict. One possible explanation for this is that each instance of $l$ is not actually independent, but is in fact anti-correlated, violating one of the conditions of the binomial distribution. An explanation for that could be that users attempt to draw attention by intentionally going against the typical ordering BIBREF1, but it is an open question what the true model is and why the variation is so low. Regardless, it is clear that the orientation of binomials varies very little across years and communities (Fig. FIGREF13)." ], [ "Given the stability of binomials within our data, we now try to predict their ordering. We consider deterministic or rule-based methods that predict the order for a given binomial. We use two classes of evaluation measures for success on this task: (i) by token — judging each instance of a binomial separately; and (ii) by type — judging all instances of a particular binomial together. We further characterize these into weighted and unweighted.", "To formalize these notions, first consider any unordered list $\\lbrace x,y\\rbrace $ that appears $n_{x,y}$ times in the orientation $[x,y]$ and $n_{y,x}$ times in the orientation $[y,x]$. Since we can only guess one order, we will have either $n_{x,y}$ or $n_{y,x}$ successful guesses for $\\lbrace x,y\\rbrace $ when guessing by token. The unweighted token score (UO) and weighted token score (WO) are the macro and micro averages of this accuracy.", "If predicting by type, let $S$ be the lists such that the by-token prediction is successful at least half of the time. Then the unweighted type score (UT) and weighted type score (WT) are the macro and micro averages of $S$.", "Basic Features. 
We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials.", "Perhaps the oldest suggestion to explain binomial orderings is that if there are two words A and B, and A is monosyllabic and B is disyllabic, then A comes before B BIBREF0. Within r/politics, we gathered an estimate of the number of syllables for each word as given by a variation on the CMU Pronouncing Dictionary BIBREF27 (Tables TABREF16 and TABREF17). In a weak sense, Jespersen was correct that monosyllabic words come before disyllabic words more often than not; and more generally, shorter words come before longer words more often than not. However, as predictors, these principles are close to random guessing.", "Paired Predictions. Another measure of predictive power is predicting which of two binomials has higher asymmetry. In this case, we take two binomials with very different asymmetry and try to predict which has higher asymmetry by our measures (we use the top-1000 and bottom-1000 binomials in terms of asymmetry for these tasks). For instance, we may predict that [`red', `turquoise'] is more asymmetric than [`red', `blue'] because the difference in lengths is more extreme. Overall, the basic predictors from the literature are not very successful (Table TABREF18).", "Word Embeddings. If we turn to more modern approaches to text analysis, one of the most common is word embeddings BIBREF16. Word embeddings assign a vector $x_i$ to each word $i$ in the corpus, such that the relative positions of these vectors in space encode linguistically relevant relationships among the words. Using the Google News word embeddings, via a simple logistic model, we produce a vector $v^*$ and predict the ordering of a binomial on words $i$ and $j$ from $v^* \\cdot (x_i - x_j)$. In this sense, $v^*$ can be thought of as a “sweep-line” direction through the space containing the word vectors, such that the ordering along this sweep-line is the predicted ordering of all binomials in the corpus. This yields surprisingly accurate results, with accuracy ranging from 70% to 85% across various subreddits (Table TABREF20), and 80-100% accuracy on frozen binomials. This is by far the best prediction method we tested. It is important to note that not all words in our binomials could be associated with an embedding, so it was necessary to remove binomials containing words such as names or slang. However, retesting our basic features on this data set did not show any improvement, implying that the drastic change in predictive power is not due to the changed data set." ], [ "Proper nouns, and names in particular, have been a focus within the literature on frozen binomials BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20, BIBREF28, but these studies have largely concentrated on the effect of gender in ordering BIBREF8, BIBREF5, BIBREF18, BIBREF3, BIBREF13, BIBREF9, BIBREF6, BIBREF19, BIBREF20.
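As a brief aside on the word-embedding predictor from the previous subsection, the following is a minimal sketch of one way to fit the “sweep-line” direction $v^*$ (our assumptions: gensim's downloadable Google News vectors and scikit-learn's logistic regression; the training details and the labeled_pairs input of ((word_i, word_j), word_i_first) examples are illustrative, not taken from the original setup):

```python
# Learn a direction v* so that sign(v* . (x_i - x_j)) predicts whether word i
# precedes word j. Illustrative sketch only.
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

vectors = api.load("word2vec-google-news-300")    # pretrained Google News vectors

def make_features(labeled_pairs):
    X, y = [], []
    for (w1, w2), w1_first in labeled_pairs:
        if w1 in vectors and w2 in vectors:       # skip names/slang without vectors
            X.append(vectors[w1] - vectors[w2])   # difference vector x_i - x_j
            y.append(int(w1_first))
    return np.array(X), np.array(y)

labeled_pairs = [(("bread", "butter"), True), (("butter", "bread"), False)]
X, y = make_features(labeled_pairs)
clf = LogisticRegression().fit(X, y)              # clf.coef_[0] plays the role of v*
diff = (vectors["salt"] - vectors["pepper"]).reshape(1, -1)
print(clf.predict(diff))                          # 1 if "salt" is predicted first
```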
With Reddit data, however, we have many conversations about large numbers of celebrities, with significant background information on each. As such, we can investigate proper nouns in three subreddits: r/nba, r/nfl, and r/politics. The names we used are from NBA and NFL players (1970–2019) and congresspeople (pre-1800 and 2000–2019) respectively. We also investigated names of entities for which users might feel a strong sense of identification, such as a team or political group they support, or a subreddit to which they subscribe. We hypothesized that the group with which the user identifies the most would come first in binomial orderings. Inspired by the `Me First Principle', we call this the Proximity Principle." ], [ "First, we examined names in r/nba. One advantage of using NBA players is that we have detailed statistics for every player in every year. We tested a number of these statistics, and while all of them predicted statistically significant numbers ($p <$ 1e-6) of binomials, they were still not very predictive in a practical sense (Table TABREF23). The best predictor was actually how often the player's team was mentioned. Interestingly, the unigram frequency (number of times the player's name was mentioned overall) was not a good predictor. It is relevant to these observations that some team subreddits (and thus, presumably, fanbases) are significantly larger than others." ], [ "Additionally, we investigated lists of names of sports teams and subreddits as proper nouns. In this case we exploit an interesting structure of the r/nba subreddit which is not evident at scale in other subreddits we examined. In addition to r/nba, there exist a number of subreddits that are affiliated with a particular NBA team, with the purpose of allowing discussion between fans of that team. This implies that most users in a team subreddit are fans of that team. We are then able to look for lists of NBA teams by name, city, and abbreviation. We found 2520 instances of the subreddit team coming first, and 1894 instances of the subreddit team coming second. While this is not a particularly strong predictor, correctly predicting 57% of lists, it is one of the strongest we found, and a clear illustration of the Proximity Principle.", "We can do a similar calculation with subreddit names, by looking between subreddits. While the team subreddits are not large enough for this calculation, many of the other subreddits are. We find that lists of subreddits in r/nba that include `r/nba' often start with `r/nba', and a similar result holds for r/nfl (Table TABREF25).", "While NBA team subreddits show a fairly strong preference to name themselves first, this preference is slightly less strong among sport subreddits, and even less strong among politics subreddits. One potential factor here is that r/politics is a more general subreddit, while the rest are more specific — perhaps akin to r/nba and the team subreddits." ], [ "In our case, political names are drawn from every congressperson (and their nicknames) in both houses of the US Congress through the 2018 election. It is worth noting that one of these people is Philadelph Van Trump. It is presumed that most references to `trump' refer to Donald Trump. There may be additional instances of mistaken identities. We restrict the names to only congresspeople that served before 1801 or after 1999, also including `trump'.", "One might guess that political subreddits refer to politicians of their preferred party first.
However, this was not the case, as Republicans are mentioned first only about 43%–46% of the time in all subreddits (Table TABREF27). On the other hand, the Proximity Principle does seem to come into play when discussing ideology. For instance, r/politics — a left-leaning subreddit — is more likely to say `democrats and republicans' while the other political subreddits in our study — which are right-leaning — are more likely to say `republicans and democrats'.", "Another relevant measure for lists of proper nouns is the ratio of the number of list instances containing a name to the unigram frequency of that name. We restrict our investigation to names that are not also English words, and only names that have a unigram frequency of at least 30. The average ratio is 0.0535, but there is significant variation across names. It is conceivable that this list ratio is revealing about how often people are talked about alone instead of in company." ], [ "While Reddit provides a very large corpus of informal text, McGuire and McGuire make a distinct separation between informal and formal text BIBREF28. As such, we briefly analyze highly stylized wine reviews and news articles from a diverse set of publications. Both data sets follow the same basic principles outlined above." ], [ "Wine reviews are a highly stylized form of text. In this case reviews are often just a few sentences, and they use a specialized vocabulary meant for wine tasting. While one might hypothesize that such stylized text exhibits more frozen binomials, this is not the case (Tab TABREF28). There is some evidence of an additional freezing effect in binomials such as ('aromas', 'flavors') and ('scents', 'flavors') which both are frozen in the wine reviews, but are not frozen on Reddit. However, this does not seem to have a more general effect. Additionally, there are a number of binomials which appear frozen on Reddit, but have low asymmetry in the wine reviews, such as ['lemon', 'lime']." ], [ "We focused our analysis on NYT, Buzzfeed, Reuters, CNN, the Washington Post, NPR, Breitbart, and the Atlantic. Much like in political subreddits, one might expect to see a split between various publications based upon ideology. However, this is not obviously the case. While there are certainly examples of binomials that seem to differ significantly for one publication or for a group of publications (Buzzfeed, in particular, frequently goes against the grain), there does not seem to be a sharp divide. Individual examples are difficult to draw conclusions from, but can suggest trends. (`China', `Russia') is a particularly controversial binomial. While the publications vary quite a bit, only Breitbart has an ordinality of above 0.5. In fact, country pairs are among the most controversial binomials within the publications (e.g. (`iraq', `syria'), (`afghanisatan', `iraq')), while most other highly controversial binomials reflect other political structures, such as (`house', `senate'), (`migrants', 'refugees'), and (`left', `right'). That so many controversial binomials reflect politics could point to subtle political or ideological differences between the publications. Additionally, the close similarity between Breitbart and more mainstream publications could be due to a similar effect we saw with r/The_Donald - mainly large amounts of quoted text." ], [ "We can discover new structure in binomial orderings by taking a more global view. We do this by building directed graphs based on ordinality. 
In these graphs, nodes are words and an arrow from A to B indicates that there are at least 30 lists containing A and B and that those lists have order [A,B] at least 50% of the time. For our visualizations, the size of the node indicates how many distinct lists the word appears in,and color indicates how many list instances contain the word in total.", "If we examine the global structure for r/nba, we can pinpoint a number of patterns (Fig. FIGREF31). First, most nodes within the purple circle correspond to names, while most nodes outside of it are not names. The cluster of circles in the lower left are a combination of numbers and years, where dark green corresponds to numbers, purple corresponds to years, and pink corresponds years represented as two-digit numbers (e.g., `96'). On the right, the brown circle contains adjectives, while above the blue circle contains heights (e.g., 6'5\"), and in the two circles in the lower middle, the left contains cities while the right contains team names. The darkest red node in the center of the graph corresponds to `lebron'.", "Constructing a similar graph for our wines dataset, we can see clusters of words. In Fig FIGREF32, the colors represent clusters as formed through modularity. These clusters are quite distinct. Green nodes mostly refer to the structure or body of a wine, red are adjectives describing taste, teal and purple are fruits, dark green is wine varietals, gold is senses, and light blue is time (e.g. `year', `decade', etc.)", "We can also consider the graph as we change the threshold of asymmetry for which an edge is included. If the asymmetry is large enough, the graph is acyclic, and we can consider how small the ordinality threshold must be in order to introduce a cycle. These cycles reveal the non-global ordering of binomials. The graph for r/nba begins to show cycles with a threshold asymmetry of 0.97. Three cycles exist at this threshold: [`ball', `catch', `shooter'], [`court', `pass', `set', `athleticism'], and [`court', `plays', `set', `athleticism'].", "Restricting the nodes to be names is also revealing. Acyclic graphs in this context suggest a global partial hierarchy of individuals. For r/nba, the graph is no longer acyclic at an asymmetry threshold of 0.76, with the cycle [`blake', `jordan', `bryant', `kobe']. Similarly, the graph for r/nfl (only including names) is acyclic until the threshold reaches 0.73 with cycles [`tannehill', `miller', `jj watt', `aaron rodgers', `brady'], and [`hoyer', `savage', `watson', `hopkins', `miller', `jj watt', `aaron rodgers', `brady'].", "Figure FIGREF33 shows these graphs for the three political subreddits, where the nodes are the 30 most common politician names. The graph visualizations immediately show that these communities view politicians differently. We can also consider cycles in these graphs and find that the graph is completely acyclic when the asymmetry threshold is at least 0.9. Again, this suggests that, at least among frozen binomials, there is in fact a global partial order of names that might signal hierarchy. (Including non-names, though, causes the r/politics graph to never be acyclic for any asymmetry threshold, since the cycle [`furious', `benghazi', `fast'] consists of completely frozen binomials.) We find similar results for r/Conservative and r/Libertarian, which are acyclic with thresholds of 0.58 and 0.66, respectively. Some of these cycles at high asymmetry might be due to English words that are also names (e.g. 
`law'), but one particularly notable cycle from r/Conservative is [`rubio', `bush', `obama', `trump', `cruz']." ], [ "Binomials are the most studied type of list, but trinomials — lists of three — are also common enough in our dataset to analyze. Studying trinomials adds new aspects to the set of questions: for example, while binomials have only two possible orderings, trinomials have six possible orderings. However, very few trinomials show up in all six orderings. In fact, many trinomials show up in exactly one ordering: about 36% of trinomials being completely frozen amongst trinomials appearing at least 30 times in the data. To get a baseline comparison, we found an equal number of the most common binomials, and then subsampled instances of those binomials to equate the number of instances with the trinomials. In this case, only 21% of binomials are frozen. For trinomials that show up in at least two orderings, it is most common for the last word to keep the same position (e.g., [a, b, c] and [b, a, c]). For example, in our data, [`fraud', `waste', `abuse'] appears 34 times, and [`waste', `fraud', `abuse'] appears 20 times. This may partially be explained by many lists that contain words such as `other', `whatever', or `more'; for instance, [`smarter', `better', `more'] and [`better', `smarter', `more'] are the only two orderings we observe for this set of three words.", "Additionally, each trinomial [a, b, c] contains three binomials within it: [a, b], [b, c], and [a, c]. It is natural to compare orderings of {a, b} in general with orderings of occurrences of {a, b} that lie inside trinomials. We use this comparison to define the compatibility of {a, b}, as follows.", "Compatibility Let {a, b} be a binomial with dominant ordering [a, b]; that is, [a, b] is at least as frequent as [b, a]. We define the compatibility of {a, b} to be the fraction of instances of {a, b} occurring inside trinomials that have the order [a,b].", "There are only a few cases where binomials have compatibility less than 0.5, and for most binomials, the asymmetry is remarkably consistent between binomials and trinomials (Fig. FIGREF37). In general, asymmetry is larger than compatibility — this occurs for 4569 binomials, compared to 3575 where compatibility was greater and 690 where the two values are the same. An extreme example is the binomial {`fairness', `accuracy'}, which has asymmetry 0.77 and compatibility 0.22. It would be natural to consider these questions for tetranomials and longer lists, but these are rarer in our data and correspondingly harder to draw conclusions from." ], [ "Analyzing binomial orderings on a large scale has led to surprising results. Although most binomials are not frozen in the traditional sense, there is little movement in their ordinality across time or communities. A list that appears in the order [A, B] 60% of the time in one subreddit in one year is likely to show up as [A, B] very close to 60% of the time in all subreddits in all years. This suggests that binomial order should be predictable, but there is evidence that this is difficult: the most common theories on frozen binomial ordering were largely ineffective at predicting binomial ordering in general.", "Given the challenge in predicting orderings, we searched for methods or principles that could yield better performance, and identified two promising approaches. First, models built on standard word embeddings produce predictions of binomial orders that are much more effective than simpler existing theories. 
Second, we established the Proximity Principle: the proper noun with which a speaker identifies more will tend to come first. This is evidenced when commenters refer to their sports team first, or politicians refer to their party first. Further analysis of the global structure of binomials reveals interesting patterns and a surprising acyclic nature in names. Analysis of longer lists in the form of multinomials suggests that the rules governing their orders may be different.", "We have also found promising results in some special cases. We expect that more domain-specific studies will offer rich structure.", "It is a challenge to adapt the long history of work on the question of frozen binomials to the large, messy environment of online text and social media. However, such data sources offer a unique opportunity to re-explore and redefine these questions. It seems that binomial orderings offer new insights into language, culture, and human cognition. Understanding what changes in these highly stable conventions mean — and whether or not they can be predicted — is an interesting avenue for future research." ], [ "The authors thank members of the Cornell AI, Policy, and Practice Group, and (alphabetically by first name) Cristian Danescu-Niculescu-Mizil, Ian Lomeli, Justine Zhang, and Kate Donahue for aid in accessing data and their thoughtful insight. This research was supported by NSF Award DMS-1830274, ARO Award W911NF19-1-0057, a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, and ARO MURI." ] ] }
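To make the by-token and by-type evaluation measures (UO, WO, UT, WT) defined in the prediction section above concrete, here is a minimal Python sketch. The `counts` dictionary of per-pair instance counts and the `predict` callable are illustrative assumptions, not names from the paper's released code.

```python
def ordering_scores(counts, predict):
    # counts: {(x, y): (n_xy, n_yx)} -- instance counts for each unordered pair {x, y}
    # predict(x, y) -> True if the predicted order is [x, y], False if [y, x]
    type_acc, type_n = [], []
    for (x, y), (n_xy, n_yx) in counts.items():
        hits = n_xy if predict(x, y) else n_yx
        total = n_xy + n_yx
        type_acc.append(hits / total)
        type_n.append(total)
    n_types, n_tokens = len(type_acc), sum(type_n)
    uo = sum(type_acc) / n_types                                  # unweighted token score (macro)
    wo = sum(a * n for a, n in zip(type_acc, type_n)) / n_tokens  # weighted token score (micro)
    in_s = [a >= 0.5 for a in type_acc]   # S: types predicted correctly at least half the time
    ut = sum(in_s) / n_types                                      # unweighted type score
    wt = sum(n for ok, n in zip(in_s, type_n) if ok) / n_tokens   # weighted type score
    return uo, wo, ut, wt
```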
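The word-embedding "sweep-line" predictor described above (a logistic model on the difference vector of the two word embeddings) can be sketched as below. The use of gensim and scikit-learn, the embedding file name, and the `binomials` mapping from each pair to its dominant orientation are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from gensim.models import KeyedVectors
from sklearn.linear_model import LogisticRegression

emb = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

X, y = [], []
for (a, b), label in binomials.items():   # label = 1 if the dominant order is [a, b], else 0
    if a in emb and b in emb:             # drop names/slang missing from the embeddings
        X.append(emb[a] - emb[b])         # difference vector x_a - x_b
        y.append(label)

clf = LogisticRegression(fit_intercept=False, max_iter=1000).fit(np.array(X), y)
v_star = clf.coef_[0]                     # the learned sweep-line direction v*

def predict_order(a, b):
    # Predict [a, b] when v* . (x_a - x_b) > 0, otherwise [b, a].
    return (a, b) if v_star @ (emb[a] - emb[b]) > 0 else (b, a)
```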
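The directed ordinality graphs and the acyclicity-threshold analysis from the global-structure section can be sketched with networkx. The 30-instance cutoff mirrors the text; the function and variable names are assumptions, and listing a single example cycle stands in for the fuller cycle enumeration reported in the paper.

```python
import networkx as nx

def ordinality_graph(counts, min_count=30, threshold=0.5):
    # counts: {(a, b): (n_ab, n_ba)}.  Add edge a -> b when the pair occurs at least
    # min_count times and the fraction of instances ordered [a, b] is >= threshold.
    G = nx.DiGraph()
    for (a, b), (n_ab, n_ba) in counts.items():
        total = n_ab + n_ba
        if total < min_count:
            continue
        if n_ab / total >= threshold:
            G.add_edge(a, b, weight=n_ab / total)
        if n_ba / total >= threshold:
            G.add_edge(b, a, weight=n_ba / total)
    return G

# Raise the asymmetry threshold until the graph becomes acyclic.
for t in (0.5, 0.76, 0.9, 0.97):
    G = ordinality_graph(counts, threshold=t)
    if nx.is_directed_acyclic_graph(G):
        print(t, "acyclic")
    else:
        print(t, "example cycle:", nx.find_cycle(G))
```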
{ "question": [ "How is order of binomials tracked across time?", "What types of various community texts have been investigated for exploring global structure of binomials?", "Are there any new finding in analasys of trinomials that was not present binomials?", "What new model is proposed for binomial lists?", "How was performance of previously proposed rules at very large scale?", "What previously proposed rules for predicting binoial ordering are used?", "What online text resources are used to test binomial lists?" ], "question_id": [ "62736ad71c76a20aee8e003c462869bab4ab4b1e", "aaf50a6a9f449389ef212d25d0fae59c10b0df92", "a1917232441890a89b9a268ad8f987184fa50f7a", "574f17134e4dd041c357ebb75a7ef590da294d22", "41fd359b8c1402b31b6f5efd660143d1414783a0", "d216d715ec27ee2d4949f9e908895a18fb3238e2", "ba973b13f26cd5eb1da54663c0a72842681d5bf5" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "draw our data from news publications, wine reviews, and Reddit", "develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time", " develop a null model to determine how much variation in binomial orderings we might expect across communities and across time" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means.", "We also address temporal and community structure in collections of binomials. While it has been recognized that the orderings of binomials may change over time or between communities BIBREF5, BIBREF10, BIBREF1, BIBREF13, BIBREF14, BIBREF15, there has been little analysis of this change. We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language. 
For example, in one community, we find that over a period of 10 years, the binomial `son and daughter' went from nearly frozen to appearing in that order only 64% of the time.", "While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable." ], "highlighted_evidence": [ " We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. ", "We develop new metrics for the agreement of binomial orderings across communities and the movement of binomial orderings over time. Using subreddits as communities, these metrics reveal variations in orderings, some of which suggest cultural change influencing language.", "We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. " ] } ], "annotation_id": [ "65a5a170ba79e7bab0d3d824da5de4ce311e8d75" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "news publications, wine reviews, and Reddit" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means." ], "highlighted_evidence": [ "We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time" ] } ], "annotation_id": [ "071407114a5d8102f0ad0283acef6de947c039b4" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Trinomials are likely to appear in exactly one order" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Finally, we expand our work to include multinomials, which are lists of more than two words. There already appears to be more structure in trinomials (lists of three) compared to binomials. 
Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. For instance, in one section of our Reddit data, `Fraud, Waste, and Abuse' appears 34 times, and `Waste, Fraud, and Abuse' appears 20 times. This could point to, for example, recency principles being more important in lists of three than in lists of two. While multinomials were in principle part of the scope of past research in this area, they were difficult to study in smaller corpora, suggesting another benefit of working at our current scale." ], "highlighted_evidence": [ "Trinomials are likely to appear in exactly one order, and when they appear in more than one order the last word is almost always the same across all instances. " ] } ], "annotation_id": [ "0f4e0f61be03d73a6d50e805aa571ef59f50e865" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "null model " ], "yes_no": null, "free_form_answer": "", "evidence": [ "While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable." ], "highlighted_evidence": [ "We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. " ] } ], "annotation_id": [ "b9b421c58ee80dc1f9029311af759d9407f8222a" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " close to random," ], "yes_no": null, "free_form_answer": "", "evidence": [ "Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials." ], "highlighted_evidence": [ "We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials." 
] } ], "annotation_id": [ "1a5165a650ad7c47f5b78bd801f26acf2d4144a3" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "word length, number of phonemes, number of syllables, alphabetical order, and frequency" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Basic Features. We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. We collect all binomials but make predictions only on binomials appearing at least 30 times total, stratified by subreddit. However, none of these features appear to be particularly predictive across the board (Table TABREF15). A simple linear regression model predicts close to random, which bolsters the evidence that these classical rules for frozen binomials are not predictive for general binomials." ], "highlighted_evidence": [ "We first use predictors based on rules that have previously been proposed in the literature: word length, number of phonemes, number of syllables, alphabetical order, and frequency. " ] } ], "annotation_id": [ "2acf3577a77bde41a1da2844f07876b1300ce3f9" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "news publications, wine reviews, and Reddit" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means." ], "highlighted_evidence": [ " We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. " ] } ], "annotation_id": [ "0971b1e3ee95fa2e08f8208f1246900d5b33da37" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Figure 1: Histogram of comment timestamps for r/nba and r/nfl. Both subreddits exhibit a seasonal structure. The number of comments is increasing for all subreddits.", "Table 1: Summary statistics of subreddit list data that we investigate in this paper.", "Figure 2: A histogram of the log frequency of lists of various lengths, wherewe use name lists for r/nba. In this case, there is no filtering applied, but we cap list length at 50.", "Table 2: Covariance table for movement in r/nba, r/nfl and r/politics.", "Figure 3: 309 binomials that occur at least 30 times per year in r/politics, r/nba, and r/nfl mapped on to the 3- dimensional cube. The point on the bottom left is {‘10’, ‘20’}.", "Figure 4: Histograms of the alphabetical orientation of the 14920 most common binomials within r/nba, r/nfl and r/politics. Note that while there are many frozen binomials (with orientation of 0 or 1), the rest of the binomials appear to be roughly normally distributed around 0.5.", "Table 3: The average difference in asymmetry between the same binomial in various subreddits. The difference between r/nba and r/nfl is 0.062.", "Figure 5: Histogram of the maximum difference in pl for all lists l across communities and years, on a log-log scale. We add 0.01 to all differences to show cases with a difference of 0, which is represented as the bar on the left of the graph (mostly due to frozen binomials). We sampled 40000 instances for this graph, since there was variation in the number of binomials across years and communities.", "Table 4: Accuracy of binomial orientation predictions using a number of basic rules. The scoring was done based on “unweighted type” scoring, and statistics are given based on the scores across the subreddits.", "Figure 6: Histogram of asymmetry for lists of names in r/nfl, r/nba and r/politics.", "Table 5: Count for number of syllables in first and second word of all binomials in r/politics. First word is rows, second word is columns. Overall, shorter words are significantly more likely to come before longer words (see also Table 6).", "Table 7: Paired prediction results.", "Table 11: If two sports subreddits are listed in a sports subreddit, the subreddit of origin (r/nba in top row, r/nfl in bottom row) usually comes first, in terms of the weighted token evaluation (number of occurrences in parentheses). A ‘-’ means that there are fewer than 30 such lists.", "Table 8: The accuracy using \"unweighted type\" for only frozen binomials, here defined as binomials with asymmetry above 0.97. The results suggest that these rules are equally ineffective for frozen and non-frozen binomials.", "Table 9: Results of logistic regression based on word embeddings. This is by far our most successful model. Note that not all words in our binomials were found in the word embeddings, leaving about 70–97% of the binomials usable.", "Table 12: Political name ordering by party across political subreddits. Note that r/politics is left-leaning.", "Figure 7: The r/nba binomial graph, where nodes are words and directed edges indicate binomial orientation.", "Figure 8: The wines binomial graph, where nodes are words and directed edges indicate binomial orientation.", "Table 13: Number of total lists (log scale) and percent of lists that are frozen. There is no correlation between size and frozenness, but note that news is far more frozen than any other data source.", "Figure 9: Graphs of some of the 30 most common names in r/Conservative, r/Libertarian, and r/politics. 
Nodes are names, and an edge from A to B represents a list where the dominant order is [A,B]. Node size is the number of lists the word comes first in, node color is the total number of lists the node shows up in, edge color is the asymmetry of the list.", "Figure 10: Histogram of difference in asymmetry and compatibility for binomials within trinomials on r/politics." ], "file": [ "3-Figure1-1.png", "4-Table1-1.png", "4-Figure2-1.png", "5-Table2-1.png", "5-Figure3-1.png", "5-Figure4-1.png", "6-Table3-1.png", "6-Figure5-1.png", "6-Table4-1.png", "7-Figure6-1.png", "7-Table5-1.png", "8-Table7-1.png", "8-Table11-1.png", "8-Table8-1.png", "8-Table9-1.png", "8-Table12-1.png", "9-Figure7-1.png", "9-Figure8-1.png", "9-Table13-1.png", "11-Figure9-1.png", "11-Figure10-1.png" ] }
1904.08386
Casting Light on Invisible Cities: Computationally Engaging with Literary Criticism
Literary critics often attempt to uncover meaning in a single work of literature through careful reading and analysis. Applying natural language processing methods to aid in such literary analyses remains a challenge in digital humanities. While most previous work focuses on "distant reading" by algorithmically discovering high-level patterns from large collections of literary works, here we sharpen the focus of our methods to a single literary theory about Italo Calvino's postmodern novel Invisible Cities, which consists of 55 short descriptions of imaginary cities. Calvino has provided a classification of these cities into eleven thematic groups, but literary scholars disagree as to how trustworthy his categorization is. Due to the unique structure of this novel, we can computationally weigh in on this debate: we leverage pretrained contextualized representations to embed each city's description and use unsupervised methods to cluster these embeddings. Additionally, we compare results of our computational approach to similarity judgments generated by human readers. Our work is a first step towards incorporating natural language processing into literary criticism.
{ "section_name": [ "Introduction", "Literary analyses of Invisible Cities", "A Computational Analysis", "Embedding city descriptions", "Clustering city representations", "Evaluating clustering assignments", "Quantitative comparison", "Examining the learned clusters", "Related work", "Conclusion", "Acknowledgement" ], "paragraphs": [ [ "Literary critics form interpretations of meaning in works of literature. Building computational models that can help form and test these interpretations is a fundamental goal of digital humanities research BIBREF0 . Within natural language processing, most previous work that engages with literature relies on “distant reading” BIBREF1 , which involves discovering high-level patterns from large collections of stories BIBREF2 , BIBREF3 . We depart from this trend by showing that computational techniques can also engage with literary criticism at a closer distance: concretely, we use recent advances in text representation learning to test a single literary theory about the novel Invisible Cities by Italo Calvino.", "Framed as a dialogue between the traveler Marco Polo and the emperor Kublai Khan, Invisible Cities consists of 55 prose poems, each of which describes an imaginary city. Calvino categorizes these cities into eleven thematic groups that deal with human emotions (e.g., desires, memories), general objects (eyes, sky, signs), and unusual properties (continuous, hidden, thin). Many critics argue that Calvino's labels are not meaningful, while others believe that there is a distinct thematic separation between the groups, including the author himself BIBREF4 . The unique structure of this novel — each city's description is short and self-contained (Figure FIGREF1 ) — allows us to computationally examine this debate.", "As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects.", "While prior work has computationally analyzed a single book BIBREF7 , our work goes beyond simple word frequency or n-gram counts by leveraging the power of pretrained language models to engage with literary criticism. Admittedly, our approach and evaluations are specific to Invisible Cities, but we believe that similar analyses of more conventionally-structured novels could become possible as text representation methods improve. We also highlight two challenges of applying computational methods to literary criticisms: (1) text representation methods are imperfect, especially when given writing as complex as Calvino's; and (2) evaluation is difficult because there is no consensus among literary critics on a single “correct” interpretation." ], [ "Before describing our method and results, we first review critical opinions on both sides of whether Calvino's thematic groups meaningfully characterize his city descriptions." ], [ "We focus on measuring to what extent computers can recover Calvino's thematic groupings when given just raw text of the city descriptions. 
At a high level, our approach (Figure FIGREF4 ) involves (1) computing a vector representation for every city and (2) performing unsupervised clustering of these representations. The rest of this section describes both of these steps in more detail." ], [ "While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm." ], [ "Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( $\\frac{55!}{(5!)^{11} \\cdot 11!}$ possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters $\\Omega = \\lbrace \\omega _1, \\ldots , \\omega _K \\rbrace $ and ground-truth clusters $\\mathbb {C} = \\lbrace c_1, \\ldots , c_J \\rbrace $ that both partition a set of $N$ data points, $\\textrm {purity}(\\Omega , \\mathbb {C}) = \\frac{1}{N} \\sum _k \\max _j |\\omega _k \\cap c_j|$" ], [ "While the results from the above section allow us to compare our three computational methods against each other, we additionally collect human judgments to further ground our results. In this section, we first describe our human experiment before quantitatively analyzing our results." ], [ "We compare clusters computed on different representations using community purity; additionally, we compare these computational methods to humans by their accuracy on the odd-one-out task.", "City representations computed using language model-based representations (ELMo and BERT) achieve significantly higher purity than a clustering induced from random representations, indicating that there is at least some meaningful coherence to Calvino's thematic groups (first row of Table TABREF11 ). ELMo representations yield the highest purity among the three methods, which is surprising as BERT is a bigger model trained on data from books (among other domains). Both ELMo and BERT outperform GloVe, which intuitively makes sense because the latter does not model the order or structure of the words in each description.", "While the purity of our methods is higher than that of a random clustering, it is still far below 1.
To provide additional context to these results, we now switch to our “odd-one-out” task and compare directly to human performance. For each triplet of cities, we identify the intruder as the city with the maximum Euclidean distance from the other two. Interestingly, crowd workers achieve only slightly higher accuracy than ELMo city representations; their interannotator agreement is also low, which indicates that close reading to analyze literary coherence between multiple texts is a difficult task, even for human annotators. Overall, results from both computational and human approaches suggests that the author-assigned labels are not entirely arbitrary, as we can reliably recover some of the thematic groups." ], [ "Our quantitative results suggest that while vector-based city representations capture some thematic similarities, there is much room for improvement. In this section, we first investigate whether the learned clusters provide evidence for any arguments put forth by literary critics on the novel. Then, we explore possible reasons that the learned clusters deviate from Calvino's." ], [ "Most previous work within the NLP community applies distant reading BIBREF1 to large collections of books, focusing on modeling different aspects of narratives such as plots and event sequences BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , characters BIBREF2 , BIBREF26 , BIBREF27 , BIBREF28 , and narrative similarity BIBREF3 . In the same vein, researchers in computational literary analysis have combined statistical techniques and linguistics theories to perform quantitative analysis on large narrative texts BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , but these attempts largely rely on techniques such as word counting, topic modeling, and naive Bayes classifiers and are therefore not able to capture the meaning of sentences or paragraphs BIBREF34 . While these works discover general patterns from multiple literary works, we are the first to use cutting-edge NLP techniques to engage with specific literary criticism about a single narrative.", "There has been other computational work that focuses on just a single book or a small number of books, much of it focused on network analysis: BIBREF35 extract character social networks from Alice in Wonderland, while BIBREF36 recover social networks from 19th century British novels. BIBREF37 disentangles multiple narrative threads within the novel Infinite Jest, while BIBREF7 provides several automated statistical methods for close reading and test them on the award-winning novel Cloud Atlas (2004). Compared to this work, we push further on modeling the content of the narrative by leveraging pretrained language models." ], [ "Our work takes a first step towards computationally engaging with literary criticism on a single book using state-of-the-art text representation methods. While we demonstrate that NLP techniques can be used to support literary analyses and obtain new insights, they also have clear limitations (e.g., in understanding abstract themes). As text representation methods become more powerful, we hope that (1) computational tools will become useful for analyzing novels with more conventional structures, and (2) literary criticism will be used as a testbed for evaluating representations." ], [ "We thank the anonymous reviewers for their insightful comments. 
Additionally, we thank Nader Akoury, Garrett Bernstein, Chenghao Lv, Ari Kobren, Kalpesh Krishna, Saumya Lal, Tu Vu, Zhichao Yang, Mengxue Zhang and the UMass NLP group for suggestions that improved the paper's clarity, coverage of related work, and analysis experiments." ] ] }
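The city-embedding step described in the paper text above (a TF-IDF-weighted mean of layer-averaged contextual token vectors, followed by PCA to 40 dimensions) might look roughly like the following sketch. The HuggingFace BERT classes stand in for the paper's pretrained encoders, and the IDF weighting over BERT wordpieces is an approximation; `city_texts` is an assumed list of the 55 descriptions.

```python
import numpy as np
import torch
from transformers import BertTokenizer, BertModel
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True).eval()

def embed_city(text, idf):
    # TF-IDF-weighted mean of token vectors, averaging all BERT layers per token.
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        layers = bert(**enc).hidden_states                 # tuple of (1, T, 768) tensors
    token_vecs = torch.stack(layers).mean(0)[0].numpy()    # layer average -> (T, 768)
    pieces = tok.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    weights = np.array([idf.get(p, 1.0) for p in pieces])  # wordpiece/vocab mismatch glossed over
    return (weights[:, None] * token_vecs).sum(0) / weights.sum()

tfidf = TfidfVectorizer().fit(city_texts)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))
city_vecs = np.stack([embed_city(t, idf) for t in city_texts])
city_vecs = PCA(n_components=40).fit_transform(city_vecs)  # reduce to 40 dims as in the paper
```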
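The constrained clustering procedure and the purity measure described above can be sketched as a simple hill-climbing search over equal-size assignments. The definition of "cluster strength" below is one reasonable reading of the paper's relative intra-/inter-group distance difference, and the iteration budget is an assumption.

```python
import numpy as np

def cluster_strength(X, assign):
    # Relative gap between mean inter-cluster and mean intra-cluster Euclidean distance.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    same = assign[:, None] == assign[None, :]
    np.fill_diagonal(same, False)
    diff = ~same
    np.fill_diagonal(diff, False)
    return (D[diff].mean() - D[same].mean()) / D[diff].mean()

def constrained_clusters(X, k=11, size=5, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    assign = np.repeat(np.arange(k), size)      # exactly `size` cities per cluster
    rng.shuffle(assign)
    best = cluster_strength(X, assign)
    for _ in range(iters):
        i, j = rng.choice(len(X), size=2, replace=False)
        if assign[i] == assign[j]:
            continue
        assign[i], assign[j] = assign[j], assign[i]      # propose a membership swap
        s = cluster_strength(X, assign)
        if s > best:
            best = s                                     # accept: strength increased
        else:
            assign[i], assign[j] = assign[j], assign[i]  # reject: undo the swap
    return assign

def purity(pred, gold):
    # Fraction of points whose predicted cluster's majority ground-truth label matches.
    correct = 0
    for c in set(pred):
        members = [g for p, g in zip(pred, gold) if p == c]
        correct += max(members.count(g) for g in set(members))
    return correct / len(gold)
```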
{ "question": [ "How do they model a city description using embeddings?", "How do they obtain human judgements?", "Which clustering method do they use to cluster city description embeddings?" ], "question_id": [ "508580af51483b5fb0df2630e8ea726ff08d537b", "89d1687270654979c53d0d0e6a845cdc89414c67", "fc6cfac99636adda28654e1e19931c7394d76c7c" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm." ], "yes_no": null, "free_form_answer": "", "evidence": [ "While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm." ], "highlighted_evidence": [ "While each of the city descriptions is relatively short, Calvino's writing is filled with rare words, complex syntactic structures, and figurative language. Capturing the essential components of each city in a single vector is thus not as simple as it is with more standard forms of text. Nevertheless, we hope that representations from language models trained over billions of words of text can extract some meaningful semantics from these descriptions. We experiment with three different pretrained representations: ELMo BIBREF5 , BERT BIBREF6 , and GloVe BIBREF18 . To produce a single city embedding, we compute the TF-IDF weighted element-wise mean of the token-level representations. For all pretrained methods, we additionally reduce the dimensionality of the city embeddings to 40 using PCA for increased compatibility with our clustering algorithm." ] } ], "annotation_id": [ "e922a0f6eac0005885474470b7736de70242bb0e" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Using crowdsourcing ", "evidence": [ "As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. 
We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects." ], "highlighted_evidence": [ "We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects." ] } ], "annotation_id": [ "0e5c9c260e8ca6a68b18fb79abfb55a275eca5ba" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4" ], "highlighted_evidence": [ "Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20 , but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ( INLINEFORM0 possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. 
To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21 : given a set of predicted clusters INLINEFORM1 and ground-truth clusters INLINEFORM2 that both partition a set of INLINEFORM3 data points, INLINEFORM4" ] } ], "annotation_id": [ "071a9ef44d77bb5d6274e45217df6ecb1025fe8d" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Figure 1: Calvino labels the thematically-similar cities in the top row as cities & the dead. However, although the bottom two cities share a theme of desire, he assigns them to different groups.", "Figure 2: We first embed each city by averaging token representations derived from a pretrained model such as ELMo. Then, we feed the city embeddings to a clustering algorithm and analyze the learned clusters.", "Table 1: Results from cluster purity and accuracy on the “odd-one-out” task suggests that Calvino’s thematic groups are not completely arbitrary." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Table1-1.png" ] }
1909.00754
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
Existing approaches to dialogue state tracking rely on pre-defined ontologies consisting of a set of all possible slot types and values. Though such approaches exhibit promising performance on single-domain benchmarks, they suffer from computational complexity that increases proportionally to the number of pre-defined slots that need tracking. This issue becomes more severe when it comes to multi-domain dialogues which include larger numbers of slots. In this paper, we investigate how to approach DST using a generation framework without the pre-defined ontology list. Given each turn of user utterance and system response, we directly generate a sequence of belief states by applying a hierarchical encoder-decoder structure. In this way, the computational complexity of our model will be a constant regardless of the number of pre-defined slots. Experiments on both the multi-domain and the single domain dialogue state tracking dataset show that our model not only scales easily with the increasing number of pre-defined domains and slots but also reaches the state-of-the-art performance.
{ "section_name": [ "Introduction", "Motivation", "Hierarchical Sequence Generation for DST", "Encoding Module", "Conditional Memory Relation Decoder", "Experimental Setting", "Implementation Details", "Results", "Ablation Study", "Qualitative Analysis", "Related Work", "Conclusion" ], "paragraphs": [ [ "A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system BIBREF7 . For each dialogue turn, a DST module takes a user utterance and the dialogue history as input, and outputs a belief estimate of the dialogue state. Then a machine action is decided based on the dialogue state according to a dialogue policy module, after which a machine response is generated.", "Traditionally, a dialogue state consists of a set of requests and joint goals, both of which are represented by a set of slot-value pairs (e.g. (request, phone), (area, north), (food, Japanese)) BIBREF8 . In a recently proposed multi-domain dialogue state tracking dataset, MultiWoZ BIBREF9 , a representation of dialogue state consists of a hierarchical structure of domain, slot, and value is proposed. This is a more practical scenario since dialogues often include multiple domains simultaneously.", "Many recently proposed DSTs BIBREF2 , BIBREF10 are based on pre-defined ontology lists that specify all possible slot values in advance. To generate a distribution over the candidate set, previous works often take each of the slot-value pairs as input for scoring. However, in real-world scenarios, it is often not practical to enumerate all possible slot value pairs and perform scoring from a large dynamically changing knowledge base BIBREF11 . To tackle this problem, a popular direction is to build a fixed-length candidate set that is dynamically updated throughout the dialogue development. cpt briefly summaries the inference time complexity of multiple state-of-the-art DST models following this direction. Since the inference complexity of all of previous model is at least proportional to the number of the slots, these models will struggle to scale to multi-domain datasets with much larger numbers of pre-defined slots.", "In this work, we formulate the dialogue state tracking task as a sequence generation problem, instead of formulating the task as a pair-wise prediction problem as in existing work. We propose the COnditional MEmory Relation Network (COMER), a scalable and accurate dialogue state tracker that has a constant inference time complexity. ", "Specifically, our model consists of an encoder-decoder network with a hierarchically stacked decoder to first generate the slot sequences in the belief state and then for each slot generate the corresponding value sequences. The parameters are shared among all of our decoders for the scalability of the depth of the hierarchical structure of the belief states. COMER applies BERT contextualized word embeddings BIBREF12 and BPE BIBREF13 for sequence encoding to ensure the uniqueness of the representations of the unseen words. The word embeddings for sequence generation are initialized and fixed with the static word embeddings generated from BERT to have the potential of generating unseen words." ], [ "f1 shows a multi-domain dialogue in which the user wants the system to first help book a train and then reserve a hotel. For each turn, the DST will need to track the slot-value pairs (e.g. (arrive by, 20:45)) representing the user goals as well as the domain that the slot-value pairs belongs to (e.g. train, hotel). 
Instead of representing the belief state via a hierarchical structure, one can also combine the domain and slot together to form a combined slot-value pair (e.g. (train; arrive by, 20:45) where the combined slot is “train; arrive by\"), which ignores the subordination relationship between the domain and the slots.", "A typical fallacy in dialogue state tracking datasets is the assumption that a slot in a belief state can only be mapped to a single value in a dialogue turn. We call this the single value assumption. Figure 2 shows an example of this fallacy from the WoZ2.0 dataset: Based on the belief state label (food, seafood), it will be impossible for the downstream module in the dialogue system to generate sample responses that return information about Chinese restaurants. A correct representation of the belief state could be (food, seafood $>$ chinese). This would tell the system to first search the database for information about seafood and then Chinese restaurants. The logical operator “ $>$ \" indicates which retrieved information should have a higher priority to be returned to the user. Thus we are interested in building DST modules capable of generating structured sequences, since this kind of sequence representation of the value is critical for accurately capturing the belief states of a dialogue." ], [ "Given a dialogue $D$ which consists of $T$ turns of user utterances and system actions, our target is to predict the state at each turn. Different from previous methods which formulate multi-label state prediction as a collection of binary prediction problems, COMER adapts the task into a sequence generation problem via a Seq2Seq framework.", "As shown in Figure 3, COMER consists of three encoders and three hierarchically stacked decoders. We propose a novel Conditional Memory Relation Decoder (CMRD) for sequence decoding. Each encoder includes an embedding layer and a BiLSTM. The encoders take in the user utterance, the previous system actions, and the previous belief states at the current turn, and encode them into the embedding space. The user encoder and the system encoder use the fixed BERT model as the embedding layer.", "Since the slot-value pairs are unordered set elements of a domain in the belief states, we first order the sequence of domains according to their frequencies in the training set BIBREF14 , and then order the slot-value pairs within each domain according to the slots' frequencies within that domain. After sorting the state elements, we represent the belief states following the paradigm: (Domain1- Slot1, Value1; Slot2, Value2; ... Domain2- Slot1, Value1; ...) for a more concise representation compared with the nested tuple representation.", "All the CMRDs take the same representations from the system encoder, user encoder and the belief encoder as part of the input. In the procedure of hierarchical sequence generation, the first CMRD takes a zero vector for its condition input $\\mathbf {c}$ , and generates a sequence of the domains, $D$ , as well as the hidden representation of domains $H_D$ . For each $d$ in $D$ , the second CMRD then takes the corresponding $h_d$ as the condition input and generates the slot sequence $S_d$ , and representations, $H_{S,d}$ . Then for each $s$ in $S_d$ , the third CMRD generates the corresponding value sequence based on the slot representation $h_s$ . We update the belief state with the newly generated (domain, slot, value) pairs and perform the procedure iteratively until a dialogue is completed.
All the CMR decoders share all of their parameters.", "Since our model generates domains and slots instead of taking pre-defined slots as inputs, and the number of domains and slots generated each turn is only related to the complexity of the contents covered in a specific dialogue, the inference time complexity of COMER is $O(1)$ with respect to the number of pre-defined slots and values." ], [ "Let $X$ represent a user utterance or system transcript consisting of a sequence of words $\\lbrace w_1,\\ldots ,w_T\\rbrace $ . The encoder first passes the sequence $\\lbrace \\mathit {[CLS]},w_1,\\ldots ,w_T,\\mathit {[SEP]}\\rbrace $ into a pre-trained BERT model and obtains its contextual embeddings $E_{X}$ . Specifically, we leverage the output of all layers of BERT and take the average to obtain the contextual embeddings.", "For each domain/slot that appears in the training set, if it has more than one word, such as `price range', `leave at', etc., we feed it into BERT and take the average of the word vectors to form the extra slot embedding $E_{s}$ . In this way, we map each domain/slot to a fixed embedding, which allows us to generate a domain/slot as a whole instead of a token at each time step of domain/slot sequence decoding. We also construct a static vocabulary embedding $E_{v}$ by feeding each token in the BERT vocabulary into BERT. The final static word embedding $E$ is the concatenation of $E_{v}$ and $E_{s}$ .", "After we obtain the contextual embeddings for the user utterance, system action, and the static embeddings for the previous belief state, we feed each of them into a Bidirectional LSTM BIBREF15 . ", "$$\\begin{aligned}\n\\mathbf {h}_{a_t} & = \\textrm {BiLSTM}(\\mathbf {e}_{X_{a_t}}, \\mathbf {h}_{a_{t-1}}) \\\\\n\\mathbf {h}_{u_t} & = \\textrm {BiLSTM}(\\mathbf {e}_{X_{u_t}}, \\mathbf {h}_{u_{t-1}}) \\\\\n\\mathbf {h}_{b_t} & = \\textrm {BiLSTM}(\\mathbf {e}_{X_{b_t}}, \\mathbf {h}_{b_{t-1}}) \\\\\n\\mathbf {h}_{a_0} & = \\mathbf {h}_{u_0} = \\mathbf {h}_{b_0} = c_{0}, \\\\\n\\end{aligned}$$ (Eq. 7) ", "where $c_{0}$ is the zero-initialized hidden state for the BiLSTM. The hidden size of the BiLSTM is $d_m/2$ . We concatenate the forward and the backward hidden representations of each token from the BiLSTM to obtain the token representation $\\mathbf {h}_{k_t}\\in R^{d_m}$ , $k\\in \\lbrace a,u,b\\rbrace $ at each time step $t$ . The hidden states of all time steps are concatenated to obtain the final representation $H_{k}\\in R^{T \\times d_m}, k \\in \\lbrace a,u,B\\rbrace $ . The parameters are shared between all of the BiLSTMs." ], [ "Inspired by Residual Dense Networks BIBREF16 , End-to-End Memory Networks BIBREF17 and Relation Networks BIBREF18 , we here propose the Conditional Memory Relation Decoder (CMRD). Given a token embedding, $\\mathbf {e}_x$ , CMRD outputs the next token, $s$ , and the hidden representation, $h_s$ , via hierarchical memory access over different encoded information sources, $H_B$ , $H_a$ , $H_u$ , and relation reasoning under a given condition $\\mathbf {c}$ , $\n\\mathbf {s}, \\mathbf {h}_s= \\textrm {CMRD}(\\mathbf {e}_x, \\mathbf {c}, H_B, H_a, H_u),\n$ ", "the final output matrices $S,H_s \\in R^{l_s\\times d_m}$ are concatenations of all generated $\\mathbf {s}$ and $\\mathbf {h}_s$ (respectively) along the sequence length dimension, where $d_m$ is the model size, and $l_s$ is the generated sequence length. The general structure of the CMR decoder is shown in Figure 4 .
Note that the CMR decoder can support additional memory sources by adding the residual connection and the attention block, but here we only show the structure with three sources: belief state representation ( $H_B$ ), system transcript representation ( $H_a$ ), and user utterance representation ( $H_u$ ), corresponding to a dialogue state tracking scenario. Since we share the parameters between all of the decoders, CMRD is actually a 2-dimensional auto-regressive model with respect to both the condition generation and the sequence generation task.", "At each time step $t$ , the CMR decoder first embeds the token $x_t$ with a fixed token embedding $E\\in R^{d_e\\times d_v}$ , where $d_e$ is the embedding size and $d_v$ is the vocabulary size. The initial token $x_0$ is “[CLS]\". The embedded vector $\\textbf {e}_{x_t}$ is then encoded with an LSTM, which emits a hidden representation $\\textbf {h}_0 \\in R^{d_m}$ , $\n\\textbf {h}_0= \\textrm {LSTM}(\\textbf {e}_{x_t},\\textbf {q}_{t-1}).\n$ ", "where $\\textbf {q}_t$ is the hidden state of the LSTM. $\\textbf {q}_0$ is initialized with an average of the hidden states of the belief encoder, the system encoder and the user encoder, which produce $H_B$ , $H_a$ , $H_u$ respectively.", " $\\mathbf {h}_0$ is then summed (element-wise) with the condition representation $\\mathbf {c}\\in R^{d_m}$ to produce $\\mathbf {h}_1$ , which is (1) fed into the attention module; (2) used for residual connection; and (3) concatenated with other $\\mathbf {h}_i$ , ( $i>1$ ) to produce the concatenated working memory, $\\mathbf {r_0}$ , for relation reasoning, $\n\\mathbf {h}_1 & =\\mathbf {h}_0+\\mathbf {c},\\\\\n\\mathbf {h}_2 & =\\mathbf {h}_1+\\text{Attn}_{\\text{belief}}(\\mathbf {h}_1,H_B),\\\\\n\\mathbf {h}_3 & = \\mathbf {h}_2+\\text{Attn}_{\\text{sys}}(\\mathbf {h}_2,H_a),\\\\\n\\mathbf {h}_4 & = \\mathbf {h}_3+\\text{Attn}_{\\text{usr}}(\\mathbf {h}_3,H_u),\\\\\n\\mathbf {r}_0 & = \\mathbf {h}_1\\oplus \\mathbf {h}_2\\oplus \\mathbf {h}_3\\oplus \\mathbf {h}_4 \\in R^{4d_m},\n$ ", " where $\\text{Attn}_k$ ( $k\\in \\lbrace \\text{belief}, \\text{sys},\\text{usr}\\rbrace $ ) are the attention modules applied respectively to $H_B$ , $H_a$ , $H_u$ , and $\\oplus $ denotes the concatenation operator. The gradients are blocked for $ \\mathbf {h}_1,\\mathbf {h}_2,\\mathbf {h}_3$ during the back-propagation stage, since we only need them to work as supplementary memories for the relation reasoning that follows.", "The attention module takes a vector, $\\mathbf {h}\\in R^{d_m}$ , and a matrix, $H\\in R^{d_m\\times l}$ , as input, where $l$ is the sequence length of the representation, and outputs $\\mathbf {h}_a$ , a weighted sum of the column vectors in $H$ . $\n\\mathbf {a} & =W_1^T\\mathbf {h}+\\mathbf {b}_1& &\\in R^{d_m},\\\\\n\\mathbf {c} &=\\text{softmax}(H^T\\mathbf {a})& &\\in R^l,\\\\\n\\mathbf {h} &=H\\mathbf {c}& &\\in R^{d_m},\\\\\n\\mathbf {h}_a &=W_2^T\\mathbf {h}+\\mathbf {b}_2& &\\in R^{d_m},\n$ ", " where the weights $W_1\\in R^{d_m \\times d_m}$ , $W_2\\in R^{d_m \\times d_m}$ and the biases $b_1\\in R^{d_m}$ , $b_2\\in R^{d_m}$ are learnable parameters.", "The order of the attention modules, i.e., first attend to the system and the user and then the belief, is decided empirically. We can interpret this hierarchical structure as the internal order for the memory processing, since, from daily life experience, people tend to attend to the most contemporary memories (system/user utterance) first and then attend to the older history (belief states).
All of the parameters are shared between the attention modules.", "The concatenated working memory, $\\mathbf {r}_0$ , is then fed into a Multi-Layer Perceptron (MLP) with four layers, $\n\\mathbf {r}_1 & =\\sigma (W_1^T\\mathbf {r}_0+\\mathbf {b}_1),\\\\\n\\mathbf {r}_2 & =\\sigma (W_2^T\\mathbf {r}_1+\\mathbf {b}_2),\\\\\n\\mathbf {r}_3 & = \\sigma (W_3^T\\mathbf {r}_2+\\mathbf {b}_3),\\\\\n\\mathbf {h}_s & = \\sigma (W_4^T\\mathbf {r}_3+\\mathbf {b}_4),\n$ ", " where $\\sigma $ is a non-linear activation, and the weights $W_1 \\in R^{4d_m \\times d_m}$ , $W_i \\in R^{d_m \\times d_m}$ and the bias $b_1 \\in R^{d_m}$ , $b_i \\in R^{d_m}$ are learnable parameters, and $2\\le i\\le 4$ . The number of layers for the MLP is decided by grid search.", "The hidden representation of the next token, $\\mathbf {h}_s$ , is then (1) emitted out of the decoder as a representation; and (2) fed into a dropout layer with drop rate $p$ , and a linear layer to generate the next token, $\n\\mathbf {h}_k & =\\text{dropout}(\\mathbf {h}_s)& &\\in R^{d_m},\\\\\n\\mathbf {h}_o & =W_k^T\\mathbf {h}_k+\\mathbf {b}_k& &\\in R^{d_e},\\\\\n\\mathbf {p}_s & =\\text{softmax}(E^T\\mathbf {h}_o)& &\\in R^{d_v},\\\\\ns & =\\text{argmax}(\\mathbf {p}_s)& &\\in R,\n$ ", " where the weight $W_k\\in R^{d_m \\times d_e}$ and the bias $b_k\\in R^{d_e}$ are learnable parameters. Since $d_e$ is the embedding size and the model parameters are independent of the vocabulary size, the CMR decoder can make predictions on a dynamic vocabulary and implicitly supports the generation of unseen words. When training the model, we minimize the cross-entropy loss between the output probabilities, $\\mathbf {p}_s$ , and the given labels." ], [ "We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. The statistics of these two datasets are shown in Table 2 .", "Based on the statistics from these two datasets, we can calculate the theoretical Inference Time Multiplier (ITM), $K$ , as a metric of scalability. Given the inference time complexity, ITM measures how many times a model will be slower when being transferred from the WoZ2.0 dataset, $d_1$ , to the MultiWoZ dataset, $d_2$ , $\nK= h(t)h(s)h(n)h(m)\\\\\n$ $\nh(x)=\\left\\lbrace \n\\begin{array}{lcl}\n1 & &O(x)=O(1),\\\\\n\\frac{x_{d_2}}{x_{d_1}}& & \\text{otherwise},\\\\\n\\end{array}\\right.\n\n$ ", "where $O(x)$ means the Inference Time Complexity (ITC) of the variable $x$ . For a model having an ITC of $O(1)$ with respect to the number of slots $n$ , and values $m$ , the ITM will be a multiplier of 2.15x, while for an ITC of $O(n)$ , it will be a multiplier of 25.1, and 1,143 for $O(mn)$ .", "As a convention, the metric of joint goal accuracy is used to compare our model to previous work. The joint goal accuracy only regards the model making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided.
This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system. In this work, the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement." ], [ "We use the $\\text{BERT}_\\text{large}$ model for both contextual and static embedding generation. All LSTMs in the model are stacked with 2 layers, and only the output of the last layer is taken as a hidden representation. ReLU non-linearity is used for the activation function, $\\sigma $ .", "The hyper-parameters of our model are identical for both the WoZ2.0 and the MultiWoZ datasets: dropout rate $p=0.5$ , model size $d_m=512$ , embedding size $d_e=1024$ . For training on WoZ2.0, the model is trained with a batch size of 32 and the ADAM optimizer BIBREF21 for 150 epochs, while for MultiWoZ, the AMSGrad optimizer BIBREF22 and a batch size of 16 are adopted for 15 epochs of training. For both optimizers, we use a learning rate of 0.0005 with a gradient clip of 2.0. We initialize all weights in our model with Kaiming initialization BIBREF23 and adopt zero initialization for the bias. All experiments are conducted on a single NVIDIA GTX 1080Ti GPU." ], [ "To measure the actual inference time multiplier of our model, we evaluate the runtime of the best-performing models on the validation sets of both the WoZ2.0 and MultiWoZ datasets. During evaluation, we set the batch size to 1 to avoid the influence of data parallelism and sequence padding. On the validation set of WoZ2.0, we obtain a runtime of 65.6 seconds, while on MultiWoZ, the runtime is 835.2 seconds. Results are averaged across 5 runs. Considering that the validation set of MultiWoZ is 5 times larger than that of WoZ2.0, the actual inference time multiplier is 2.54 for our model. Since the actual inference time multiplier is roughly of the same magnitude as the theoretical value of 2.15, we can confirm empirically that we have the $O(1)$ inference time complexity and thus obtain full scalability to the number of slots and values pre-defined in an ontology.", "Table 3 compares our model with the previous state-of-the-art on both the WoZ2.0 test set and the MultiWoZ test set. For the WoZ2.0 dataset, we maintain performance at the level of the state-of-the-art, with a marginal drop of 0.3% compared with previous work. Considering the fact that WoZ2.0 is a relatively small dataset, this small difference does not represent a significant performance drop. On the multi-domain dataset, MultiWoZ, our model achieves a joint goal accuracy of 45.72%, which is significantly better than most of the previous models other than TRADE, which applies the copy mechanism and gains better generalization ability on named-entity copying." ], [ "To prove the effectiveness of our structure of the Conditional Memory Relation Decoder (CMRD), we conduct ablation experiments on the WoZ2.0 dataset. We observe an accuracy drop of 1.95% after removing residual connections and the hierarchical stack of our attention modules. This proves the effectiveness of our hierarchical attention design. After the MLP is replaced with a linear layer of hidden size 512 and the ReLU activation function, the accuracy further drops by 3.45%.
This drop is partly due to the reduction of the number of the model parameters, but it also proves that stacking more layers in an MLP can improve the relational reasoning performance given a concatenation of multiple representations from different sources.", "We also conduct the ablation study on the MultiWoZ dataset for a more precise analysis of the hierarchical generation process. For joint domain accuracy, we calculate the probability that all domains generated in each turn are exactly matched with the labels provided. The joint domain-slot accuracy further calculates the probability that all domains and slots generated are correct, while the joint goal accuracy requires that all the domains, slots and values generated be exactly matched with the labels. From Table 5, we can further calculate that, given a correct slot prediction, COMER has an 83.52% chance of making the correct value prediction. While COMER has done a great job on domain prediction (95.53%) and value prediction (83.52%), the accuracy of the slot prediction given the correct domain is only 57.30%. We suspect that this is because we only use the previous belief state to represent the dialogue history, and the inter-turn reasoning ability on the slot prediction suffers from the limited context and the accuracy is harmed by the multi-turn mapping problem BIBREF4 . We can also see that the JDS Acc. has an absolute boost of 5.48% when we switch from the combined slot representation to the nested tuple representation. This is because the subordinate relationship between the domains and the slots can be captured by the hierarchical sequence generation, while this relationship is missed when generating the domain and slot together via the combined slot representation." ], [ "Figure 5 shows an example of the belief state prediction result in one turn of a dialogue on the MultiWoZ test set. The visualization includes the CMRD attention scores over the belief states, system transcript and user utterance during the decoding stage of the slot sequence.", "From the system attention (top right), since it is the first attention module and no previous context information is given, it can only find the information indicating the slot “departure” from the system utterance under the domain condition, and attend to the evidence “leaving” correctly during the generation step of “departure”. From the user attention, we can see that it captures the most helpful keywords that are necessary for correct prediction, such as “after\" for “day\" and “leave at”, “to\" for “destination\". Moreover, during the generation step of “departure”, the user attention successfully discerns that, based on the context, the word “leave” is not evidence that needs to be accumulated, and chooses to attend to nothing in this step. For the belief attention, we can see that the belief attention module correctly attends to a previous slot for each generation step of a slot that has been presented in the previous state. For the generation step of the new slot “destination\", since the previous state does not have the “destination\" slot, the belief attention module only attends to the `-' mark after the `train' domain to indicate that the generated word should belong to this domain." ], [ "Semi-scalable Belief Tracker BIBREF1 proposed an approach that can generate fixed-length candidate sets for each of the slots from the dialogue history.
Although they only need to perform inference for a fixed number of values, they still need to iterate over all slots defined in the ontology to make a prediction for a given dialogue turn. In addition, their method needs an external language understanding module to extract the exact entities from a dialogue to form candidates, which will not work if the label value is an abstraction and does not have an exact match with the words in the dialogue.", "StateNet BIBREF3 achieves state-of-the-art performance with the property that its parameters are independent of the number of slot values in the candidate set, and it also supports online training or inference with dynamically changing slots and values. Given a slot that needs tracking, it only needs to perform inference once to make the prediction for a turn, but this also means that its inference time complexity is proportional to the number of slots.", "TRADE BIBREF4 achieves state-of-the-art performance on the MultiWoZ dataset by applying the copy mechanism for the value sequence generation. Since TRADE takes $n$ combinations of the domains and slots as the input, the inference time complexity of TRADE is $O(n)$ . The performance improvement achieved by TRADE is mainly due to the fact that it incorporates the copy mechanism that can boost the accuracy on the ‘name’ slot, which mainly requires the ability to copy names from the dialogue history. However, TRADE does not report its performance on the WoZ2.0 dataset, which does not have the ‘name’ slot.", "DSTRead BIBREF6 formulates the dialogue state tracking task as a reading comprehension problem by asking slot-specific questions to the BERT model and finding the answer span in the dialogue history for each of the pre-defined combined slots. Thus its inference time complexity is still $O(n)$ . This method suffers from the fact that its generation vocabulary is limited to the words that occur in the dialogue history, and it has to apply a manual combination strategy with another joint state tracking model on the development set to achieve better performance.", "Contextualized Word Embedding (CWE) was first proposed by BIBREF25 . Based on the intuition that the meaning of a word is highly correlated with its context, CWE takes the complete context (sentences, passages, etc.) as the input, and outputs the corresponding word vectors that are unique under the given context. Recently, with the success of language models (e.g. BIBREF12 ) that are trained on large scale data, contextualized word embeddings have been further improved and can match the performance of (less flexible) fine-tuned pipelines.", "Sequence Generation Models. Recently, sequence generation models have been successfully applied in the realm of multi-label classification (MLC) BIBREF14 . Different from traditional binary relevance methods, the authors proposed a sequence generation model for MLC tasks which takes into consideration the correlations between labels. Specifically, the model follows the encoder-decoder structure with an attention mechanism BIBREF26 , where the decoder generates a sequence of labels. Similar to language modeling tasks, the decoder output at each time step will be conditioned on the previous predictions during generation. Therefore the correlation between generated labels is captured by the decoder."
], [ "In this work, we proposed the Conditional Memory Relation Network (COMER), the first dialogue state tracking model that has a constant inference time complexity with respect to the number of domains, slots and values pre-defined in an ontology. Besides its scalability, the joint goal accuracy of our model is also comparable with the state of the art on both the MultiWoZ dataset and the WoZ dataset. Due to the flexibility of our hierarchical encoder-decoder framework and the CMR decoder, abundant future research directions remain, such as applying the Transformer structure, incorporating an open vocabulary and a copy mechanism for explicit unseen-word generation, and inventing better dialogue history access mechanisms to accommodate efficient inter-turn reasoning.", "Acknowledgements. This work is partly supported by NSF #1750063. We thank all the reviewers for their constructive suggestions. We also want to thank Zhuowen Tu and Shengnan Zhang for the early discussions of the project." ] ] }
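The decoding step at the heart of the full text above (hierarchical residual attention over the belief, system, and user memories, followed by a four-layer MLP) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: class and variable names are ours, batching, the token LSTM, and the output projection are omitted, and the detach-based gradient blocking is one possible reading of the description.

```python
import torch
import torch.nn as nn

class Attn(nn.Module):
    # Attention block as described: project the query, softmax over memory rows,
    # return a projected weighted sum of the memory.
    def __init__(self, d_m):
        super().__init__()
        self.w1 = nn.Linear(d_m, d_m)
        self.w2 = nn.Linear(d_m, d_m)

    def forward(self, h, H):                    # h: (d_m,), H: (L, d_m)
        a = self.w1(h)                          # (d_m,)
        weights = torch.softmax(H @ a, dim=0)   # (L,)
        return self.w2(H.t() @ weights)         # (d_m,)

class CMRDStep(nn.Module):
    # One decoding step: residual attention over belief/system/user memories
    # (a single shared attention module, mirroring the parameter sharing in the
    # paper), then a four-layer MLP over the concatenated working memory.
    def __init__(self, d_m):
        super().__init__()
        self.attn = Attn(d_m)
        self.mlp = nn.Sequential(
            nn.Linear(4 * d_m, d_m), nn.ReLU(),
            nn.Linear(d_m, d_m), nn.ReLU(),
            nn.Linear(d_m, d_m), nn.ReLU(),
            nn.Linear(d_m, d_m), nn.ReLU(),
        )

    def forward(self, h0, c, H_B, H_a, H_u):
        h1 = h0 + c                             # add the condition vector
        h2 = h1 + self.attn(h1, H_B)
        h3 = h2 + self.attn(h2, H_a)
        h4 = h3 + self.attn(h3, H_u)
        # h1..h3 act only as supplementary memories, so their gradients are blocked
        r0 = torch.cat([h1.detach(), h2.detach(), h3.detach(), h4], dim=-1)
        return self.mlp(r0)                     # hidden state of the next token
```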
{ "question": [ "Does this approach perform better in the multi-domain or single-domain setting?", "What are the performance metrics used?", "Which datasets are used to evaluate performance?" ], "question_id": [ "ed7a3e7fc1672f85a768613e7d1b419475950ab4", "72ceeb58e783e3981055c70a3483ea706511fac3", "9bfa46ad55136f2a365e090ce585fc012495393c" ], "nlp_background": [ "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "single-domain setting", "evidence": [ "FLOAT SELECTED: Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model (Mrksic et al., 2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ (Budzianowski et al., 2018)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model (Mrksic et al., 2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ (Budzianowski et al., 2018)." ] } ], "annotation_id": [ "1719244c479765727dd6d5390c98e27c6542dcf3" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "joint goal accuracy" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As a convention, the metric of joint goal accuracy is used to compare our model to previous work. The joint goal accuracy only regards the model making a successful belief state prediction if all of the slots and values predicted are exactly matched with the labels provided. This metric gives a strict measurement that tells how often the DST module will not propagate errors to the downstream modules in a dialogue system. In this work, the model with the highest joint accuracy on the validation set is evaluated on the test set for the test joint accuracy measurement." ], "highlighted_evidence": [ "As a convention, the metric of joint goal accuracy is used to compare our model to previous work." ] } ], "annotation_id": [ "6bb60dc60817a1c2173999d45e505239c8d445c6" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "the single domain dataset, WoZ2.0 ", "the multi-domain dataset, MultiWoZ" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. 
Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. The statistics of these two datasets are shown in Table 2 ." ], "highlighted_evidence": [ "We first test our model on the single domain dataset, WoZ2.0 BIBREF19 . It consists of 1,200 dialogues from the restaurant reservation domain with three pre-defined slots: food, price range, and area. Since the name slot rarely occurs in the dataset, it is not included in our experiments, following previous literature BIBREF3 , BIBREF20 . Our model is also tested on the multi-domain dataset, MultiWoZ BIBREF9 . It has a more complex ontology with 7 domains and 25 predefined slots. Since the combined slot-value pairs representation of the belief states has to be applied for the model with $O(n)$ ITC, the total number of slots is 35. " ] } ], "annotation_id": [ "072d9a6fe27796947c3aeae2420eccb567a8da36" ], "worker_id": [ "5d0eb97e8e840e171f73b7642c2c89dd3984157b" ] } ] }
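The metric named in these answers, joint goal accuracy, is an exact-match criterion over the full belief state at every turn. A small sketch of how it is typically computed follows; the data format is an assumption for illustration, not the paper's code.

```python
def joint_goal_accuracy(predictions, labels):
    """Joint goal accuracy: a turn counts as correct only if every
    (domain, slot, value) in the predicted belief state exactly matches the label.

    predictions, labels: lists of sets of (domain, slot, value) tuples, one per turn.
    """
    assert len(predictions) == len(labels)
    correct = sum(1 for pred, gold in zip(predictions, labels) if pred == gold)
    return correct / len(labels) if labels else 0.0

# usage sketch with made-up turns
pred = [{("train", "arrive by", "20:45")},
        {("train", "arrive by", "20:45"), ("hotel", "area", "north")}]
gold = [{("train", "arrive by", "20:45")},
        {("train", "arrive by", "20:45"), ("hotel", "area", "east")}]
print(joint_goal_accuracy(pred, gold))   # 0.5: the second turn has one wrong value
```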
{ "caption": [ "Table 1: The Inference Time Complexity (ITC) of previous DST models. The ITC is calculated based on how many times inference must be performed to complete a prediction of the belief state in a dialogue turn, where m is the number of values in a pre-defined ontology list and n is the number of slots.", "Figure 1: An example dialogue from the multi-domain dataset, MultiWOZ. At each turn, the DST needs to output the belief state, a nested tuple of (DOMAIN, (SLOT, VALUE)), immediately after the user utterance ends. The belief state is accumulated as the dialogue proceeds. Turns are separated by black lines.", "Figure 2: An example in the WoZ2.0 dataset that invalidates the single value assumption. It is impossible for the system to generate the sample response about the Chinese restaurant with the original belief state (food, seafood). A correction could be made as (food, seafood > chinese) which has multiple values and a logical operator “>”.", "Figure 3: The general model architecture of the Hierarchical Sequence Generation Network. The Conditional Memory Relation (CMR) decoders (gray) share all of their parameters.", "Figure 4: The general structure of the Conditional Memory Relation Decoder. The decoder output, s (e.g. “food”), is refilled to the LSTM for the decoding of the next step. The blue lines in the figure means that the gradients are blocked during the back propagation stage.", "Table 2: The statistics of the WoZ2.0 and the MultiWoZ datasets.", "Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model (Mrksic et al., 2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ (Budzianowski et al., 2018).", "Table 4: The ablation study on the WoZ2.0 dataset with the joint goal accuracy on the test set. For “- Hierachical-Attn”, we remove the residual connections between the attention modules in the CMR decoders and all the attention memory access are based on the output from the LSTM. For “- MLP”, we further replace the MLP with a single linear layer with the nonlinear activation.", "Table 5: The ablation study on the MultiWoZ dataset with the joint domain accuracy (JD Acc.), joint domain-slot accuracy (JDS Acc.) and joint goal accuracy (JG Acc.) on the test set. For “- ShareParam”, we remove the parameter sharing mechanism on the encoders and the attention module. For “- Order”, we further arrange the order of the slots according to its global frequencies in the training set instead of the local frequencies given the domain it belongs to. For “- Nested”, we do not generate domain sequences but generate combined slot sequences which combines the domain and the slot together. For “- BlockGrad”, we further remove the gradient blocking mechanism in the CMR decoder.", "Figure 5: An example belief prediction of our model on the MultiWoZ test set. The attention scores for belief states, system transcript and user utterance in CMRD is visualized on the right. Each row corresponds to the attention score of the generation step of a particular slot under the ‘train’ domain." ], "file": [ "1-Table1-1.png", "2-Figure1-1.png", "2-Figure2-1.png", "3-Figure3-1.png", "5-Figure4-1.png", "6-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png", "7-Table5-1.png", "9-Figure5-1.png" ] }
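Relating to the Inference Time Complexity column described in the Table 1 caption above, here is a small sketch of the theoretical Inference Time Multiplier defined in the experimental-setting section. The helper and its inputs are illustrative; only the slot counts (3 for WoZ2.0, 35 combined domain-slots for MultiWoZ) come from the text, and the remaining dataset statistics are left as unknowns.

```python
# K = h(t) * h(s) * h(n) * h(m), with h(x) = 1 when the model's inference time does
# not depend on x, and x_d2 / x_d1 otherwise (d1 = WoZ2.0, d2 = MultiWoZ).

def itm(ratios, depends_on):
    """ratios: {variable: x_d2 / x_d1}; depends_on: variables the ITC depends on."""
    k = 1.0
    for var, ratio in ratios.items():
        k *= ratio if var in depends_on else 1.0
    return k

slot_ratio = 35 / 3                      # n_d2 / n_d1 from the dataset statistics
print(itm({"n": slot_ratio}, set()))     # O(1) model: slot count contributes 1.0
print(itm({"n": slot_ratio}, {"n"}))     # O(n) model: ~11.7x from slots alone
# The reported multipliers (2.15, 25.1, 1143) additionally fold in the turn, token
# and value statistics of the two datasets, which are not reproduced here.
```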
1906.00180
Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization
Can neural nets learn logic? We approach this classic question with current methods, and demonstrate that recurrent neural networks can learn to recognize first-order logical entailment relations between expressions. We define an artificial language in first-order predicate logic, generate a large dataset of sample 'sentences', and use an automatic theorem prover to infer the relation between random pairs of such sentences. We describe a Siamese neural architecture trained to predict the logical relation, and experiment with recurrent and recursive networks. Siamese Recurrent Networks are surprisingly successful at the entailment recognition task, reaching near-perfect performance on novel sentences (consisting of known words), and even outperforming recursive networks. We report a series of experiments to test the ability of the models to perform compositional generalization. In particular, we study how they deal with sentences of unseen length, and sentences containing unseen words. We show that set-ups using LSTMs and GRUs obtain high scores on these tests, demonstrating a form of compositionality.
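A minimal PyTorch sketch of the kind of Siamese set-up this abstract describes: one recurrent encoder with shared weights applied to both sentences, a comparison layer over the concatenated sentence vectors, and a classifier over entailment relations (the paper works with a seven-class relation inventory). Sizes, names, and the GRU choice are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SiameseEntailmentNet(nn.Module):
    # Shared recurrent encoder -> comparison layer -> softmax classifier.
    def __init__(self, vocab_size, emb_dim=50, hidden=128, n_relations=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.compare = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh())
        self.classify = nn.Linear(hidden, n_relations)

    def encode(self, tokens):                      # tokens: (batch, seq_len) int ids
        _, h_n = self.encoder(self.embed(tokens))  # h_n: (1, batch, hidden)
        return h_n.squeeze(0)

    def forward(self, left, right):
        pair = torch.cat([self.encode(left), self.encode(right)], dim=-1)
        return self.classify(self.compare(pair))   # logits over entailment relations

# usage: logits = model(left_ids, right_ids); loss = nn.CrossEntropyLoss()(logits, labels)
```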
{ "section_name": [ "Introduction & related work", "Task definition & data generation", "Learning models", "Results", "Zero-shot, compositional generalization", "Unseen lengths", "Unseen words", "Discussion & Conclusions" ], "paragraphs": [ [ "State-of-the-art models for almost all popular natural language processing tasks are based on deep neural networks, trained on massive amounts of data. A key question that has been raised in many different forms is to what extent these models have learned the compositional generalizations that characterize language, and to what extent they rely on storing massive amounts of exemplars and only make `local' generalizations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . This question has led to (sometimes heated) debates between deep learning enthusiasts that are convinced neural networks can do almost anything, and skeptics that are convinced some types of generalization are fundamentally beyond reach for deep learning systems, pointing out that crucial tests distinguishing between generalization and memorization have not been applied.", "In this paper, we take a pragmatic perspective on these issues. As the target for learning we use entailment relations in an artificial language, defined using first order logic (FOL), that is unambiguously compositional. We ask whether popular deep learning methods are capable in principle of acquiring the compositional rules that characterize it, and focus in particular on recurrent neural networks that are unambiguously `connectionist': trained recurrent nets do not rely on symbolic data and control structures such as trees and global variable binding, and can straightforwardly be implemented in biological networks BIBREF8 or neuromorphic hardware BIBREF9 . We report positive results on this challenge, and in the process develop a series of tests for compositional generalization that address the concerns of deep learning skeptics.", "The paper makes three main contributions. First, we develop a protocol for automatically generating data that can be used in entailment recognition tasks. Second, we demonstrate that several deep learning architectures succeed at one such task. Third, we present and apply a number of experiments to test whether models are capable of compositional generalization." ], [ "The data generation process is inspired by BIBREF13 : an artificial language is defined, sentences are generated according to its grammar and the entailment relation between pairs of such sentences is established according to a fixed background logic. However, our language is significantly more complex, and instead of natural logic we use FOL." ], [ "Our main model is a recurrent network, sketched in Figure 4 . It is a so-called `Siamese' network because it uses the same parameters to process the left and the right sentence. The upper part of the model is identical to BIBREF13 's recursive networks. It consists of a comparison layer and a classification layer, after which a softmax function is applied to determine the most probable target class. The comparison layer takes the concatenation of two sentence vectors as input. The number of cells equals the number of words, so it differs per sentence.", "Our set-up resembles the Siamese architecture for learning sentence similarity of BIBREF25 and the LSTM classifier described in BIBREF18 . In the diagram, the dashed box indicates the location of an arbitrary recurrent unit. We consider SRN BIBREF26 , GRU BIBREF27 and LSTM BIBREF28 ." 
], [ "Training and testing accuracies after 50 training epochs, averaged over five different model runs, are shown in Table UID18 . All recurrent models outperform the summing baseline. Even the simplest recurrent network, the SRN, achieves higher training and testing accuracy scores than the tree-shaped matrix model. The GRU and LSTM even beat the tensor model. The LSTM obtains slightly lower scores than the GRU, which is unexpected given its more complex design, but perhaps the current challenge does not require separate forget and input gates. For more insight into the types of errors made by the best-performing (GRU-based) model, we refer to the confusion matrices in Appendix \"Error statistics\" .", "The consistently higher testing accuracy provides evidence that the recurrent networks are not only capable of recognizing FOL entailment relations between unseen sentences. They can also outperform the tree-shaped models on this task, although they do not use any of the symbolic structure that seemed to explain the success of their recursive predecessors. The recurrent classifiers have learned to apply their own strategies, which we will investigate in the remainder of this paper." ], [ "Compositionality is the ability to interpret and generate a possibly infinite number of constructions from known constituents, and is commonly understood as one of the fundamental aspects of human learning and reasoning ( BIBREF30 , BIBREF31 ). It has often been claimed that neural networks operate on a merely associative basis, lacking the compositional capacities to develop systematicity without an abundance of training data. See e.g. BIBREF1 , BIBREF2 , BIBREF32 . Especially recurrent models have recently been regarded quite sceptically in this respect, following the negative results established by BIBREF3 and BIBREF4 . Their research suggests that recurrent networks only perform well provided that there are no systematic discrepancies between train and test data, whereas human learning is robust with respect to such differences thanks to compositionality.", "In this section, we report more positive results on compositional reasoning of our Siamese networks. We focus on zero-shot generalization: correct classification of examples of a type that has not been observed before. Provided that atomic constituents and production rules are understood, compositionality does not require that abundantly many instances embodying a semantic category are observed. We will consider in turn what set-up is required to demonstrate zero-shot generalization to unseen lengths, and to generalization to sentences composed of novel words." ], [ "We test if our recurrent models are capable of generalization to unseen lengths. Neural models are often considered incapable of such generalization, allegedly because they are limited to the training space BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . We want to test if this is the case for the recurrent models studied in this paper. The language $\\mathcal {L}$ licenses a heavily constrained set of grammatical configurations, but it does allow the sentence length to vary according to the number of included negations. A perfectly compositional model should be able to interpret statements containing any number of negations, on condition that it has seen an instantiation at least once at each position where this is allowed.", "In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. 
As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table UID19 .", "All recurrent models obtain (near-)perfect training accuracy scores. What happens on the test set is interesting. It turns out that the GRU and LSTM can generalize from lengths 5, 7 and 8 to 6 and 9 very well, while the SRN faces serious difficulties. It seems that training on lengths 5, 7 and 8, and thereby skipping length 6, enables the GRU and LSTM to generalize to unseen sentence lengths 6 and 9. Training on lengths 5-7 and testing on lengths 8-9 yields low test scores for all models. The GRU and LSTM gates appear to play a crucial role, because the results show that the SRN does not have this capacity at all." ], [ "In the next experiment, we assess whether our GRU-based model, which performed best in the preceding experiments, is capable of zero-shot generalization to sentences with novel words. The current set-up cannot deal with unknown words, so instead of randomly initializing an embedding matrix that is updated during training, we use pretrained, 50-dimensional GloVe embeddings BIBREF37 that are kept constant. Using GloVe embeddings, the GRU model obtains a mean training accuracy of 100.0% and a testing accuracy of 95.9% (averaged over five runs). The best-performing model (with 100.0% training and 97.1% testing accuracy) is used in the following zero-shot experiments.", "One of the most basic relations on the level of lexical semantics is synonymy, which holds between words with equivalent meanings. In the language $\\mathcal {L}$ , a word can be substituted with one of its synonyms without altering the entailment relation assigned to the sentence pairs that contain it. If the GRU manages to perform well on such a modified data set after receiving the pretrained GloVe embedding of the unseen word, this is a first piece of evidence for its zero-shot generalization skills. We test this for several pairs of synonymous words. The best-performing GRU is first evaluated with respect to the fragment of the test data containing the original word $w$ , and consequently with respect to that same fragment after replacing the original word with its synonym $s(w)$ . The pairs of words, the cosine distance $cos\\_dist(w,s(w))$ between their GloVe embeddings and the obtained results are listed in Table 6 .", "For the first three examples in Table 6 , substitution only decreases testing accuracy by a few percentage points. Apparently, the word embeddings of the synonyms encode the lexical properties that the GRU needs to recognize that the same entailment relations apply to the sentence pairs. This does not prove that the model has distilled essential information about hyponymy from the GloVe embeddings. It could also be that the word embeddings of the replacement words are geometrically very similar to the originals, so that it is an algebraic necessity that the same results arise. However, this suspicion is inconsistent with the result of changing `hate' into `detest'. The cosine distance between these words is 0.56, so according to this measure their vectors are more similar than those representing `love' and `adore' (which have a cosine distance of 0.57). Nonetheless, replacing `hate' with `detest' confuses the model, whereas substitution of `love' into `adore' only decreases testing accuracy by 4.5 percentage points. This illustrates that robustness of the GRU in this respect is not a matter of simple vector similarity. 
In those cases where substitution into synonyms does not confuse the model it must have recognized a non-trivial property of the new word embedding that licenses particular inferences.", "In our next experiment, we replace a word not by its synonym, but by a word that has the same semantics in the context of artificial language $\\mathcal {L}$ . We thus consider pairs of words that can be substituted with each other without affecting the entailment relation between any pair of sentences in which they feature. We call such terms `ontological twins'. Technically, if $\\odot $ is an arbitrary lexical entailment relation and $\\mathcal {O}$ is an ontology, then $w$ and $v$ are ontological twins if and only if $w, v \\in \\mathcal {O}$ and for all $u \\in \\mathcal {O}$ , if $u \\notin \\lbrace w,v \\rbrace $ then $w \\odot u \\Leftrightarrow v \\odot u$ . This trivially applies to self-identical terms or synonyms, but in the strictly defined hierarchy of $\\mathcal {L}$ it is also the case for pairs of terms $\\odot $0 that maintain the same lexical entailment relations to all other terms in the taxonomy.", "Examples of ontological twins in the taxonomy of nouns $\\mathcal {N}^{\\mathcal {L}}$ are `Romans' and `Venetians' . This can easily be verified in the Venn diagram of Figure 1 by replacing `Romans' with `Venetians' and observing that the same hierarchy applies. The same holds for e.g. `Germans' and `Polish' or for `children' and `students'. For several such word-twin pairs the GRU is evaluated with respect to the fragment of the test data containing the original word $w$ , and with respect to that same fragment after replacing the original word with ontological twin $t(w)$ . Results are shown in Table 7 .", "The examples in Table 7 suggest that the best-performing GRU is largely robust with respect to substitution into ontological twins. Replacing `Romans' with other urban Italian demonyms hardly affects model accuracy on the modified fragment of the test data. As before, there appears to be no correlation with vector similarity because the cosine distance between the different twin pairs has a much higher variation than the corresponding accuracy scores. `Germans' can be changed into `Polish' without significant deterioration, but substitution with `Dutch' greatly decreases testing accuracy. The situation is even worse for `Spanish'. Again, cosine similarity provides no explanation - `Spanish' is still closer to `Germans' than `Neapolitans' to `Romans'. Rather, the accuracy appears to be negatively correlated with the geographical distance between the national demonyms. After replacing `children' with `students', `women' or `linguists', testing scores are still decent.", "So far, we replaced individual words in order to assess whether the GRU can generalize from the vocabulary to new notions that have comparable semantics in the context of this entailment recognition task. The examples have illustrated that the model tends to do this quite well. In the last zero-shot learning experiment, we replace sets of nouns instead of single words, in order to assess the flexibility of the relational semantics that our networks have learned. Formally, the replacement can be regarded as a function $r$ , mapping words $w$ to substitutes $r(w)$ . Not all items have to be replaced. For an ontology $\\mathcal {O}$ , the function $r$ must be such that for any $w, v \\in \\mathcal {O}$ and lexical entailment relation $\\odot $ , $w \\odot v \\Leftrightarrow r(w) \\odot r(v)$ . 
The result of applying $r$ can be called an `alternative hierarchy'.", "An example of an alternative hierarchy is the result of the replacement function $r_1$ that maps `Romans' to `Parisians' and `Italians' to `French'. Performing this substitution in the Venn diagram of Figure 1 shows that the taxonomy remains structurally intact. The best-performing GRU is evaluated on the fragment of the test data containing `Romans' or `Italians', and consequently on the same fragment after implementing replacement $r_1$ and providing the model with the GloVe embeddings of the unseen words. Replacement $r_1$ is incrementally modified up until replacement $r_4$ , which substitutes all nouns in $\\mathcal {N}^{\\mathcal {L}}$ . The results of applying $r_1$ to $r_4$ are shown in Table 8 .", "The results are positive: the GRU obtains 86.7% accuracy even after applying $r_4$ , which substitutes the entire ontology $\\mathcal {N}^{\\mathcal {L}}$ so that no previously encountered nouns are present in the test set anymore, although the sentences remain thematically somewhat similar to the original sentences. Testing scores are above 87% for the intermediate substitutions $r_1$ to $r_3$ . This outcome clearly shows that the classifier does not depend on a strongly customized word vector distribution in order to recognize higher-level entailment relations. Even if all nouns are replaced by alternatives with embeddings that have not been witnessed or optimized beforehand, the model obtains a high testing accuracy. This establishes obvious compositional capacities, because familiarity with structure and information about lexical semantics in the form of word embeddings are enough for the model to accommodate configurations of unseen words.", "What happens when we consider ontologies that have the same structure, but are thematically very different from the original ontology? Three such alternative hierarchies are considered: $r_{animals}$ , $r_{religion}$ and $r_{America}$ . Each of these functions relocalizes the noun ontology in a totally different domain of discourse, as indicated by their names. Table 9 specifies the functions and their effect.", "Testing accuracy decreases drastically, which indicates that the model is sensitive to the changing topic. Variation between the scores obtained after the three transformations is limited. Although they are much lower than before, they are still far above chance level for a seven-class problem. This suggests that the model is not at a complete loss as to the alternative noun hierarchies. Possibly, including a few relevant instances during training could already improve the results." ], [ "We established that our Siamese recurrent networks (with SRN, GRU or LSTM cells) are able to recognize logical entailment relations without any a priori cues about syntax or semantics of the input expressions. Indeed, some of the recurrent set-ups even outperform tree-shaped networks, whose topology is specifically designed to deal with such tasks. This indicates that recurrent networks can develop representations that can adequately process a formal language with a nontrivial hierarchical structure. 
The formal language we defined did not exploit the full expressive power of first-order predicate logic; nevertheless by using standard first-order predicate logic, a standard theorem prover, and a set-up where the training set only covers a tiny fraction of the space of possible logical expressions, our experiments avoid the problems observed in earlier attempts to demonstrate logical reasoning in recurrent networks.", "The experiments performed in the last few sections moreover show that the GRU and LSTM architectures exhibit at least basic forms of compositional generalization. In particular, the results of the zero-shot generalization experiments with novel lengths and novel words cannot be explained with a `memorize-and-interpolate' account, i.e. an account of the working of deep neural networks that assumes all they do is store enormous training sets and generalize only locally. These results are relevant pieces of evidence in the decades-long debate on whether or not connectionist networks are fundamentally able to learn compositional solutions. Although we do not have the illusion that our work will put this debate to an end, we hope that it will help bring deep learning enthusiasts and skeptics a small step closer." ] ] }
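The zero-shot experiments described above all follow the same recipe: select the test fragment containing a word, replace the word with an unseen synonym or "ontological twin", supply the substitute's pretrained embedding, and compare accuracies before and after. A schematic sketch follows; the model interface, the `embed` function, and the data format are assumptions for illustration only.

```python
# Each test item is (left_tokens, right_tokens, relation_label).

def substitute(pair, w, sw):
    left, right, label = pair
    repl = lambda sent: [sw if tok == w else tok for tok in sent]
    return repl(left), repl(right), label

def accuracy(model, pairs, embed):
    correct = 0
    for left, right, label in pairs:
        pred = model.predict(embed(left), embed(right))   # assumed model interface
        correct += int(pred == label)
    return correct / len(pairs)

def zero_shot_eval(model, test_pairs, embed, w, sw):
    # restrict to the fragment of the test set that mentions the original word
    subset = [p for p in test_pairs if w in p[0] or w in p[1]]
    original = accuracy(model, subset, embed)
    replaced = accuracy(model, [substitute(p, w, sw) for p in subset], embed)
    return original, replaced   # e.g. ('love' -> 'adore'): only a small drop expected
```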
{ "question": [ "How does the automatic theorem prover infer the relation?", "If these model can learn the first-order logic on artificial language, why can't it lear for natural language?", "How many samples did they generate for the artificial language?" ], "question_id": [ "42812113ec720b560eb9463ff5e74df8764d1bff", "4f4892f753b1d9c5e5e74c7c94d8c9b6ef523e7b", "f258ada8577bb71873581820a94695f4a2c223b3" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "fbf076324c189bbfe7b495126bb96ec2d2615877" ], "worker_id": [ "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "6d770b8b216014237faef17fcf6724d7bec052d4" ], "worker_id": [ "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "70,000", "evidence": [ "In a new experiment, we train the models on pairs of sentences with length 5, 7 or 8, and test on pairs of sentences with lengths 6 or 9. As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively. Results are shown in Table UID19 ." ], "highlighted_evidence": [ "As before, the training and test sets contain some 30,000 and 5,000 sentence pairs, respectively" ] } ], "annotation_id": [ "07490d0181eb9040b4d19a9a8180db5dfb790df3" ], "worker_id": [ "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86" ] } ] }
{ "caption": [ "Figure 1: Venn diagrams visualizing the taxonomy of (a) nouns NL and (b) verbs VL in L.", "Table 3: FOL axiom representations of lexical entailment relations. For definition of relations, see Table 2.", "Figure 3: Visualization of the general recurrent model. The region in the dashed box represents any recurrent cell, which is repeatedly applied until the final sentence vector is returned.", "Table 5: Accuracy scores on the FOL inference task for models trained on pairs of sentences with lengths 5, 7 or 8 and tested on pairs of sentences with lengths 6 or 9. Mean and standard deviation over five runs.", "Table 6: Effect on best-performing GRU of replacing words w by unseen synonyms s(w) in the test set and providing the model with the corresponding GloVe embedding.", "Table 7: Effect on best-performing GRU of replacing words w by unseen ontological twins t(w) in the test set and providing the model with the corresponding GloVe embedding.", "Table 8: Effect on best-performing GRU of replacing noun ontology NL with alternative hierarchies as per the replacement functions r1 to r4. Vertical dots indicate that cell entries do not change on the next row.", "Table 9: Effect on best-performing GRU of replacing noun ontology NL with alternative hierarchies as per the replacement functions ranimals, rreligion and rAmerica. Accuracy is measured on the test set after applying the respective replacement functions.", "Figure 4: Histogram showing the relative frequency of each entailment relation in the train and test set.", "Figure 5: Confusion matrices of the best-performing GRU with respect to the test set. Rows represent targets, columns predictions. (a) row-normalized results for all test instances. (b) unnormalized results for misclassified test instances. Clearly, most errors are due to unrecognized or wrongly attributed independence." ], "file": [ "2-Figure1-1.png", "3-Table3-1.png", "4-Figure3-1.png", "5-Table5-1.png", "6-Table6-1.png", "7-Table7-1.png", "8-Table8-1.png", "8-Table9-1.png", "12-Figure4-1.png", "12-Figure5-1.png" ] }
1806.02847
A Simple Method for Commonsense Reasoning
Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset~\cite{levesque2011winograd}. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabeled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.
{ "section_name": [ "Introduction", "Related Work", "Methods", "Experimental settings", "Main results", "The first challenge in 2016: PDP-60", "Winograd Schema Challenge", "Customized training data for Winograd Schema Challenge", "Discovery of special words in Winograd Schema", "Partial scoring is better than full scoring.", "Importance of training corpus", "Conclusion", "Recurrent language models", "Data contamination in CommonCrawl" ], "paragraphs": [ [ "Although deep neural networks have achieved remarkable successes (e.g., BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ), their dependence on supervised learning has been challenged as a significant weakness. This dependence prevents deep neural networks from being applied to problems where labeled data is scarce. An example of such problems is common sense reasoning, such as the Winograd Schema Challenge BIBREF0 , where the labeled set is typically very small, on the order of hundreds of examples. Below is an example question from this dataset:", "Although it is straightforward for us to choose the answer to be \"the trophy\" according to our common sense, answering this type of question is a great challenge for machines because there is no training data, or very little of it.", "In this paper, we present a surprisingly simple method for common sense reasoning with Winograd schema multiple choice questions. Key to our method is th e use of language models (LMs), trained on a large amount of unlabeled data, to score multiple choice questions posed by the challenge and similar datasets. More concretely, in the above example, we will first substitute the pronoun (\"it\") with the candidates (\"the trophy\" and \"the suitcase\"), and then use LMs to compute the probability of the two resulting sentences (\"The trophy doesn’t fit in the suitcase because the trophy is too big.\" and \"The trophy doesn’t fit in the suitcase because the suitcase is too big.\"). The substitution that results in a more probable sentence will be the correct answer.", "A unique feature of Winograd Schema questions is the presence of a special word that decides the correct reference choice. In the above example, \"big\" is this special word. When \"big\" is replaced by \"small\", the correct answer switches to \"the suitcase\". Although detecting this feature is not part of the challenge, further analysis shows that our system successfully discovers this special word to make its decisions in many cases, indicating a good grasp of commonsense knowledge." ], [ "Unsupervised learning has been used to discover simple commonsense relationships. For example, Mikolov et al. BIBREF15 , BIBREF16 show that by learning to predict adjacent words in a sentence, word vectors can be used to answer analogy questions such as: Man:King::Woman:?. Our work uses a similar intuition that language modeling can naturally capture common sense knowledge. The difference is that Winograd Schema questions require more contextual information, hence our use of LMs instead of just word vectors.", "Neural LMs have also been applied successfully to improve downstream applications BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , researchers have shown that pre-trained LMs can be used as feature representations for a sentence, or a paragraph to improve NLP applications such as document classification, machine translation, question answering, etc. 
The combined evidence suggests that LMs trained on a massive amount of unlabeled data can capture many aspects of natural language and the world's knowledge, especially commonsense information.", "Previous attempts on solving the Winograd Schema Challenge usually involve heavy utilization of annotated knowledge bases, rule-based reasoning, or hand-crafted features BIBREF21 , BIBREF22 , BIBREF23 . In particular, Rahman and Ng BIBREF24 employ human annotators to build more supervised training data. Their model utilizes nearly 70K hand-crafted features, including querying data from Google Search API. Sharma et al. BIBREF25 rely on a semantic parser to understand the question, query texts through Google Search, and reason on the graph produced by the parser. Similarly, Schüller BIBREF23 formalizes the knowledge-graph data structure and a reasoning process based on cognitive linguistics theories. Bailey et al. BIBREF22 introduces a framework for reasoning, using expensive annotated knowledge bases as axioms.", "The current best approach makes use of the skip-gram model to learn word representations BIBREF26 . The model incorporates several knowledge bases to regularize its training process, resulting in Knowledge Enhanced Embeddings (KEE). A semantic similarity scorer and a deep neural network classifier are then combined on top of KEE to predict the answers. The final system, therefore, includes both supervised and unsupervised models, besides three different knowledge bases. In contrast, our unsupervised method is simpler while having significantly higher accuracy. Unsupervised training is done on text corpora which can be cheaply curated.", "Using language models in reading comprehension tests also produced many great successes. Namely Chu et al. BIBREF27 used bi-directional RNNs to predict the last word of a passage in the LAMBADA challenge. Similarly, LMs are also used to produce features for a classifier in the Store Close Test 2017, giving best accuracy against other methods BIBREF28 . In a broader context, LMs are used to produce good word embeddings, significantly improved a wide variety of downstream tasks, including the general problem of question answering BIBREF19 , BIBREF29 ." ], [ "We first substitute the pronoun in the original sentence with each of the candidate choices. The problem of coreference resolution then reduces to identifying which substitution results in a more probable sentence. By reframing the problem this way, language modeling becomes a natural solution by its definition. Namely, LMs are trained on text corpora, which encodes human knowledge in the form of natural language. During inference, LMs are able to assign probability to any given text based on what they have learned from training data. An overview of our method is shown in Figure 1 .", "Suppose the sentence $S$ of $n$ consecutive words has its pronoun to be resolved specified at the $k^{th}$ position: $S = \\lbrace w_1, .., w_{k-1}, w_{k} \\equiv p, w_{k+1}, .., w_{n}\\rbrace $ . We make use of a trained language model $P_\\theta (w_t | w_{1}, w_2, .., w_{t-1})$ , which defines the probability of word $w_t$ conditioned on the previous words $w_1, ..., w_{t-1}$ . The substitution of a candidate reference $c$ in to the pronoun position $k$ results in a new sentence $S_{w_k\\leftarrow c}$ (we use notation $n$0 to mean that word $n$1 is substituted by candidate $n$2 ). 
We consider two different ways of scoring the substitution:", "which scores how probable the resulting full sentence is, and", "which scores how probable the part of the resulting sentence following $c$ is, given its antecedent. In other words, it only scores a part $S_{w_k\\leftarrow c}$ conditioned on the rest of the substituted sentence. An example of these two scores is shown in Table 1 . In our experiments, we find that partial scoring strategy is generally better than the naive full scoring strategy." ], [ "In this section we describe tests for commonsense reasoning and the LMs used to solve these tasks. We also detail training text corpora used in our experiments." ], [ "Our experiments start with testing LMs trained on all text corpora with PDP-60 and WSC-273. Next, we show that it is possible to customize training data to obtain even better results." ], [ "We first examine unsupervised single-model resolvers on PDP-60 by training one character-level and one word-level LM on the Gutenberg corpus. In Table 2 , these two resolvers outperform previous results by a large margin. For this task, we found full scoring gives better results than partial scoring. In Section \"Partial scoring is better than full scoring.\" , we provide evidences that this is an atypical case due to the very small size of PDP-60.", "Next, we allow systems to take in necessary components to maximize their test performance. This includes making use of supervised training data that maps commonsense reasoning questions to their correct answer. Here we simply train another three variants of LMs on LM-1-Billion, CommonCrawl, and SQuAD and ensemble all of them. As reported in Table 3 , this ensemble of five unsupervised models outperform the best system in the 2016 competition (58.3%) by a large margin. Specifically, we achieve 70.0% accuracy, better than the more recent reported results from Quan Liu et al (66.7%) BIBREF26 , who makes use of three knowledge bases and a supervised deep neural network." ], [ "On the harder task WSC-273, our single-model resolvers also outperform the current state-of-the-art by a large margin, as shown in Table 4 . Namely, our word-level resolver achieves an accuracy of 56.4%. By training another 4 LMs, each on one of the 4 text corpora LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and add to the previous ensemble, we are able to reach 61.5%, nearly 10% of accuracy above the previous best result. This is a drastic improvement considering this previous best system outperforms random guess by only 3% in accuracy.", "This task is more difficult than PDP-60. First, the overall performance of all competing systems are much lower than that of PDP-60. Second, incorporating supervised learning and expensive annotated knowledge bases to USSM provides insignificant gain this time (+3%), comparing to the large gain on PDP-60 (+19%)." ], [ "As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 , BIBREF25 , we also explore using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions. 
The score for each document is a weighted sum of $F_1(n)$ scores when counting overlapping n-grams: $Similarity\\_Score_{document} = \\frac{\\sum _{n=1}^4nF_1(n)}{\\sum _{n=1}^4n}$ ", "The top 0.1% of highest ranked documents is chosen as our new training corpus. Details of the ranking is shown in Figure 2 . This procedure resulted in nearly 1,000,000 documents, with the highest ranking document having a score of $8\\times 10^{-2}$ , still relatively small to a perfect score of $1.0$ . We name this dataset STORIES since most of the constituent documents take the form of a story with long chain of coherent events.", "We train four different LMs on STORIES and add them to the previous ensemble of 10 LMs, resulting in a gain of 2% accuracy in the final system as shown in Table 5 . Remarkably, single models trained on this corpus are already extremely strong, with a word-level LM achieving 62.6% accuracy, even better than the ensemble of 10 models previously trained on 4 other text corpora (61.5%)." ], [ "We introduce a method to potentially detect keywords at which our proposed resolvers make decision between the two candidates $c_{correct}$ and $c_{incorrect}$ . Namely, we look at the following ratio: $q_t = \\frac{P_\\theta (w_t | w_1, w_2, ..., w_{t-1}; w_k \\leftarrow c_{correct})}{P_\\theta (w_t | w_1, w_2, ..., w_{t-1}; w_k \\leftarrow c_{incorrect})}$ ", "Where $1 \\le t \\le n$ for full scoring, and $k +1 \\le t \\le n$ for partial scoring. It follows that the choice between $c_{correct}$ or $c_{incorrect}$ is made by the value of $Q = \\prod _tq_t$ being bigger than $1.0$ or not. By looking at the value of each individual $q_t$ , it is possible to retrieve words with the largest values of $q_t$ and hence most responsible for the final value of $Q$ .", "We visualize the probability ratios $q_t$ to have more insights into the decisions of our resolvers. Figure 3 displays a sample of incorrect decisions made by full scoring and is corrected by partial scoring. Interestingly, we found $q_t$ with large values coincides with the special keyword of each Winograd Schema in several cases. Intuitively, this means the LMs assigned very low probability for the keyword after observing the wrong substitution. It follows that we can predict the keyword in each the Winograd Schema question by selecting top word positions with the highest value of $q_t$ .", "For questions with keyword appearing before the reference, we detect them by backward-scoring models. Namely, we ensemble 6 LMs, each trained on one text corpora with word order reversed. This ensemble also outperforms the previous best system on WSC-273 with a remarkable accuracy of 58.2%. Overall, we are able to discover a significant amount of special keywords (115 out of 178 correctly answered questions) as shown in Table 6 . This strongly indicates a correct understanding of the context and a good grasp of commonsense knowledge in the resolver's decision process." ], [ "In this set of experiments, we look at wrong predictions from a word-level LM. With full scoring strategy, we observe that $q_t$ at the pronoun position is most responsible for a very large percentage of incorrect decisions as shown in Figfure 3 and Table 7 . 
For example, with the test \"The trophy cannot fit in the suitcase because it is too big.\", the system might return $c_{incorrect} = $ \"suitcase\" simply because $c_{correct} = $ \"trophy\" is a very rare word in its training corpus and therefore, is assigned a very low probability, overpowering subsequent $q_t$ values.", "Following this reasoning, we apply a simple fix to full scoring by normalizing its score with the unigram count of $c$ : $Score_{full~normalized} = Score_{full} / Count(c)$ . Partial scoring, on the other hand, disregards $c$ altogether. As shown in Figure 4 , this normalization fixes full scoring in 9 out of 10 tested LMs on PDP-122. On WSC-273, the result is very decisive as partial scoring strongly outperforms the other two scoring in all cases. Since PDP-122 is a larger superset of PDP-60, we attribute the different behaviour observed on PDP-60 as an atypical case due to its very small size." ], [ "In this set of experiments, we examine the effect of training data on commonsense reasoning test performance. Namely, we train both word-level and character-level LMs on each of the five corpora: LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and STORIES. A held-out dataset from each text corpus is used for early stopping on the corresponding training data.", "To speed up training on these large corpora, we first train the models on the LM-1-Billion text corpus. Each trained model is then divided into three groups of parameters: Embedding, Recurrent Cell, and Softmax. Each of the three is optionally transferred to train the same architectures on CommonCrawl, SQuAD and Gutenberg Books. The best transferring combination is chosen by cross-validation.", "Figure 5 -left and middle show that STORIES always yield the highest accuracy for both types of input processing. We next rank the text corpora based on ensemble performance for more reliable results. Namely, we compare the previous ensemble of 10 models against the same set of models trained on each single text corpus. This time, the original ensemble trained on a diverse set of text corpora outperforms all other single-corpus ensembles including STORIES. This highlights the important role of diversity in training data for commonsense reasoning accuracy of the final system." ], [ "We introduce a simple unsupervised method for Commonsense Reasoning tasks. Key to our proposal are large language models, trained on a number of massive and diverse text corpora. The resulting systems outperform previous best systems on both Pronoun Disambiguation Problems and Winograd Schema Challenge. Remarkably on the later benchmark, we are able to achieve 63.7% accuracy, comparing to 52.8% accuracy of the previous state-of-the-art, who utilizes supervised learning and expensively annotated knowledge bases. We analyze our system's answers and observe that it discovers key features of the question that decides the correct answer, indicating good understanding of the context and commonsense knowledge. We also demonstrated that ensembles of models benefit the most when trained on a diverse set of text corpora.", "We anticipate that this simple technique will be a strong building block for future systems that utilize reasoning ability on commonsense knowledge." ], [ "The base model consists of two layers of Long-Short Term Memory (LSTM) BIBREF31 with 8192 hidden units. The output gate of each LSTM uses peepholes and a projection layer to reduce its output dimensionality to 1024. 
We perform drop-out on LSTM's outputs with probability 0.25.", "For word inputs, we use an embedding lookup of 800000 words, each with dimension 1024. For character inputs, we use an embedding lookup of 256 characters, each with dimension 16. We concatenate all characters in each word into a tensor of shape (word length, 16) and add to its two ends the <begin of word> and <end of word> tokens. The resulting concatenation is zero-padded to produce a fixed size tensor of shape (50, 16). This tensor is then processed by eight different 1-D convolution (Conv) kernels of different sizes and number of output channels, listed in Table 8 , each followed by a ReLU acitvation. The output of all CNNs are then concatenated and processed by two other fully-connected layers with highway connection that persist the input dimensionality. The resulting tensor is projected down to a 1024-feature vector. For both word input and character input, we perform dropout on the tensors that go into LSTM layers with probability 0.25.", "We use a single fully-connected layer followed by a $Softmax$ operator to process the LSTM's output and produce a distribution over word vocabulary of size 800K. During training, LM loss is evaluated using importance sampling with negative sample size of 8192. This loss is minimized using the AdaGrad BIBREF37 algorithm with a learning rate of 0.2. All gradients on LSTM parameters and Character Embedding parameters are clipped by their global norm at 1.0. To avoid storing large matrices in memory, we shard them into 32 equal-sized smaller pieces. In our experiments, we used 8 different variants of this base model as listed in Table 9 .", "In Table 10 , we listed all LMs and their training text corpora used in each of the experiments in Section \"Main results\" ." ], [ "Using the similarity scoring technique in section \"Customized training data for Winograd Schema Challenge\" , we observe a large amount of low quality training text on the lower end of the ranking. Namely, these are documents whose content are mostly unintelligible or unrecognized by our vocabulary. Training LMs for commonsense reasoning tasks on full CommonCrawl, therefore, might not be ideal. On the other hand, we detected and removed a portion of PDP-122 questions presented as an extremely high ranked document." ] ] }
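The substitution-and-scoring procedure described in the paper text above (full scoring, partial scoring, and the count-normalized variant) can be written down in a few lines. The sketch below is a minimal illustration, not the authors' implementation: it assumes a word-level language model exposed as a callable logprob(word, context) that returns log P(word | context), and it treats each candidate reference as a single token.

```python
import math
from typing import Callable, List

LogProbFn = Callable[[str, List[str]], float]  # assumed interface: log P(word | context)


def substitute(sentence: List[str], pronoun_pos: int, candidate: str) -> List[str]:
    """Return a copy of `sentence` with the pronoun replaced by `candidate`."""
    out = list(sentence)
    out[pronoun_pos] = candidate
    return out


def full_score(logprob: LogProbFn, sentence: List[str], pronoun_pos: int, candidate: str) -> float:
    """Log-probability of the whole substituted sentence."""
    words = substitute(sentence, pronoun_pos, candidate)
    return sum(logprob(w, words[:t]) for t, w in enumerate(words))


def partial_score(logprob: LogProbFn, sentence: List[str], pronoun_pos: int, candidate: str) -> float:
    """Log-probability of the words after the candidate, conditioned on it and its left context."""
    words = substitute(sentence, pronoun_pos, candidate)
    return sum(logprob(w, words[:t]) for t, w in enumerate(words) if t > pronoun_pos)


def normalized_full_score(logprob: LogProbFn, sentence: List[str], pronoun_pos: int,
                          candidate: str, unigram_count: dict) -> float:
    """Log-space analogue of dividing the full score by the candidate's unigram count."""
    return full_score(logprob, sentence, pronoun_pos, candidate) - math.log(unigram_count[candidate])


def resolve(logprob: LogProbFn, sentence: List[str], pronoun_pos: int,
            candidates: List[str], scorer=partial_score) -> str:
    """Pick the candidate whose substitution the language model finds most probable."""
    return max(candidates, key=lambda c: scorer(logprob, sentence, pronoun_pos, c))


# Hypothetical usage, with `lm` standing in for an actual log-probability function:
# tokens = "The trophy does n't fit in the suitcase because it is too big .".split()
# resolve(lm, tokens, pronoun_pos=9, candidates=["trophy", "suitcase"])
```

Because the scores are kept in log space, the normalization fix of dividing the full score by Count(c) becomes a subtraction of log Count(c), which is how the normalized variant is written here.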
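The weighted n-gram F1 used to rank CommonCrawl documents against the challenge questions is equally compact. The version below is a sketch under assumptions the paper leaves unspecified (whitespace tokenization and clipped n-gram counts for the overlap); only the weighting formula itself is taken from the text above.

```python
from collections import Counter
from typing import List


def ngram_counts(tokens: List[str], n: int) -> Counter:
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def f1_overlap(doc_tokens: List[str], query_tokens: List[str], n: int) -> float:
    doc, query = ngram_counts(doc_tokens, n), ngram_counts(query_tokens, n)
    overlap = sum((doc & query).values())  # clipped count of shared n-grams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(doc.values())
    recall = overlap / sum(query.values())
    return 2 * precision * recall / (precision + recall)


def similarity_score(document: str, questions: str, max_n: int = 4) -> float:
    """Weighted sum of n * F1(n) for n = 1..4, normalized by the sum of weights."""
    doc_tokens, query_tokens = document.split(), questions.split()
    numerator = sum(n * f1_overlap(doc_tokens, query_tokens, n) for n in range(1, max_n + 1))
    return numerator / sum(range(1, max_n + 1))


# Documents would then be ranked by similarity_score and the top 0.1% kept as the new corpus.
```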
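The keyword-detection ratio q_t can be computed with the same assumed logprob(word, context) interface: the words whose probability drops most sharply under the incorrect substitution are the likely special keywords. Again, this is a sketch rather than the authors' code.

```python
from typing import Callable, List, Tuple

LogProbFn = Callable[[str, List[str]], float]  # assumed interface: log P(word | context)


def keyword_ratios(logprob: LogProbFn, sentence: List[str], pronoun_pos: int,
                   correct: str, incorrect: str) -> List[Tuple[str, float]]:
    """Return (word, log q_t) pairs for positions after the pronoun, sorted so that
    the words most responsible for preferring the correct substitution come first."""
    good = list(sentence)
    good[pronoun_pos] = correct
    bad = list(sentence)
    bad[pronoun_pos] = incorrect
    ratios = []
    for t in range(pronoun_pos + 1, len(sentence)):  # positions used by partial scoring
        log_qt = logprob(good[t], good[:t]) - logprob(bad[t], bad[:t])
        ratios.append((sentence[t], log_qt))
    return sorted(ratios, key=lambda pair: pair[1], reverse=True)
```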
{ "question": [ "Which of their training domains improves performance the most?", "Do they fine-tune their model on the end task?" ], "question_id": [ "05bb75a1e1202850efa9191d6901de0a34744af0", "770aeff30846cd3d0d5963f527691f3685e8af02" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "research", "research" ], "paper_read": [ "no", "no" ], "search_query": [ "commonsense", "commonsense" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "documents from the CommonCrawl dataset that has the most overlapping n-grams with the question" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 , BIBREF25 , we also explore using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions. The score for each document is a weighted sum of $F_1(n)$ scores when counting overlapping n-grams: $Similarity\\_Score_{document} = \\frac{\\sum _{n=1}^4nF_1(n)}{\\sum _{n=1}^4n}$", "The top 0.1% of highest ranked documents is chosen as our new training corpus. Details of the ranking is shown in Figure 2 . This procedure resulted in nearly 1,000,000 documents, with the highest ranking document having a score of $8\\times 10^{-2}$ , still relatively small to a perfect score of $1.0$ . We name this dataset STORIES since most of the constituent documents take the form of a story with long chain of coherent events.", "Figure 5 -left and middle show that STORIES always yield the highest accuracy for both types of input processing. We next rank the text corpora based on ensemble performance for more reliable results. Namely, we compare the previous ensemble of 10 models against the same set of models trained on each single text corpus. This time, the original ensemble trained on a diverse set of text corpora outperforms all other single-corpus ensembles including STORIES. This highlights the important role of diversity in training data for commonsense reasoning accuracy of the final system." ], "highlighted_evidence": [ "In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions.", "We name this dataset STORIES since most of the constituent documents take the form of a story with long chain of coherent events.", "Figure 5 -left and middle show that STORIES always yield the highest accuracy for both types of input processing." ] } ], "annotation_id": [ "075147c75f0f3f557d09f19767204cc334bbd5bb" ], "worker_id": [ "9b253a1f26aaf983aca556df025083a4a2fa4ab9" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 , BIBREF25 , we also explore using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. It is important to note that this does not include the answers and therefore does not provide supervision to our resolvers. 
In particular, we aggregate documents from the CommonCrawl dataset that has the most overlapping n-grams with the questions. The score for each document is a weighted sum of $F_1(n)$ scores when counting overlapping n-grams: $Similarity\\_Score_{document} = \\frac{\\sum _{n=1}^4nF_1(n)}{\\sum _{n=1}^4n}$" ], "highlighted_evidence": [ "As previous systems collect relevant data from knowledge bases after observing questions during evaluation BIBREF24 , BIBREF25 , we also explore using this option. Namely, we build a customized text corpus based on questions in commonsense reasoning tasks. " ] } ], "annotation_id": [ "b5deee3b5d5803438327c6a50f5facaf409eeb22" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ] }
{ "caption": [ "Figure 1: Overview of our method and analysis. We consider the test \"The trophy doesn’t fit in the suitcase because it is too big.\" Our method first substitutes two candidate references trophy and suitcase into the pronoun position. We then use an LM to score the resulting two substitutions. By looking at probability ratio at every word position, we are able to detect \"big\" as the main contributor to trophy being the chosen answer. When \"big\" is switched to \"small\", the answer changes to suitcase. This switching behaviour is an important feature characterizing the Winograd Schema Challenge.", "Table 1: Example of full and partial scoring for the test \"The trophy doesn’t fit in the suitcase because it is too big.\" with two reference choices \"the suitcase\" and \"the trophy\".", "Table 2: Unsupervised single-model resolver performance on PDP-60", "Table 3: Unconstrained resolvers performance on PDP-60", "Table 4: Accuracy on Winograd Schema Challenge", "Figure 2: Left: Histogram of similarity scores from top 0.1% documents in CommonCrawl corpus, comparing to questions in Winograd Schema Challenge. Right: An excerpt from the document whose score is 0.083 (highest ranking). In comparison, a perfect score is of 1.0. Documents in this corpus contain long series of events with complex references from several pronouns.", "Figure 3: A sample of questions from WSC-273 predicted incorrectly by full scoring, but corrected by partial scoring. Here we mark the correct prediction by an asterisk and display the normalized probability ratio q̂t by coloring its corresponding word. It can be seen that the wrong predictions are made mainly due to qt at the pronoun position, where the LM has not observed the full sentence. Partial scoring shifts the attention to later words and places highest q values on the special keywords, marked by a squared bracket. These keywords characterizes the Winograd Schema Challenge, as they uniquely decide the correct answer. In the last question, since the special keyword appear before the pronoun, our resolver instead chose \"upset\", as a reasonable switch word could be \"annoying\".", "Table 6: Accuracy of keyword detection from forward and backward scoring by retrieving top-2 words with the highest value of qt", "Table 7: Error analysis from a single model resolver. Across all three tests, partial scoring corrected a large portion of wrong predictions made by full scoring. In particular, it corrects more than 62.7% of wrong predictions on the Winograd Schema Challenge (WSC-273).", "Figure 4: Number of correct answers from 10 different LMs in three modes full, full normalized and partial scoring. The second and third outperforms the first mode in almost all cases. The difference is most prominent on the largest test WSC-273, where partial scoring outperforms the other methods by a large margin for all tested LMs.", "Figure 5: Left and middle: Accuracy of word-level LM and char-level LM on PDP-122 and WSC273 test sets, when trained on different text corpora. Right: Accuracy of ensembles of 10 models when trained on five single text corpora and all of them. A low-to-high ranking of these text corpora is LM-1-Billion, CommonCrawl, SQuAD, Gutenberg, STORIES.", "Table 8: One-dimensional convolutional layers used to process character inputs", "Table 9: All variants of recurrent LMs used in our experiments.", "Table 10: Details of LMs and their training corpus reported in our experiments." 
], "file": [ "3-Figure1-1.png", "3-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Table4-1.png", "6-Figure2-1.png", "7-Figure3-1.png", "7-Table6-1.png", "7-Table7-1.png", "8-Figure4-1.png", "8-Figure5-1.png", "11-Table8-1.png", "11-Table9-1.png", "11-Table10-1.png" ] }
1906.04571
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality.
{ "section_name": [ "Introduction", "Gender Stereotypes in Text", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion", "Acknowledgments", "Belief Propagation Update Equations", "Adjective Translations", "Extrinsic Evaluation Example Phrases" ], "paragraphs": [ [ "One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases. This is because NLP systems depend on language corpora, which are inherently “not objective; they are creations of human design” BIBREF0 . One type of societal bias that has received considerable attention from the NLP community is gender stereotyping BIBREF1 , BIBREF2 , BIBREF3 . Gender stereotypes can manifest in language in overt ways. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women BIBREF4 .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation BIBREF10 for mitigating gender stereotypes associated with animate nouns (i.e., nouns that represent people) for morphologically rich languages. We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns. We use this model as part of a four-step process, depicted in fig:pipeline, to reinflect entire sentences following an intervention on the grammatical gender of one word. We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level INLINEFORM0 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively. We also conduct an extrinsic evaluation using four languages. Following DBLP:journals/corr/abs-1807-11714, we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality." ], [ "Men and women are mentioned at different rates in text BIBREF11 . This problem is exacerbated in certain contexts. 
For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system. Gender stereotypes of this sort have been observed in word embeddings BIBREF5 , BIBREF3 , contextual word embeddings BIBREF12 , and co-reference resolution systems BIBREF13 , BIBREF9 inter alia." ], [ "In this section, we present a Markov random field BIBREF17 for morpho-syntactic agreement. This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags. Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see fig:tree for an example) is a set of ordered triples INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are positions in the sentence (or a distinguished root symbol) and INLINEFORM3 is the label of the edge INLINEFORM4 in the tree; each position occurs exactly once as the first element in a triple. Each dependency tree INLINEFORM5 is associated with a sequence of morpho-syntactic tags INLINEFORM6 and a sequence of part-of-speech (POS) tags INLINEFORM7 . For example, the tags INLINEFORM8 and INLINEFORM9 for ingeniero are INLINEFORM10 and INLINEFORM11 , respectively, because ingeniero is a masculine, singular noun. For notational simplicity, we define INLINEFORM12 to be the set of all length- INLINEFORM13 sequences of morpho-syntactic tags.", "We define the probability of INLINEFORM0 given INLINEFORM1 and INLINEFORM2 as DISPLAYFORM0 ", " where the binary factor INLINEFORM0 scores how well the morpho-syntactic tags INLINEFORM1 and INLINEFORM2 agree given the POS tags INLINEFORM3 and INLINEFORM4 and the label INLINEFORM5 . For example, consider the INLINEFORM6 (adjectival modifier) edge from experto to ingeniero in fig:tree. The factor INLINEFORM7 returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., INLINEFORM8 and INLINEFORM9 ) and a low score if they do not (e.g., INLINEFORM10 and INLINEFORM11 ). The unary factor INLINEFORM12 scores a morpho-syntactic tag INLINEFORM13 outside the context of the dependency tree. As we explain in sec:constraint, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them. eq:dist is normalized by the following partition function: INLINEFORM14 ", " INLINEFORM0 can be calculated using belief propagation; we provide the update equations that we use in sec:bp. Our model is depicted in fig:fg. It is noteworthy that this model is delexicalized—i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves." ], [ "We consider a linear parameterization and a neural parameterization of the binary factor INLINEFORM0 .", "We define a matrix INLINEFORM0 for each triple INLINEFORM1 , where INLINEFORM2 is the number of morpho-syntactic subtags. For example, INLINEFORM3 has two subtags INLINEFORM4 and INLINEFORM5 . 
We then define INLINEFORM6 as follows: INLINEFORM7 ", " where INLINEFORM0 is a multi-hot encoding of INLINEFORM1 .", "As an alternative, we also define a neural parameterization of INLINEFORM0 to allow parameter sharing among edges with different parts of speech and labels: INLINEFORM1 ", " where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 and INLINEFORM3 define the structure of the neural parameterization and each INLINEFORM4 is an embedding function.", "We use the unary factors only to force or disallow particular tags when performing an intervention. Specifically, we define DISPLAYFORM0 ", "where INLINEFORM0 is a strength parameter that determines the extent to which INLINEFORM1 should remain unchanged following an intervention. In the limit as INLINEFORM2 , all tags will remain unchanged except for the tag directly involved in the intervention." ], [ "Because our MRF is acyclic and tree-shaped, we can use belief propagation BIBREF18 to perform exact inference. The algorithm is a generalization of the forward-backward algorithm for hidden Markov models BIBREF19 . Specifically, we pass messages from the leaves to the root and vice versa. The marginal distribution of a node is the point-wise product of all its incoming messages; the partition function INLINEFORM0 is the sum of any node's marginal distribution. Computing INLINEFORM1 takes polynomial time BIBREF18 —specifically, INLINEFORM2 where INLINEFORM3 is the number of morpho-syntactic tags. Finally, inferring the highest-probability morpho-syntactic tag sequence INLINEFORM4 given INLINEFORM5 and INLINEFORM6 can be performed using the max-product modification to belief propagation." ], [ "We use gradient-based optimization. We treat the negative log-likelihood INLINEFORM0 as the loss function for tree INLINEFORM1 and compute its gradient using automatic differentiation BIBREF20 . We learn the parameters of sec:param by optimizing the negative log-likelihood using gradient descent." ], [ "As explained in sec:gender, our goal is to transform sentences like sent:msc to sent:fem by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement. For example, if we change the morpho-syntactic tag for ingeniero from [msc;sg] to [fem;sg], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [in; pr; sg]. If we intervene on the INLINEFORM0 word in a sentence, changing its tag from INLINEFORM1 to INLINEFORM2 , then using our model to infer the manner in which the remaining tags must be updated means using INLINEFORM3 to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word. The unary factors INLINEFORM0 enable us to do exactly this. As described in the previous section, the strength parameter INLINEFORM1 determines the extent to which INLINEFORM2 should remain unchanged following an intervention—the larger the value, the less likely it is that INLINEFORM3 will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms. This task has received considerable attention from the NLP community BIBREF21 , BIBREF22 . We use the inflection model of D18-1473. This model conditions on the lemma INLINEFORM0 and morpho-syntactic tag INLINEFORM1 to form a distribution over possible inflections. 
For example, given experto and INLINEFORM2 , the trained inflection model will assign a high probability to expertas. We provide accuracies for the trained inflection model in tab:reinflect." ], [ "We used the Adam optimizer BIBREF23 to train both parameterizations of our model until the change in dev-loss was less than INLINEFORM0 bits. We set INLINEFORM1 without tuning, and chose a learning rate of INLINEFORM2 and weight decay factor of INLINEFORM3 after tuning. We tuned INLINEFORM4 in the set INLINEFORM5 and chose INLINEFORM6 . For the neural parameterization, we set INLINEFORM7 and INLINEFORM8 without any tuning. Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically. For the intrinsic evaluation, we focus on whether our approach yields the correct morpho-syntactic tags and the correct reinflections. For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models." ], [ "To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language. Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank. The average length of these extracted sentences was 37 words. We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly. We chose Spanish and Hebrew because gender agreement operates differently in each language. We provide corpus statistics for both languages in the top two rows of tab:data.", "We created a hard-coded INLINEFORM0 to serve as a baseline for each language. For Spanish, we only activated, i.e. set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns. We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morpho-syntactic subtags fixed except for gender. For each annotated sentence, we intervened on the gender of the animate noun. We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata. Finally, we used the annotations to compute the tag-level INLINEFORM0 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "We present the results in tab:intrinsic. Recall is consistently significantly lower than precision. As expected, the baselines have the highest precision (though not by much). This is because they reflect well-known rules for each language. That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions. For example, consider the phrase él es un ingeniero y escritor (he is an engineer and a writer). Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora. This is because two nouns do not normally need to have the same gender when they are conjoined. 
Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person. Note that including co-reference information in our MRF would create cycles and inference would no longer be exact. Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization. We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient." ], [ "We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping. Following DBLP:journals/corr/abs-1807-11714, focus on neural language models. We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model INLINEFORM0 for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful. The translations we use for these adjectives are given in sec:translation. We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes. For example, consider DISPLAYFORM0 ", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer). If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive. In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive. If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): DISPLAYFORM0 ", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see tab:data). For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using BIBREF24 's parser and extracted taggings and lemmata using the method of BIBREF25 . We automatically extracted an animacy gazetteer from WordNet BIBREF26 and then manually filtered the output for correctness. We provide the size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in tab:anim. For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on the noun, and then used our approach to transform the sentence. For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders. Choosing which sentences to duplicate is a difficult task. 
For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations. Multilingual animacy detection BIBREF27 might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of BIBREF28 using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. We then computed gender stereotyping and grammaticality as described above. We provide example phrases in tab:lm; we provide a more extensive list of phrases in app:queries.", "fig:bias demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach. It is immediately apparent that our approch reduces gender stereotyping. On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively). We expected that naïve swapping of gendered words would also reduce gender stereotyping. Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages. For Spanish, we also examine specific words that are stereotyped toward men or women. We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender. fig:espbias suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages. That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus. Given that we know the model did not perform as accurately for Hebrew (see tab:intrinsic), this finding is not surprising." ], [ "In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology—specifically languages that exhibit gender agreement. To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English. For example, BIBREF5 proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; BIBREF10 studied gender stereotypes in language models; and BIBREF13 introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution. The most closely related work is that of BIBREF9 , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages. Our approach is specifically intended to yield grammatical sentences when applied to such languages. BIBREF29 also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation." ], [ "We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. 
To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information." ], [ "The last author acknowledges a Facebook Fellowship." ], [ "Our belief propagation update equations are DISPLAYFORM0 DISPLAYFORM1 ", " where INLINEFORM0 returns the set of neighbouring nodes of node INLINEFORM1 . The belief at any node is given by DISPLAYFORM0 " ], [ "tab:fem and tab:masc contain the feminine and masculine translations of the four adjectives that we used." ], [ "For each noun in our animacy gazetteer, we generated sixteen phrases. Consider the noun engineer as an example. We created four phrases—one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer. These phrases, as well as their prefix log-likelihoods are provided below in tab:query." ] ] }
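The intervention step of the Markov random field described above can be illustrated on a toy example. The sketch below is not the paper's model: a hand-written agreement table stands in for the learned binary factors, and brute-force enumeration stands in for belief propagation, which is only workable because the example sentence has three words and a single gender feature.

```python
from itertools import product
from typing import List, Optional, Tuple
import math

GENDERS = ["MSC", "FEM"]

# Toy sentence "el ingeniero experto"; edges are (child_index, head_index, label).
WORDS = ["el", "ingeniero", "experto"]
EDGES = [(0, 1, "det"), (2, 1, "amod")]


def binary_factor(tag_child: str, tag_head: str, label: str) -> float:
    """Hand-written stand-in for the learned factor: reward gender agreement on det/amod edges."""
    if label in {"det", "amod"}:
        return 5.0 if tag_child == tag_head else 0.1
    return 1.0


def unary_factor(tag: str, original_tag: str, forced_tag: Optional[str], alpha: float = 2.0) -> float:
    """Force the intervened tag; softly prefer keeping every other tag unchanged."""
    if forced_tag is not None:
        return 1.0 if tag == forced_tag else 0.0
    return math.exp(alpha) if tag == original_tag else 1.0


def best_assignment(original_tags: List[str], intervene_at: int, new_tag: str) -> Tuple[str, ...]:
    """Brute-force the highest-scoring gender assignment after the intervention."""
    forced = [new_tag if i == intervene_at else None for i in range(len(original_tags))]
    best, best_score = None, -1.0
    for tags in product(GENDERS, repeat=len(original_tags)):
        score = 1.0
        for i, tag in enumerate(tags):
            score *= unary_factor(tag, original_tags[i], forced[i])
        for child, head, label in EDGES:
            score *= binary_factor(tags[child], tags[head], label)
        if score > best_score:
            best, best_score = tags, score
    return best


# Intervening on the noun (index 1) MSC -> FEM should flip the determiner and adjective too:
print(list(zip(WORDS, best_assignment(["MSC", "MSC", "MSC"], intervene_at=1, new_tag="FEM"))))
# -> [('el', 'FEM'), ('ingeniero', 'FEM'), ('experto', 'FEM')]
```

The strength parameter alpha plays the same role as in the paper's unary factors: the larger it is, the more the model prefers to leave tags untouched unless agreement forces a change.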
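The two extrinsic evaluation quantities, gender stereotyping and grammaticality, are both log ratios of prefix probabilities and can be sketched directly. The scoring function prefix_logprob is an assumed interface: any language model that returns summed token log-probabilities for a prefix could be plugged in, and the example phrases in the comments mirror the engineer phrases discussed above rather than reproducing the paper's full query set.

```python
from typing import Callable

PrefixLogProb = Callable[[str], float]  # assumed interface: log p(prefix)


def gender_stereotyping(lp: PrefixLogProb, masculine: str, feminine: str) -> float:
    """log [ p(masculine phrase) / p(feminine phrase) ]; values near 0 mean no preference."""
    return lp(masculine) - lp(feminine)


def grammaticality(lp: PrefixLogProb, grammatical: str, ungrammatical: str) -> float:
    """log [ p(grammatical phrase) / p(ungrammatical phrase) ]; larger means the model prefers grammatical output."""
    return lp(grammatical) - lp(ungrammatical)


# Hypothetical usage:
# gender_stereotyping(lp, "el ingeniero bueno", "la ingeniera buena")
# grammaticality(lp, "la ingeniera buena", "la ingeniera bueno")  # grammatical vs. gender-mismatched adjective
```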
{ "question": [ "Why does not the approach from English work on other languages?", "How do they measure grammaticality?", "Which model do they use to convert between masculine-inflected and feminine-inflected sentences?" ], "question_id": [ "f7817b949605fb04b1e4fec9dd9ca8804fb92ae9", "8255f74cae1352e5acb2144fb857758dda69be02", "db62d5d83ec187063b57425affe73fef8733dd28" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Because, unlike other languages, English does not mark grammatical genders", "evidence": [ "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient—el must also be replaced with la and experto with experta." ], "highlighted_evidence": [ "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 ." ] } ], "annotation_id": [ "075ffbc4f5f1ee3b32ee07258113e5fa1412fe04" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "by calculating log ratio of grammatical phrase over ungrammatical phrase", "evidence": [ "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): DISPLAYFORM0" ], "highlighted_evidence": [ "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase):" ] } ], "annotation_id": [ "ea88ebb09c6cad72c89bedff07780b036d2c3159" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Markov random field with an optional neural parameterization" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. 
To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns. To the best of our knowledge, this task has not been studied previously. As a result, there is no existing annotated corpus of paired sentences that can be used as “ground truth.” Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results. For example, we demonstrated that our approach reduces gender stereotyping in neural language models. Finally, we also identified avenues for future work, such as the inclusion of co-reference information." ], "highlighted_evidence": [ "We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages. To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns." ] } ], "annotation_id": [ "a3e52b132398d3f6dc4a4f6ba7dc77b9e6898d89" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: Transformation of Los ingenieros son expertos (i.e., The male engineers are skilled) to Las ingenieras son expertas (i.e., The female engineers are skilled). We extract the properties of each word in the sentence. We then fix a noun and its tags and infer the manner in which the remaining tags must be updated. Finally, we reinflect the lemmata to their new forms.", "Figure 2: Dependency tree for the sentence El ingeniero alemán es muy experto.", "Figure 3: Factor graph for the sentence El ingeniero alemán es muy experto.", "Table 1: Morphological reinflection accuracies.", "Table 2: Language data.", "Table 3: Tag-level precision, recall, F1 score, and accuracy and form-level accuracy for the baselines (“– BASE”) and for our approach (“–LIN” is the linear parameterization, “–NN” is the neural parameterization).", "Figure 4: Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naı̈ve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”).", "Table 4: Animate noun statistics.", "Figure 5: Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naı̈ve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”).", "Table 5: Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naı̈ve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”). Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by “*”). Gender stereotyping is measured using phrases 1 and 2. Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "Table 8: Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naı̈ve swapping of gendered words (“Swap”), and the corpus following CDA using our approach (“MRF”). Ungrammatical phrases are denoted by “*”.", "Table 6: Feminine translations of good, bad, smart, beautiful in French, Hebrew, Italian, and Spanish", "Table 7: Masculine translations of good, bad, smart, beautiful in French, Hebrew, Italian, and Spanish" ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "4-Table1-1.png", "5-Table2-1.png", "5-Table3-1.png", "6-Figure4-1.png", "6-Table4-1.png", "7-Figure5-1.png", "7-Table5-1.png", "11-Table8-1.png", "11-Table6-1.png", "11-Table7-1.png" ] }
1909.04625
Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study
Neural language models have achieved state-of-the-art performance on many NLP tasks, and recently have been shown to learn a number of hierarchically-sensitive syntactic dependencies between individual words. However, equally important for language processing is the ability to combine words into phrasal constituents, and use constituent-level features to drive downstream expectations. Here we investigate neural models' ability to represent constituent-level features, using coordinated noun phrases as a case study. We assess whether different neural language models trained on English and French represent phrase-level number and gender features, and use those features to drive downstream expectations. Our results suggest that models use a linear combination of NP constituent number to drive CoordNP/verb number agreement. This behavior is highly regular and even sensitive to local syntactic context; however, it differs crucially from observed human behavior. Models have less success with gender agreement. Models trained on large corpora perform best, and there is no obvious advantage for models trained using explicit syntactic supervision.
{ "section_name": [ "Introduction", "Methods ::: Psycholinguistics Paradigm", "Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models", "Methods ::: Models Tested ::: ActionLSTM", "Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG)", "Experiment 1: Non-coordination Agreement", "Experiment 2: Simple Coordination", "Experiment 2: Simple Coordination ::: Number Agreement", "Experiment 2: Simple Coordination ::: Gender Agreement", "Experiment 3: Complex Coordination", "Experiment 3: Complex Coordination ::: Complex Coordination Control", "Experiment 3: Complex Coordination ::: Complex Coordination Critical", "Experiment 4: Inverted Coordination", "Discussion", "Acknowledgments", "The Effect of Annotation Schemes", "PTB/FTB Agreement Patterns" ], "paragraphs": [ [ "Humans deploy structure-sensitive expectations to guide processing during natural language comprehension BIBREF0. While it has been shown that neural language models show similar structure-sensitivity in their predictions about upcoming material BIBREF1, BIBREF2, previous work has focused on dependencies that are conditioned by features attached to a single word, such as subject number BIBREF3, BIBREF4 or wh-question words BIBREF5. There has been no systematic investigation into models' ability to compute phrase-level features—features that are attached to a set of words—and whether models can deploy these more abstract properties to drive downstream expectations.", "In this work, we assess whether state-of-the-art neural models can compute and employ phrase-level gender and number features of coordinated subject Noun Phrases (CoordNPs) with two nouns. Typical syntactic phrases are endocentric: they are headed by a single child, whose features determine the agreement requirements for the entire phrase. In Figure FIGREF1, for example, the word star heads the subject NP The star; since star is singular, the verb must be singular. CoordNPs lack endocentricity: neither conjunct NP solely determines the features of the NP as a whole. Instead, these feature values are determined by compositional rules sensitive to the features of the conjuncts and the identity of the coordinator. In Figure FIGREF1, because the coordinator is and, the subject NP number is plural even though both conjuncts (the star and the moon) are singular. As this case demonstrates, the agreement behavior for CoordNPs must be driven by more abstract, constituent-level representations, and cannot be reduced to features hosted on a single lexical item.", "We use four suites of experiments to assess whether neural models are able to build up phrase-level representations of CoordNPs on the fly and deploy them to drive humanlike behavior. First, we present a simple control experiment to show that models can represent number and gender features of non-coordinate NPs (Non-coordination Agreement). Second, we show that models modulate their expectations for downstream verb number based on the CoordNP's coordinating conjunction combined with the features of the coordinated nouns (Simple Coordination). We rule out the possibility that models are using simple heuristics by designing a set of stimuli where a simple heuristic would fail due to structural ambiguity (Complex Coordination). 
The striking success for all models in this experiment indicates that even neural models with no explicit hierarchical bias, trained on a relatively small amount of text, are able to learn fine-grained and robust generalizations about the interaction between CoordNPs and local syntactic context. Finally, we use subject–auxiliary inversion to test whether an upstream lexical item modulates model expectation for the phrasal-level features of a downstream CoordNP (Inverted Coordination). Here, we find that all models are insensitive to the fine-grained features of this particular syntactic context. Overall, our results indicate that neural models can learn fine-grained information about the interaction of Coordinated NPs and local syntactic context, but their behavior remains unhumanlike in many key respects." ], [ "To determine whether state-of-the-art neural architectures are capable of learning humanlike CoordNP/verb agreement properties, we adopt the psycholinguistics paradigm for model assessment. In this paradigm the models are tested using hand-crafted sentences designed to test underlying network knowledge. The assumption here is that if a model implicitly learns humanlike linguistic knowledge during training, its expectations for upcoming words should qualitatively match human expectations in novel contexts. For example, BIBREF1 and BIBREF6 assessed how well neural models had learned subject/verb number agreement by feeding them the prefix The keys to the cabinet .... If the models predicted the grammatical continuation are over the ungrammatical continuation is, they can be said to have learned number agreement insofar as the number of the head noun, and not the number of the distractor noun cabinet, drives expectations about the number of the matrix verb.", "If models are able to robustly modulate their expectations based on the internal components of the CoordNP, this will provide evidence that the networks are building up a context-sensitive phrase-level representation. We quantify model expectations as surprisal values. Surprisal is the negative log-conditional probability $S(x_i) = -\log_2 p(x_i|x_1 \dots x_{i-1})$ of a sentence's $i^{th}$ word $x_i$ given the previous words. Surprisal tells us how strongly $x_i$ is expected in context and is known to correlate with human processing difficulty BIBREF7, BIBREF0, BIBREF8. In the CoordNP/Verb agreement studies presented here, in cases where the preceding context sets a high expectation for a number-inflected verb form $w_i$ (e.g. singular `is'), we would expect $S(w_i)$ to be lower than the surprisal of its number-mismatched counterpart (e.g. plural `are')." ], [ "are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred to as `LSTM (PTB)' in the following sections, was trained on the sentences from the Penn Treebank BIBREF12. The second model, referred to as `LSTM (FTB)', was trained on the sentences from the French Treebank BIBREF13. We set the size of the input word embeddings and LSTM hidden layers of both models to 256.", "We also compare LSTM language models trained on large corpora. 
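To make the surprisal-based comparison concrete, the following is a minimal sketch (not the authors' code) of how per-token surprisal can be computed and used to compare a grammatical verb continuation against its number-mismatched counterpart. The `next_token_logprobs` interface is an assumption standing in for whichever LSTM, ActionLSTM, or RNNG is being probed.

```python
import math

def surprisal(next_token_logprobs, prefix, word):
    """S(word) = -log2 p(word | prefix). `next_token_logprobs` is a
    hypothetical callable returning a dict of natural-log probabilities
    over the vocabulary for the next token given a prefix string."""
    logp = next_token_logprobs(prefix)[word]   # ln p(word | prefix)
    return -logp / math.log(2)                 # convert nats to bits

def plural_expectation(next_token_logprobs, prefix, sg="is", pl="are"):
    """S(singular continuation) - S(plural continuation); a positive value
    means the plural verb is the less surprising (more expected) one."""
    return (surprisal(next_token_logprobs, prefix, sg)
            - surprisal(next_token_logprobs, prefix, pl))

# Example usage with the classic agreement probe:
#   plural_expectation(my_lm, "The keys to the cabinet") > 0
# would indicate a preference for the grammatical plural "are".
```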
We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred to as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred to as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred to as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset." ], [ "models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and an opening bracket; generate a terminal node; and close a bracket. To compute surprisal values for a given token, we approximate $P(w_i|w_{1\cdots i-1})$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17." ], [ "jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18.", "The annotation schemes used to train the syntactically-supervised models differ slightly between French and English. In the PTB (English), CoordNPs are flat structures bearing an `NP' label. In the FTB (French), CoordNPs are binary-branching, labeled as NPs, except for the phrasal node dominating the coordinating conjunction, which is labeled `COORD'. We examine the effects of annotation schemes on model performance in Appendix SECREF8." ], [ "In order to provide a baseline for the following experiments, here we assess whether the models tested have learned basic representations of number and gender features for non-coordinated Noun Phrases. We test number agreement in English and French as well as gender agreement in French. Both English and French have two grammatical number features: singular (sg) and plural (pl). French has two grammatical gender features: masculine (m) and feminine (f).", "The experimental materials include sentences where the subject NPs contain a single noun, which can match either with the matrix verb (in the case of number agreement) or with a following predicative adjective (in the case of gender agreement). Conditions are given in Table TABREF9 and Table TABREF10. We measure model behavior by computing the plural expectation, i.e. the surprisal of the singular continuation minus the surprisal of the plural continuation, and taking the average for each condition. We expect a positive plural expectation in the Npl conditions and a negative plural expectation in the Nsg conditions. For gender agreement, we compute a gender expectation, which is S(feminine continuation) $-$ S(masculine continuation). We measure surprisal at the verbs and predicative adjectives themselves.", "The results for this experiment are in Figure FIGREF11, with the plural expectation and gender expectation on the y-axis and conditions on the x-axis. For this and subsequent experiments, error bars represent 95% confidence intervals for across-item means. For number agreement, all the models in English and French show positive plural expectation when the head noun is plural and negative plural expectation when it is singular. 
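The across-item aggregation and 95% confidence intervals behind the reported plural and gender expectations can be sketched in a few lines of NumPy; the normal-approximation interval and the variable names are assumptions of this illustration, not the authors' exact analysis code.

```python
import numpy as np

def expectation_per_item(surprisal_a, surprisal_b):
    """Per-item differential, e.g. S(singular) - S(plural) for the plural
    expectation, or S(feminine) - S(masculine) for the gender expectation."""
    return np.asarray(surprisal_a) - np.asarray(surprisal_b)

def condition_summary(surprisal_a, surprisal_b):
    """Across-item mean with a normal-approximation 95% confidence interval,
    mirroring the error bars shown in the figures."""
    diff = expectation_per_item(surprisal_a, surprisal_b)
    mean = diff.mean()
    sem = diff.std(ddof=1) / np.sqrt(len(diff))
    return mean, (mean - 1.96 * sem, mean + 1.96 * sem)

# e.g., with surprisal of "is"/"are" measured at the verb for every Npl item:
#   mean, ci = condition_summary(s_is_Npl, s_are_Npl)   # expect mean > 0
```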
For gender agreement, however, only the LSTM (frWaC) shows modulation of gender expectation based on the gender of the head noun. This is most likely due to the lower frequency of predicative adjectives compared to matrix verbs in the corpus." ], [ "In this section, we test whether neural language models can use grammatical features hosted on multiple components of a coordination phrase—the coordinated nouns as well as the coordinating conjunction—to drive downstream expectations. We test number agreement in both English and French and gender agreement in French." ], [ "In simple subject/verb number agreement, the number features of the CoordNP are determined by the coordinating conjunction and the number features of the two coordinated NPs. CoordNPs formed by and are plural and thus require plural verbs; CoordNPs formed by or allow either plural or singular verbs, often with the number features of the noun linearly closest to the verb playing a more important role, although this varies cross-linguistically BIBREF20. Forced-choice preference experiments in BIBREF21 reveal that English native speakers prefer singular agreement when the closest conjunct in an or-CoordNP is singular and plural agreement when the closest conjunct is plural. In French, both singular and plural verbs are possible when two singular NPs are joined via disjunction BIBREF22.", "In order to assess whether the neural models learn the basic CoordNP licensing for English, we adapted 37 items from BIBREF21, along the 16 conditions outlined in Table TABREF14. Test items consist of the sentence preamble, followed by either the singular or plural BE verb, half the time in present tense (is/are) and half the time in past tense (was/were). We measured the plural expectation, following the procedure in Section SECREF3. We created 24 items using the same conditions as the English experiment to test the models trained in French, using the 3rd person singular and plural form of verb aller, `to go' (va, vont). Within each item, nouns match in gender; across all conditions half the nouns are masculine, half feminine.", "The results for this experiment can be seen in Figure FIGREF12, with the results for English on the left and French on the right. The results for and are on the top row, or on the bottom row. For all figures the y-axis shows the plural expectation, or the difference in surprisal between the singular condition and the plural condition. Turning first to English-and (Figure FIGREF12), all models show plural expectation (the bars are significantly greater than zero) in the pl_and_pl and sg_and_pl conditions, as expected. For the pl_and_sg condition, only the LSTM (enWiki) and ActionLSTM are greater than zero, indicating humanlike behavior. For the sg_and_sg condition, only the LSTM (enWiki) model shows the correct plural expectation. For the French-and (Figure FIGREF12), all models show positive plural expectation in all conditions, as expected, except for the LSTM (FTB) in the sg_and_sg condition.", "Examining the results for English-or, we find that all models demonstrate humanlike expectation in the pl_or_pl and sg_or_pl conditions. The LSTM (1B), LSTM (PTB), and RNNG models show zero or negative singular expectation for the pl_or_sg conditions, as expected. However the LSTM (enWiki) and ActionLSTM models show positive plural expectation in this condition, indicating that they have not learned the humanlike generalizations. 
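The 16-cell design described above can be enumerated mechanically; the sketch below does so with toy lexical material (the actual items are adapted from BIBREF21 and are not reproduced here), crossing first-conjunct number, second-conjunct number, coordinator, and verb number.

```python
from itertools import product

# Placeholder lexical material; the real stimuli come from the adapted items.
NOUNS = {"sg": ("the star", "the boat"), "pl": ("the stars", "the boats")}
VERBS = {"sg": "is", "pl": "are"}

def simple_coordination_cells():
    """Enumerate the 2 x 2 x 2 x 2 = 16 conditions for one toy item."""
    cells = []
    for n1, n2, coord, verb in product(("sg", "pl"), ("sg", "pl"),
                                       ("and", "or"), ("sg", "pl")):
        preamble = f"{NOUNS[n1][0]} {coord} {NOUNS[n2][1]}"
        cells.append({
            "condition": f"{n1}_{coord}_{n2}",
            "verb_number": verb,
            "prefix": preamble,          # surprisal is measured at the verb
            "verb": VERBS[verb],
        })
    return cells

# len(simple_coordination_cells()) == 16; the per-condition plural expectation
# is then computed from the verb surprisals as sketched earlier.
```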
All models show significantly negative plural expectation in the sg_or_sg condition, as expected. In the French-or cases, models show almost identical behavior to the and conditions, except that the LSTM (frWaC) shows smaller plural expectation when singular nouns are linearly proximal to the verb.", "These results indicate moderate success at learning coordinate NP agreement; however, this success may be the result of an overly simple heuristic. It appears that expectations for both plural and masculine continuations are driven by a linear combination of the two nominal number/gender features transferred into log-probability space, with the earlier noun mattering less than the later noun. A model that optimally captures human grammatical preferences should show no or only a slight difference across conditions in the surprisal differential for the and conditions, and be greater than zero in all cases. Yet, all the models tested show gradient performance based on the number of plural conjuncts." ], [ "In French, if two nouns are coordinated with et (and-coordination), agreement must be masculine if there is one masculine element in the coordinate structure. If the nouns are coordinated with ou (or-coordination), both masculine and feminine agreement is acceptable BIBREF23, BIBREF24. Although linear proximity effects have been tested for a number of languages that employ grammatical gender, as in e.g. Slavic languages BIBREF25, there is no systematic study for French.", "To assess whether the French neural models learned humanlike gender agreement, we created 24 test items, following the examples in Table TABREF16, and measured the masculine expectation. In our test items, the coordinated subject NP is followed by a predicative adjective, which takes on either masculine or feminine gender morphology.", "Results from the experiment can be seen in Figure FIGREF17. No model shows a qualitative difference based on the coordinator, and only the LSTM (frWaC) shows a significant behavioral difference between conditions. Here, we find positive masculine expectation in the m_and_m and f_and_m conditions, and negative masculine expectation in the f_and_f condition, as expected. However, in the m_and_f condition, the masculine expectation is not significantly different from zero, where we would expect it to be positive. In the or-coordination conditions, following our expectation, masculine expectation is positive when both conjuncts are masculine and negative when both are feminine. For the LSTM (FTB) and ActionLSTM models, the masculine expectation is positive (although not significantly so) in all conditions, consistent with results in Section SECREF3." ], [ "One possible explanation for the results presented in the previous section is that the models are using a `bag of features' approach to plural and masculine licensing that is opaque to syntactic context: Following a coordinating conjunction surrounded by nouns, models simply expect the following verb to be plural, proportionally to the number of plural nouns.", "In this section, we control for this potential confound by conducting two experiments: In the Complex Coordination Control experiments we assess models' ability to extend basic CoordNP licensing into sententially-embedded environments, where the CoordNP can serve as an embedded subject. 
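The "proportional to the number of plural nouns" heuristic raised above can be made concrete with a small regression sketch: fit the per-condition plural expectations against indicator variables for the two conjuncts and inspect the weights. The numbers below are illustrative placeholders, not measured values from the paper.

```python
import numpy as np

# Rows: pl_and_pl, sg_and_pl, pl_and_sg, sg_and_sg.
# Columns: [first conjunct plural?, second conjunct plural?, intercept].
X = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])

# Hypothetical mean plural expectations (in bits) for the four conditions.
y = np.array([3.0, 2.4, 0.8, -0.2])

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
w_first, w_second, bias = coeffs
# A humanlike and-coordination pattern would give w_first and w_second close
# to zero with a clearly positive bias; a larger w_second than w_first
# reproduces the "later noun matters more" gradient the models actually show.
```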
In the Complex Coordination Critical experiments, we leverage the sentential embedding environment to demonstrate that when the CoordNPs cannot plausibly serve as the subject of the embedded phrase, models are able to suppress the previously-demonstrated expectations set up by these phrases. These results demonstrate that models are not following a simple strategy for predicting downstream number and gender features, but are building up CoordNP representations on the fly, conditioned on the local syntactic context." ], [ "Following certain sentential-embedding verbs, CoordNPs serve unambiguously as the subject of the verb's sentence complement and should trigger number agreement behavior in the main verb of the embedded clause, similar to the behavior presented in SECREF13. To assess this, we use the 37 test items in English and 24 items in French in section SECREF13, following the conditions in Table TABREF19 (for number agreement), testing only and coordination. For gender agreement, we use the same test items and conditions for and coordination in Section SECREF15, but with the Coordinated NPs embedded in a context similar to SECREF18. As before, we derived the plural expectation by measuring the difference in surprisal between the singular and plural continuations and the gender expectation by computing the difference in surprisal between the masculine and feminine predicates.", ". Je croyais que les prix et les dépenses étaient importants/importantes.", "I thought that the.pl price.mpl and the.pl expense.fpl were important.mpl/fpl", "I thought that the prices and the expenses were important.", "The results for the control experiments can be seen in Figure FIGREF20, with English number agreement on the top row, French number agreement in the middle row and French gender agreement on the bottom. The y-axis shows either plural or masculine expectation, with the various conditions along the x-axis. For English number agreement, we find that the models behave similarly as they do for simple coordination contexts. All models show significant plural expectation when the closest noun is plural, with only two models demonstrating plural expectation in the sg_and_sg case. The French number agreement tests show similar results, with all models except LSTM (FTB) demonstrating significant plural prediction in all cases. Turning to French gender agreement, only the LSTM (frWaC) shows sensitivity to the various conditions, with positive masculine expectation in the m_and_m condition and negative expectation in the f_and_f condition, as expected. These results indicate that the behavior shown in Section SECREF13 extends to more complex syntactic environments—in this case to sentential embeddings. Interestingly, for some models, such as the LSTM (1B), behavior is more humanlike when the CoordNP serves as the subject of an embedded sentence. This may be because the model, which has a large number of hidden states and may be extra sensitive to fine-grained syntactic information carried on lexical items BIBREF2, is using the complementizer, that, to drive more robust expectations." ], [ "In order to assess whether the models' strategy for CoordNP/verb number agreement is sensitive to syntactic context, we contrast the results presented above to those from a second, critical experiment. Here, two coordinated nouns follow a verb that cannot take a sentential complement, as in the examples given in Table TABREF23. 
Of the two possible continuations—are or is—the plural is only grammatically licensed when the second of the two conjuncts is plural. In these cases, the plural continuation may lead to a final sentence where the first noun serves as the verb's object and the second introduces a second main clause coordinated with the first, as in I fixed the doors and the windows are still broken. For the same reason, the singular-verb continuation is only licensed when the noun immediately following and is singular.", "We created 37 test items in both English and French, and calculated the plural expectation. If the models were following a simple strategy to drive CoordNP/verb number agreement, then we should see either no difference in plural expectation across the four conditions or behavior no different from the control experiment. If, however, the models are sensitive to the licensing context, we should see a contrast based solely on the number features of the second conjunct, where plural expectation is positive when the second conjunct is plural, and negative otherwise.", "Experimental items for a critical gender test were created similarly, as in Example SECREF22. As with plural agreement, gender expectation should be driven solely by the second conjunct: For the f_and_m and m_and_m conditions, the only grammatical continuation is one where the adjectival predicate bears masculine gender morphology. Conversely, for the m_and_f or f_and_f conditions, the only grammatical continuation is one where the adjectival predicate bears feminine morphology. As in SECREF13, we created 24 test items and measured the gender expectation by calculating the difference in surprisal between the masculine and feminine continuations.", ". Nous avons accepté les prix et les dépenses étaient importants/importantes.", "we have accepted the.pl price.mpl and the expense.fpl were important.mpl/fpl", "We have accepted the prices and the expenses were important.", "The results from the critical experiments are in Figure FIGREF21, with the English number agreement on the top row, French number agreement in the middle and gender expectation on the bottom row. Here the y-axis shows either plural expectation or masculine expectation, with the various conditions are on the x-axis. The results here are strikingly different from those in the control experiments. For number agreement, all models in both languages show strong plural expectation in conditions where the second noun is plural (blue and green bars), as they do in the control experiments. Crucially, when the second noun is singular, the plural expectation is significantly negative for all models (save for the French LSTM (FTB) pl_and_sg condition). Turning to gender agreement, only the LSTM (frWaC) model shows differentiation between the four conditions tested. However, whereas the f_and_m and m_and_f gender expectations are not significantly different from zero in the control condition, in the critical condition they pattern with the purely masculine and purely feminine conditions, indicating that, in this syntactic context, the model has successfully learned to base gender expectation solely off of the second noun.", "These results are inconsistent with a simple `bag of features' strategy that is insensitive to local syntactic context. They indicate that both models can interpret the same string as either a coordinated noun phrase, or as an NP object and the start of a coordinated VP with the second NP as its subject." 
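The contrast between the control and critical settings can be summarized as a tiny decision rule; the helper below simply encodes the expectation described above (an illustration of the design logic, not analysis code from the paper).

```python
def expected_sign(condition: str, critical: bool) -> int:
    """Expected sign of the plural expectation (+1 = plural verb preferred,
    -1 = singular preferred) for and-coordination conditions such as
    'pl_and_sg'. In the control (embedded-subject) setting the whole CoordNP
    is the subject, so 'and' licenses a plural verb; in the critical setting
    only the noun right before the verb can be its subject."""
    first, _, second = condition.split("_")
    if critical:
        return 1 if second == "pl" else -1
    return 1

assert expected_sign("pl_and_sg", critical=False) == 1
assert expected_sign("pl_and_sg", critical=True) == -1
```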
], [ "In addition to using phrase-level features to drive expectation about downstream lexical items, human processors can do the inverse—use lexical features to drive expectations about upcoming syntactic chunks. In this experiment, we assess whether neural models use number features hosted on a verb to modulate their expectations for upcoming CoordNPs.", "To assess whether neural language models learn inverted coordination rules, we adapted items from Section SECREF13 in both English (37 items) and French (24 items), following the paradigm in Table TABREF24. The first part of the phrase contains either a plural or singular verb and a plural or singular noun. In this case, we sample the surprisal for the continuations and (or is grammatical in all conditions, so it is omitted from this study). Our expectation is that `and' is less surprising in the Vpl_Nsg condition than in the Vsg_Nsg condition, where a CoordNP is not licensed by the grammar in either French or English (as in *What is the pig and the cat eating?). We also expect lower surprisal for and in the Vpl_Nsg condition, where it is obligatory for a grammatical continuation, than in the Vpl_Npl condition, where it is optional.", "For French experimental items, the question is embedded into a sentential-complement taking verb, following Example SECREF6, due to the fact that unembedded subject-verb inverted questions sound very formal and might be relatively rare in the training data.", ". Je me demande où vont le maire et", "I myself ask where go.3PL the.MSG mayor.MSG and", "The results for both languages are shown in Figure FIGREF25, with the surprisal at the coordinator on the y-axis and the various conditions on the x-axis. No model in either language shows a signficant difference in surprisal between the Vpl_Nsg and Vpl_Npl conditions or between the Vpl_Nsg and Vsg_Nsg conditions. The LSTM (1B) shows significant difference between the Vpl_Nsg and Vpl_Npl conditions, but in the opposite direction than expected, with the coordinator less surprising in the latter condition. These results indicate that the models are unable to use the fine-grained context sensitivity to drive expectations for CoordNPs, at least in the inversion setting." ], [ "The experiments presented here extend and refine a line of research investigating what linguistic knowledge is acquired by neural language models. Previous studies have demonstrated that sequential models trained on a simple regime of optimizing the next word can learn long-distance syntactic dependencies in impressive detail. Our results provide complimentary insights, demonstrating that a range of model architectures trained on a variety of datasets can learn fine-grained information about the interaction of CoordNPs and local syntactic context, but their behavior remains unhumanlike in many key ways. Furthermore, to our best knowledge, this work presents the first psycholinguistic analysis of neural language models trained on French, a high-resource language that has so far been under-investigated in this line of research.", "In the simple coordination experiment, we demonstrated that models were able to capture some of the agreement behaviors of humans, although their performance deviated in crucial aspects. Whereas human behavior is best modeled as a `percolation' process, the neural models appear to be using a linear combination of NP constituent number to drive CoordNP/verb number agreement, with the second noun weighted more heavily than the first. 
In these experiments, supervision afforded by the RNNG and ActionLSTM models did not translate into more robust or humanlike learning outcomes. The complex coordination experiments provided evidence that the neural models tested were not using a simple `bag of features' strategy, but were sensitive to syntactic context. All models tested were able to interpret material that had similar surface form in ways that corresponded to two different tree-structural descriptions, based on local context. The inverted coordination experiment provided a contrasting example, in which models were unable to modulate expectations based on subtleties in the syntactic environment.", "Across all our experiments, the French models performed consistently better on subject/verb number agreement than on subject/predicate gender agreement. Although there are likely more examples of subject/verb number agreement in the French training data, gender agreement is syntactically mandated and widespread in French. It remains an open question why all but one of the models tested were unable to leverage the numerous examples of gender agreement seen in various contexts during training to drive correct subject/predicate expectations." ], [ "This project is supported by a grant of Labex EFL ANR-10-LABX-0083 (and Idex ANR-18-IDEX-0001) for AA and MIT–IBM AI Laboratory and the MIT–SenseTimeAlliance on Artificial Intelligence for RPL. We would like to thank the anonymous reviewers for their comments and Anne Abeillé for her advice and feedback." ], [ "This section further investigates the effects of CoordNP annotation schemes on the behaviors of structurally-supervised models. We test whether an explicit COORD phrasal tag improves model performance. We trained two additional RNNG models on 38,546 sentences from the Penn Treebank annotated with two different schemes: The first, RNNG (PTB-control) was trained with the original Penn Treebank annotation. The second, RNNG (PTB-coord), was trained on the same sentences, but with an extended coordination annotation scheme, meant to employ the scheme employed in the FTB, adapted from BIBREF26. We stripped empty categories from their scheme and only kept the NP-COORD label for constituents inside a coordination structure. Figure FIGREF26 illustrates the detailed annotation differences between two datasets. We tested both models on all the experiments presented in Sections SECREF3-SECREF6 above.", "Turning to the results of these six experiments: We see little difference between the two models in the Non-coordination agreement experiment. For the Complex coordination control and Complex coordination critical experiments, both models are largely the same as well. However, in the Simple and-coordination and Simple or-coordination experiments the values for all conditions are shifted upwards for the RNNG PTB-coord model, indicating higher over-all preference for the plural continuation. Furthermore, the range of values is reduced in the RNNG PTB-coord model, compared to the RNNG PTB-control model. These results indicate that adding an explicit COORD phrasal label does not drastically change model performance: Both models still appear to be using a linear combination of number features to drive plural vs. singular expectation. However, the explicit representation has made the interior of the coordination phrase more opaque to the model (each feature matters less) and has slightly shifted model preference towards plural continuations. 
In this sense, the PTB-coord model may have learned a generalization about CoordNPs, but this generalization remains unlike the ones learned by humans." ], [ "We present statistics of subject/predicate agreement patterns in the Penn Treebank (PTB) and French Treebank (FTB) in Table TABREF28 and TABREF29." ] ] }
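To make the annotation-scheme manipulation in the appendix concrete, the sketch below relabels the conjunct NPs inside a flat PTB-style coordination NP with an NP-COORD tag. It is a simplified illustration based on the description above, not the authors' actual conversion script, and the bracketing example is invented.

```python
from nltk import Tree

def extend_coord_annotation(tree):
    """Inside any NP that directly dominates a coordinating conjunction (CC),
    relabel the conjunct NPs as NP-COORD, approximating the PTB-coord scheme."""
    if not isinstance(tree, Tree):
        return tree
    children = [extend_coord_annotation(child) for child in tree]
    has_cc = any(isinstance(c, Tree) and c.label() == "CC" for c in children)
    if tree.label() == "NP" and has_cc:
        children = [Tree("NP-COORD", list(c))
                    if isinstance(c, Tree) and c.label() == "NP" else c
                    for c in children]
    return Tree(tree.label(), children)

flat = Tree.fromstring(
    "(NP (NP (DT the) (NN star)) (CC and) (NP (DT the) (NN moon)))")
print(extend_coord_annotation(flat))
# (NP (NP-COORD (DT the) (NN star)) (CC and) (NP-COORD (DT the) (NN moon)))
```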
{ "question": [ "What is the performance achieved by the model described in the paper?", "What is the best performance achieved by supervised models?", "What is the size of the datasets employed?", "What are the baseline models?" ], "question_id": [ "946676f1a836ea2d6fe98cb4cfc26b9f4f81984d", "3b090b416c4ad7d9b5b05df10c5e7770a4590f6a", "a1e07c7563ad038ee2a7de5093ea08efdd6077d4", "a1c4f9e8661d4d488b8684f055e0ee0e2275f767" ], "nlp_background": [ "two", "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "4a934bcefb1de58118f472143fbee7ad933239e4" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "d0f75ed9b2fba743797a70db77076fda241b1029" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "(about 4 million sentences, 138 million word tokens)", "one trained on the Billion Word benchmark" ], "yes_no": null, "free_form_answer": "", "evidence": [ "are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of input word embedding and LSTM hidden layer of both models as 256.", "We also compare LSTM language models trained on large corpora. We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15. We set the size of the input embeddings and hidden layers to 400 for the LSTM (frWaC) model since it is trained on a large dataset." ], "highlighted_evidence": [ "The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13.", "We incorporate two pretrained English language models: one trained on the Billion Word benchmark (referred as `LSTM (1B)') from BIBREF14, and the other trained on English Wikipedia (referred as `LSTM (enWiki)') from BIBREF3. For French, we trained a large LSTM language model (referred as `LSTM (frWaC)') on a random subset (about 4 million sentences, 138 million word tokens) of the frWaC dataset BIBREF15." 
] } ], "annotation_id": [ "0779f8ebeeb399bdf0300bd70e072d054d56bb77" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Recurrent Neural Network (RNN)", "ActionLSTM", "Generative Recurrent Neural Network Grammars (RNNG)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Methods ::: Models Tested ::: Recurrent Neural Network (RNN) Language Models", "are trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10. We trained two two-layer recurrent neural language models with long short-term memory architecture BIBREF11 on a relatively small corpus. The first model, referred as `LSTM (PTB)' in the following sections, was trained on the sentences from Penn Treebank BIBREF12. The second model, referred as `LSTM (FTB)', was trained on the sentences from French Treebank BIBREF13. We set the size of input word embedding and LSTM hidden layer of both models as 256.", "Methods ::: Models Tested ::: ActionLSTM", "models the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16. The action space consists of three possibilities: open a new non-terminal node and opening bracket; generate a terminal node; and close a bracket. To compute surprisal values for a given token, we approximate $P(w_i|w_{1\\cdots i-1)}$ by marginalizing over the most-likely partial parses found by word-synchronous beam search BIBREF17.", "Methods ::: Models Tested ::: Generative Recurrent Neural Network Grammars (RNNG)", "jointly model the word sequence as well as the underlying syntactic structure BIBREF18. Following BIBREF19, we estimate surprisal using word-synchronous beam search BIBREF17. We use the same hyper-parameter settings as BIBREF18." ], "highlighted_evidence": [ " Recurrent Neural Network (RNN) Language Models\nare trained to output the probability distribution of the upcoming word given a context, without explicitly representing the structure of the context BIBREF9, BIBREF10.", "ActionLSTM\nmodels the linearized bracketed tree structure of a sentence by learning to predict the next action required to construct a phrase-structure parse BIBREF16.", "Generative Recurrent Neural Network Grammars (RNNG)\njointly model the word sequence as well as the underlying syntactic structure BIBREF18." ] } ], "annotation_id": [ "d41b72390217ee11786addfef87cf8b0ec200264" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Subject-verb agreement with (a) the head of a noun phrase structure, and (b) the coordination structure.", "Table 1: A summary of models tested.", "Table 2: Conditions of number agreement in Noncoordination Agreement experiment.", "Table 3: Conditions of gender agreement in Noncoordination Agreement experiment.", "Table 4: Conditions of number agreement in Simple Coordination experiment.", "Figure 2: Non-Coordination Agreement experiments for English (number) and French (number and gender).", "Figure 3: Comparison of models’ expectation preferences for singular vs. plural predicate in English and French Simple Coordination experiments.", "Table 5: Conditions for the and-coordination experiment. (Items for or-coordination are the same except that we change the coordinator to ou.)", "Figure 4: Comparison of models’ expectation preferences for Feminine v.s. Masculine predicative adjectives in French.", "Table 6: Conditions of number agreement in Complex Coordination Control experiment.", "Figure 5: Comparison of model’s expectation preferences in the Complex Coordination Control experiments.", "Figure 6: Comparison of model’s expectation preferences in the Complex Coordination Critical experiments.", "Table 7: Conditions of number agreement in Complex Coordination Critical experiment.", "Table 8: Conditions in Inverted Coordination experiment.", "Figure 7: Comparison of models’ surprisals of andcoordination in Inverted Coordination experiment.", "Figure 8: Comparison of annotation schemes of coordination structure.", "Table 9: Frequency of number agreement patterns in PTB and FTB.", "Table 10: Frequency of gender agreement patterns in FTB.", "Figure 9: Comparison between RNNGs trained on PTB data with original annotation vs. fine-grained annotation of coordination structure." ], "file": [ "1-Figure1-1.png", "2-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "4-Table4-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "5-Table5-1.png", "6-Figure4-1.png", "6-Table6-1.png", "7-Figure5-1.png", "7-Figure6-1.png", "8-Table7-1.png", "8-Table8-1.png", "9-Figure7-1.png", "11-Figure8-1.png", "11-Table9-1.png", "11-Table10-1.png", "12-Figure9-1.png" ] }
1809.07629
Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation
Natural language generation (NLG) is a critical component in spoken dialogue systems and can be divided into two phases: (1) sentence planning: deciding the overall sentence structure, and (2) surface realization: determining specific word forms and flattening the sentence structure into a string. With the rise of deep learning, most modern NLG models are based on a sequence-to-sequence (seq2seq) model, which basically contains an encoder-decoder structure; these NLG models generate sentences from scratch by jointly optimizing sentence planning and surface realization. However, such a simple encoder-decoder architecture usually fails to generate complex and long sentences, because the decoder has difficulty learning all grammar and diction knowledge well. This paper introduces an NLG model with a hierarchical attentional decoder, where the hierarchy focuses on leveraging linguistic knowledge in a specific order. The experiments show that the proposed method significantly outperforms the traditional seq2seq model with a smaller model size, and the design of the hierarchical attentional decoder can be applied to various NLG systems. Furthermore, different generation strategies based on linguistic patterns are investigated and analyzed in order to guide future NLG research work.
{ "section_name": [ "Introduction", "Hierarchical Natural Language Generation (HNLG)", "Attentional Hierarchical Decoder", "Scheduled Sampling", "Curriculum Learning", "Repeat-Input Mechanism", "Attention Mechanism", "Training", "Setup", "Results and Analysis", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Spoken dialogue systems that can help users to solve complex tasks have become an emerging research topic in artificial intelligence and natural language processing areas BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions. Today, there are several virtual intelligent assistants, such as Apple's Siri, Google's Home, Microsoft's Cortana, and Amazon's Alexa, in the market. A typical dialogue system pipeline can be divided into several parts: a recognized result of a user's speech input is fed into a natural language understanding module (NLU) to classify the domain along with domain-specific intents and fill in a set of slots to form a semantic frame BIBREF4 , BIBREF5 , BIBREF6 . A dialogue state tracking (DST) module predicts the current state of the dialogue by means of the semantic frames extracted from multi-turn conversations. Then the dialogue policy determines the system action for the next step given the current dialogue state. Finally the semantic frame of the system action is then fed into a natural language generation (NLG) module to construct a response utterance to the user BIBREF7 , BIBREF8 .", "As a key component to a dialogue system, the goal of NLG is to generate natural language sentences given the semantics provided by the dialogue manager to feedback to users. As the endpoint of interacting with users, the quality of generated sentences is crucial for better user experience. The common and mostly adopted method is the rule-based (or template-based) method BIBREF9 , which can ensure the natural language quality and fluency. In spite of robustness and adequacy of the rule-based methods, frequent repetition of identical, tedious output makes talking to a template-based machine unsatisfactory. Furthermore, scalability is an issue, because designing sophisticated rules for a specific domain is time-consuming BIBREF10 .", "Recurrent neural network-based language model (RNNLM) have demonstrated the capability of modeling long-term dependency in sequence prediction by leveraging recurrent structures BIBREF11 , BIBREF12 . Previous work proposed an RNNLM-based NLG that can be trained on any corpus of dialogue act-utterance pairs without hand-crafted features and any semantic alignment BIBREF13 . The following work based on sequence-to-sequence (seq2seq) further obtained better performance by employing encoder-decoder structure with linguistic knowledge such as syntax trees BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . However, due to grammar complexity and lack of diction knowledge, it is still challenging to generate long and complex sentences by a simple encoder-decoder structure.", "To address the issue, previous work attempted separating decoding jobs in a decoding hierarchy, which is constructed in terms of part-of-speech (POS) tags BIBREF8 . The original single decoding process is separated into a multi-level decoding hierarchy, where each decoding layer generates words associated with a specific POS set. This paper extends the idea to a more flexible design by incorporating attention mechanisms into the decoding hierarchy. 
Because prior work designs the decoding hierarchy in a hand-crafted manner based on a subjective intuition BIBREF8 , in this work, we experiment with various generating hierarchies to investigate the importance of linguistic pattern ordering in hierarchical language generation. The experiments show that our proposed method outperforms the classic seq2seq model with a smaller model size; in addition, the concept of the hierarchical decoder is proven general enough for various generating hierarchies. Furthermore, this paper also provides design guidelines and insights for designing the decoding hierarchy." ], [ "The framework of the proposed hierarchical NLG model is illustrated in Figure FIGREF2 , where the model architecture is based on an encoder-decoder (seq2seq) structure with attentional hierarchical decoders BIBREF14 , BIBREF15 . In the encoder-decoder architecture, a typical generation process includes encoding and decoding phases: First, a given semantic representation sequence INLINEFORM0 is fed into an RNN-based encoder to capture the temporal dependency and project the input to a latent feature space; the semantic representation sequence is also encoded into a one-hot representation as the initial state of the encoder in order to maintain the temporal-independent condition as shown in the left part of Figure FIGREF2 . The recurrent unit of the encoder is a bidirectional gated recurrent unit (GRU) BIBREF14 , DISPLAYFORM0", "Then the encoded semantic vector, INLINEFORM0 , is fed into an RNN-based decoder as the initial state to decode word sequences, as shown in the right part of Figure FIGREF2 ." ], [ "In spite of the intuitive and elegant design of the seq2seq model, it is still difficult to generate complex and decent sequences with a simple encoder-decoder structure, because a single decoder is not capable of learning all diction, grammar, and other related linguistic knowledge at the same time. Some prior work applied additional techniques such as rerankers and beam search to select a better result among multiple generated sequences BIBREF13 , BIBREF16 . However, it is still an unsolved issue for the NLG community.", "Therefore, we propose a hierarchical decoder to address the above issue, where the core idea is to allow the decoding layers to focus on learning different types of patterns instead of learning all relevant knowledge together. The hierarchical decoder is composed of several decoding layers, each of which is only responsible for learning a portion of the required knowledge. Namely, the linguistic knowledge can be incorporated into the decoding process and divided into several subsets.", "We use part-of-speech (POS) tags as the additional linguistic features to construct the decoding hierarchy in this paper, where the POS tags of the words in the target sentence are separated into several subsets, and each layer is responsible for decoding the words associated with a specific set of POS patterns. An example is shown in the right part of Figure FIGREF2 , where the first layer at the bottom is in charge of decoding nouns, pronouns, and proper nouns, and the second layer is for verbs, and so on. The prior work manually designed the decoding hierarchy by considering the subjective intuition about how children learn to speak BIBREF8 : infants first learn to say keywords, which are often nouns. For example, when an infant says “Daddy, toilet.”, it actually means “Daddy, I want to go to the toilet.”. 
As they grow older, children learn more grammar and vocabulary and then start adding verbs to the sentences, further adding adverbs, and so on. However, the hand-crafted linguistic order may not be optimal, so we experiment with and analyze the model on various generating linguistic hierarchies to deeply investigate the effect of linguistic pattern ordering.", "In the hierarchical decoder, the initial state of each GRU-based decoding layer INLINEFORM0 is the extracted feature INLINEFORM1 from the encoder, and the input at every step is the last predicted token INLINEFORM2 concatenated with the output from the previous layer INLINEFORM3 , DISPLAYFORM0", "where INLINEFORM0 is the INLINEFORM1 -th hidden state of the INLINEFORM2 -th GRU decoding layer and INLINEFORM3 is the INLINEFORM4 -th outputted word in the INLINEFORM5 -th layer. We use the cross entropy loss as our training objective for optimization, where the difference between the predicted distribution and the target distribution is minimized. To facilitate training and improve the performance, several strategies including scheduled sampling, a repeat-input mechanism, curriculum learning, and an attention mechanism are utilized." ], [ "Teacher forcing BIBREF18 is a strategy for training RNNs whose input at each step would otherwise be the model output from the prior time step; it works by instead using the expected output at the current time step INLINEFORM0 as the input at the next time step, rather than the output generated by the network. The teacher forcing technique can also be triggered only with a certain probability, which is known as the scheduled sampling approach BIBREF19 . We adopt scheduled sampling methods in our experiments. In the proposed framework, the input of a decoder contains not only the output from the last step but also the output from the previous decoding layer. Therefore, we design two types of scheduled sampling approaches – inner-layer and inter-layer.", "Inner-layer scheduled sampling is the classic teacher forcing strategy: DISPLAYFORM0", "Inter-layer scheduled sampling uses the labels instead of the actual output tokens of the previous layer: DISPLAYFORM0" ], [ "The proposed hierarchical decoder consists of several decoding layers, where the expected output sequences of upper layers are longer than those of the lower layers. The framework is suitable for applying curriculum learning BIBREF20 , whose core concept is that a curriculum of progressively harder tasks could significantly accelerate a network’s training. The training procedure is to train each decoding layer for some epochs from the bottommost layer to the topmost one." ], [ "The concept of hierarchical decoding is to generate the sequence hierarchically, gradually adding words associated with different linguistic patterns. Therefore, the generated sequences from the decoders become longer as the generating process proceeds to the higher decoding layers, and the sequence generated by an upper layer should contain the words predicted by the lower layers. To facilitate this behavior, previous work designs a strategy that repeats the outputs from the previous layer as inputs until the current decoding layer outputs the same token, the so-called repeat-input mechanism BIBREF8 . This approach offers at least two merits: (1) Repeating inputs tells the decoder that the repeated tokens are important, encouraging the decoder to generate them. 
(2) If the expected output sequence of a layer is much shorter than the one of the next layer, the large difference in length becomes a critical issue of the hierarchical decoder, because the output sequence of a layer will be fed into the next layer. With the repeat-input mechanism, the impact of length difference can be mitigated." ], [ "In order to model the relationship between layers in a generating hierarchy, we further design attention mechanisms for the hierarchical decoder. The proposed attention mechanisms are content-based, which means the weights are determined based on hidden states of neural models: DISPLAYFORM0 ", "where INLINEFORM0 is the hidden state at the current step, INLINEFORM1 are the hidden states from the previous decoder layer, and INLINEFORM2 is a learned weight matrix. At each decoding step, attention values INLINEFORM3 are calculated by these methods and then used to compute the weighted sum as a context vector, which is then concatenated to decoder inputs as additional information." ], [ "The objective of the proposed model is to optimize the conditional probability INLINEFORM0 , so that the difference between the predicted distribution and the target distribution, INLINEFORM1 , can be minimized: DISPLAYFORM0 ", "where INLINEFORM0 is the number of samples and the labels INLINEFORM1 are the word labels. Each decoder in the hierarchical NLG is trained based on curriculum learning with the objective." ], [ "The E2E NLG challenge dataset BIBREF21 is utilized in our experiments, which is a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. As shown in Figure FIGREF2 , the inputs are semantic frames containing specific slots and corresponding values, and the outputs are the associated natural language utterances with the given semantics. For example, a semantic frame with the slot-value pairs “name[Bibimbap House], food[English], priceRange[moderate], area [riverside], near [Clare Hall]” corresponds to the target sentence “Bibimbap House is a moderately priced restaurant who's main cuisine is English food. You will find this local gem near Clare Hall in the Riverside area.”.", "The data preprocessing includes trimming punctuation marks, lemmatization, and turning all words into lowercase. To prepare the labels of each layer within the hierarchical structure of the proposed method, we utilize spaCy toolkit to perform POS tagging for the target word sequences. Some properties such as names of restaurants are delexicalized (for example, replaced with a symbol “RESTAURANT_NAME”) to avoid data sparsity. In our experiments, we perform six different generating linguistic orders, in which each hierarchy is constructed based on different permutations of the POS tag sets: (1) nouns, proper nouns, and pronouns (2) verbs (3) adjectives and adverbs (4) others.", "The probability of activating inter-layer and inner-layer teacher forcing is set to 0.5, the probability of teacher forcing is attenuated every epoch, and the decaying ratio is 0.9. The models are trained for 20 training epochs without early stop; when curriculum learning is applied, only the first layer is trained during first five epochs, the second decoder layer starts to be trained at the sixth epoch, and so on. To evaluate the quality of the generated sequences regarding both precision and recall, the evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references BIBREF22 ." 
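Since the equations themselves only appear as placeholders above (the INLINEFORM/DISPLAYFORM tokens), here is a hedged PyTorch sketch of one decoding layer together with the inner-/inter-layer scheduled sampling described earlier. Layer sizes follow the experimental setup (hidden size 100), but the module structure, the alignment between layers, and the sampling bookkeeping are assumptions for illustration rather than the authors' implementation.

```python
import random
import torch
import torch.nn as nn

class HierarchicalDecoderLayer(nn.Module):
    """One GRU decoding layer: its input at each step is the embedding of the
    layer's last predicted token concatenated with the embedding of the
    aligned token coming from the previous (lower) layer."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRUCell(2 * emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def step(self, prev_token, lower_token, hidden):
        x = torch.cat([self.embed(prev_token), self.embed(lower_token)], dim=-1)
        hidden = self.gru(x, hidden)
        return self.out(hidden), hidden

def decode_layer(layer, enc_state, lower_pred, lower_gold=None, targets=None,
                 p_inner=0.5, p_inter=0.5, bos_id=1):
    """Greedy decoding for one layer. `enc_state` (batch x hidden) initializes
    the layer, `lower_pred`/`lower_gold` hold predicted/gold tokens from the
    previous layer (already length-aligned, e.g. via the repeat-input
    mechanism), and `targets` holds this layer's gold tokens when training."""
    hidden = enc_state
    prev = torch.full((enc_state.size(0),), bos_id, dtype=torch.long)
    logits_all, preds = [], []
    for t in range(lower_pred.size(1)):
        # Inter-layer scheduled sampling: with probability p_inter, use the
        # previous layer's labels instead of its predictions.
        use_gold_lower = lower_gold is not None and random.random() < p_inter
        lower_tok = (lower_gold if use_gold_lower else lower_pred)[:, t]
        logits, hidden = layer.step(prev, lower_tok, hidden)
        pred = logits.argmax(dim=-1)
        # Inner-layer scheduled sampling: classic teacher forcing with
        # probability p_inner.
        use_gold_inner = targets is not None and random.random() < p_inner
        prev = targets[:, t] if use_gold_inner else pred
        logits_all.append(logits)
        preds.append(pred)
    return torch.stack(logits_all, dim=1), torch.stack(preds, dim=1)
```

Cross-entropy training and the curriculum schedule (training only the bottom layer for the first few epochs, then adding one layer at a time) would wrap around calls like this one.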
], [ "In the experiments, we borrow the idea of hierarchical decoding proposed by the previous work BIBREF8 and investigate various extensions of generating hierarchies. To examine the effectiveness of hierarchical decoders, we control our model size to be smaller than the baseline's. Specifically, the decoder in the baseline seq2seq model has hidden layers of size 400, while our models with hierarchical decoders have four decoding layers of size 100 for fair comparison.", "Table TABREF13 compares the performance between a baseline and proposed models with different generating linguistic orders. For all generating hierarchies with different orders, simply replacing the decoder by a hierarchical decoder achieves significant improvement in every evaluation metrics; for example, the topmost generating hierarchy in Table TABREF13 has 49.25% improvement in BLEU, 30.03% in ROUGE-1, 96.48% in ROUGE-2, and 25.99% in ROUGE-L respectively. In other words, separating the generation process into several phases is proven to be a promising method. Performing curriculum learning strategy offers a considerable improvement, take the topmost generating hierarchy in Table TABREF13 for example, this method yields a 102.07% improvement in BLEU, 48.26% in ROUGE-1, 144.8% in ROUGE-2, and 39.18% in ROUGE-L. Despite that applying repeat-input mechanism alone does not offer benefit, combining these two strategies together further achieves the best performance. Note that these methods do not require any additional parameters.", "Unfortunately, even some of the attentional hierarchical decoders achieve the best results in the generating hierarchies (Table TABREF18 ). Mostly, the additional attention mechanisms are not capable of bringing benefit for model performance. The reason may be that the decoding process is designed for gradually importing words in the specific set of linguistic patterns to the output sequence, each decoder layer is responsible of copying the output tokens from the previous layer and insert new words into the sequence precisely. Because of this nature, a decoder needs explicit information of the structure of a sentence rather than implicit high-level latent information. For instance, when a decoder is trying to insert some Verb words into the output sequence, knowing the position of subject and object would be very helpful.", "The above results show that among these six different generating hierarchy, the generating order: (1) verbs INLINEFORM0 (2) nouns, proper nouns, and pronouns INLINEFORM1 (3) adjectives and adverbs INLINEFORM2 (4) the other POS tags yields the worst performance. Table TABREF23 shows that the gap of average length of target sequences between the first and the second decoder layer is the largest among all the hierarchies; in average, the second decoder needs to insert up to 8 words into the sequence based on 3.62 words from the first decoder layer in this generation process, which is absolutely difficult. The essence of the hierarchical design is to separate the job of the decoder into several phases; if the job of each phase is balanced, it is intuitive that it is more suitable for applying curriculum learning and improve the model performance.", "The model performance is also related to linguistic structures of sentences: the fifth and the sixth generating hierarchies in Table TABREF13 have very similar trends, where the length of target sentences of each decoder layer is almost identical as shown in Table TABREF23 . However, the model performance differs a lot. 
An adverb can modify anything but nouns and pronouns, which means that the number of adverbs used to modify verbs is also a factor in determining the generating order. In our case, almost all adverbs in the dataset are used to describe adjectives, indicating that generating verbs before inserting adverbs into sequences may not provide enough useful information; instead, it may even obstruct model learning. We also find that, in all experiments, inserting adverbs before verbs performs better.", "In summary, the concept of the hierarchical decoder is simple and useful: separating a difficult job into several phases is demonstrated to be a promising direction that is not limited to a specific generating hierarchy. Furthermore, the generating linguistic order should be determined based on the dataset; the important factors for designing a proper generating hierarchy in NLG include the distribution of subsequence lengths and the linguistic nature of the dataset." ], [ "This paper investigates a seq2seq-based model with a hierarchical decoder that leverages various linguistic patterns. The experiments on different generating linguistic orders demonstrate the generality of the proposed hierarchical decoder, which is not limited to a specific generating hierarchy. However, there is no universal decoding hierarchy, and the main factor for designing a suitable generating order is the nature of the dataset." ], [ "We would like to thank the reviewers for their insightful comments on the paper. This work was financially supported by the Ministry of Science and Technology (MOST) in Taiwan." ] ] }
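As a rough illustration of the staged training schedule reported in the experiments above (curriculum learning over decoder layers plus decayed teacher forcing), here is a minimal sketch; the per-epoch layer-unlocking rule after the first five epochs and the training call are assumptions, since the text only states that the second layer starts at the sixth epoch.

```python
import random

NUM_LAYERS = 4            # one decoder layer per POS-tag subset
EPOCHS = 20
TEACHER_FORCING_P = 0.5   # initial inter-/inner-layer teacher-forcing probability
DECAY = 0.9               # attenuation ratio applied every epoch

def layers_to_train(epoch):
    # Curriculum schedule: only layer 1 for the first five epochs, layer 2 from the
    # sixth epoch on; unlocking one further layer every five epochs is our assumption.
    return min(NUM_LAYERS, 1 + epoch // 5)

p = TEACHER_FORCING_P
for epoch in range(EPOCHS):
    for layer in range(layers_to_train(epoch)):
        use_teacher_forcing = random.random() < p  # sampled per decoding step in practice
        # train_decoder_layer(layer, use_teacher_forcing)  # placeholder for the real update
    p *= DECAY  # teacher forcing is attenuated after every epoch
```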
{ "question": [ "What evaluation metrics are used?", "What datasets did they use?" ], "question_id": [ "c5171daf82107fce0f285fa18f19e91fbd1215c5", "baeb6785077931e842079e9d0c9c9040947ffa4e" ], "nlp_background": [ "", "" ], "topic_background": [ "", "" ], "paper_read": [ "", "" ], "search_query": [ "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "the evaluation metrics include BLEU and ROUGE (1, 2, L) scores" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The probability of activating inter-layer and inner-layer teacher forcing is set to 0.5, the probability of teacher forcing is attenuated every epoch, and the decaying ratio is 0.9. The models are trained for 20 training epochs without early stop; when curriculum learning is applied, only the first layer is trained during first five epochs, the second decoder layer starts to be trained at the sixth epoch, and so on. To evaluate the quality of the generated sequences regarding both precision and recall, the evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references BIBREF22 ." ], "highlighted_evidence": [ "o evaluate the quality of the generated sequences regarding both precision and recall, the evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references BIBREF22 ." ] } ], "annotation_id": [ "41933f516cd2eabe497f50d677dc63f2d2da863b" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The E2E NLG challenge dataset BIBREF21" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The E2E NLG challenge dataset BIBREF21 is utilized in our experiments, which is a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. As shown in Figure FIGREF2 , the inputs are semantic frames containing specific slots and corresponding values, and the outputs are the associated natural language utterances with the given semantics. For example, a semantic frame with the slot-value pairs “name[Bibimbap House], food[English], priceRange[moderate], area [riverside], near [Clare Hall]” corresponds to the target sentence “Bibimbap House is a moderately priced restaurant who's main cuisine is English food. You will find this local gem near Clare Hall in the Riverside area.”." ], "highlighted_evidence": [ "The E2E NLG challenge dataset BIBREF21 is utilized in our experiments, which is a crowd-sourced dataset of 50k instances in the restaurant domain. Our models are trained on the official training set and verified on the official testing set. " ] } ], "annotation_id": [ "d426bcf965dfa28f313217b06b4e8dbe71847fab" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Fig. 1. The illustration of the proposed semantically conditioned NLG model. The hierarchical decoder contains four decoder layer, each is only responsible for learning to insert words of a specific set of POS tags into the sequence.", "Table 1. The proposed attentional hierarchical NLG models with various generating linguistic orders.", "Table 2. The proposed hierarchical NLG models with various generating linguistic orders .", "Table 3. The average length of the target sequences for each decoder layer in the training data (left) and testing data (right)." ], "file": [ "2-Figure1-1.png", "4-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png" ] }
1807.05154
Deep Enhanced Representation for Implicit Discourse Relation Recognition
Implicit discourse relation recognition is a challenging task, as predicting relations without explicit connectives in discourse parsing requires understanding the text spans and cannot be done easily from surface features of the input sentence pairs. Thus, properly representing the text is crucial to this task. In this paper, we propose a model augmented with text representations of different granularities, including character, subword, word, sentence, and sentence-pair levels. The proposed deeper model is evaluated on the benchmark treebank and, to the best of our knowledge, is the first to achieve state-of-the-art accuracy greater than 48% in 11-way classification and an $F_1$ score greater than 50% in 4-way classification.
{ "section_name": [ "Introduction", "Related Work", "Overview", "Word-Level Module", "Sentence-Level Module", "Pair-Level Module", "Classifier", "ExperimentsThe code for this paper is available at https://github.com/diccooo/Deep_Enhanced_Repr_for_IDRR", "11-way Classification", "Binary and 4-way Classification", "Conclusion" ], "paragraphs": [ [ " This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/", "Discourse parsing is a fundamental task in natural language processing (NLP) which determines the structure of the whole discourse and identifies the relations between discourse spans such as clauses and sentences. Improving this task can be helpful to many downstream tasks such as machine translation BIBREF0 , question answering BIBREF1 , and so on. As one of the important parts of discourse parsing, implicit discourse relation recognition task is to find the relation between two spans without explicit connectives (e.g., but, so, etc.), and needs recovering the relation from semantic understanding of texts.", "The Penn Discourse Treebank 2.0 (PDTB 2.0) BIBREF2 is a benchmark corpus for discourse relations. In PDTB style, the connectives can be explicit or implicit, and one entry of the data is separated into Arg1 and Arg2, accompanied with a relation sense. Since the release of PDTB 2.0 dataset, many methods have been proposed, ranging from traditional feature-based methods BIBREF3 , BIBREF4 to latest neural-based methods BIBREF5 , BIBREF6 . Especially through many neural network methods used for this task such as convolutional neural network (CNN) BIBREF7 , recursive neural network BIBREF8 , embedding improvement BIBREF9 , attention mechanism BIBREF10 , gate mechanism BIBREF11 , multi-task method BIBREF6 , the performance of this task has improved a lot since it was first introduced. However, this task is still very challenging with the highest reported accuracy still lower than 50% due to the hardness for the machines to understand the text meaning and the relatively small task corpus.", "In this work, we focus on improving the learned representations of sentence pairs to address the implicit discourse relation recognition. It is well known that text representation is the core part of state-of-the-art deep learning methods for NLP tasks, and improving the representation from all perspective will benefit the concerned task.", "The representation is improved by two ways in our model through three-level hierarchy. The first way is embedding augmentation. Only with informative embeddings, can the final representations be better. This is implemented in our word-level module. We augment word embeddings with subword-level embeddings and pre-trained ELMo embeddings. Subwords coming from unsupervised segmentation demonstrate a better consequent performance than characters for being a better minimal representation unit. The pre-trained contextualized word embeddings (ELMo) can make the embeddings contain more contextual information which is also involved with character-level inputs. The second way is a deep residual bi-attention encoder. Since this task is about classifying sentence pairs, the encoder is implemented in sentence and sentence-pair levels. A deeper model can support richer representations but is hard to train, especially with a small dataset. So we apply residual connections BIBREF12 to each module for facilitating signal propagation and alleviating gradient degradation. 
The stacked encoder blocks make the single-sentence representation richer, and the bi-attention module mixes the two sentence representations in a focused way. By introducing richer and deeper representation enhancement, we report the deepest model so far for the task.", "Our representation-enhanced model is evaluated on the benchmark PDTB 2.0 and demonstrates state-of-the-art performance, verifying its effectiveness.", "This paper is organized as follows. Section 2 reviews related work. Section 3 introduces our model. Section 4 shows our experiments and analyses the results. Section 5 concludes this work." ], [ "After the release of the Penn Discourse Treebank 2.0, much work has been done to solve this task. lin-kan-ng:2009:EMNLP is the first work that considered the second-level classification of the task by empirically evaluating the impact of surface features. Feature-based methods BIBREF4 , BIBREF13 , BIBREF14 , BIBREF15 mainly focused on using linguistic or semantic features from the discourse units, or the relations between unit pairs and word pairs. zhang-EtAl:2015:EMNLP4 is the first to model this task using an end-to-end neural network and gained a great performance improvement. Neural network methods were also used by many other works BIBREF16 , BIBREF17 for better performance. Since then, a lot of methods have been proposed. braud2015comparing found that word embeddings trained by neural networks are very useful to this task. qin-zhang-zhao:2016:COLING augmented their system with character-level and contextualized embeddings. Recurrent networks and convolutional networks have been used as basic blocks in many works BIBREF18 , BIBREF19 , BIBREF7 . TACL536 used recursive neural networks. Attention mechanisms were used by liu-li:2016:EMNLP2016, cai2017discourse and others. wu-EtAl:2016:EMNLP2016 and lan-EtAl:2017:EMNLP20172 applied multi-task components. qin-EtAl:2017:Long utilized adversarial nets to migrate connective-based features to implicit ones.", "Sentence representation is a key component in many NLP tasks. Usually, a better representation means better performance. Plenty of work on language modeling has been done, as language modeling can supply better sentence representations. Since the pioneering work of Bengio2006, neural language models have been well developed BIBREF20 , BIBREF21 , BIBREF22 . Sentence representation is directly handled in a series of works. lin2017structured used a self-attention mechanism and a matrix to represent a sentence, and conneau-EtAl:2017:EMNLP2017 used encoders pre-trained on SNLI BIBREF23 and MultiNLI BIBREF24 .", "Different from all the existing work, and for the first time to the best of our knowledge, this work is devoted to an empirical study of different levels of representation enhancement for the implicit discourse relation classification task." ], [ "Figure 1 illustrates an overview of our model, which mainly consists of three parts: a word-level module, a sentence-level module, and a pair-level module. Token sequences of sentence pairs (Arg1 and Arg2) are first encoded by the word-level module, and every token becomes a word embedding augmented by subword and ELMo embeddings. Then these embeddings are fed to the sentence-level module and processed by stacked encoder blocks (CNN or RNN encoder blocks). Every block layer outputs a representation for each token. 
Furthermore, the output of each layer is processed by the bi-attention module in the pair-level module and concatenated into the pair representation, which is finally sent to the classifiers, which are multi-layer perceptrons (MLP) with softmax. The model details are given in the rest of this section." ], [ "An input token sequence of length $N$ is encoded by the word-level module into an embedding sequence $(\\mathbf {e}_1, \\mathbf {e}_2, \\mathbf {e}_3, \\cdots , \\mathbf {e}_N)$ . Each embedded token $\\mathbf {e}_i$ is concatenated from three parts, ", "$$\\mathbf {e}_i = [\\mathbf {e}_i^w;~ \\mathbf {e}_i^s;~ \\mathbf {e}_i^c] \\in \\mathbb {R}^{d_e}$$ (Eq. 4) ", " $\\mathbf {e}_i^w \\in \\mathbb {R}^{d_w}$ is the pre-trained word embedding for this token, and is fixed during the training procedure. Our experiments show that fine-tuning the embeddings slowed down the training without better performance. $\\mathbf {e}_i^s \\in \\mathbb {R}^{d_s}$ is the subword-level embedding encoded by the subword encoder. $\\mathbf {e}_i^c \\in \\mathbb {R}^{d_c}$ is the contextualized word embedding encoded by the pre-trained ELMo encoder, whose parameters are also fixed during training. Subwords are merged from single-character segmentation, and the input of the ELMo encoder is also characters.", "Character-level embeddings have been used widely in many works, and their effectiveness is verified for out-of-vocabulary (OOV) or rare word representation. However, a character is not a natural minimal unit, since there exists word-internal structure; we thus introduce a subword-level embedding instead.", "Subword units can be computationally discovered by unsupervised segmentation over words that are regarded as character sequences. We adopt the byte pair encoding (BPE) algorithm introduced by sennrich-haddow-birch:2016:P16-12 for this segmentation. BPE segmentation relies on a series of iterative merge operations over the bigrams with the highest frequency. The number of merge operations is roughly equal to the resulting subword vocabulary size.", "For each word, the subword-level embedding is encoded by a subword encoder as in Figure 2 . Firstly, the subword sequence (of length $n$ ) of the word is mapped to a subword embedding sequence $(\\mathbf {se}_1, \\mathbf {se}_2, \\mathbf {se}_3, \\cdots , \\mathbf {se}_n)$ (after padding), which is randomly initialized. Then $K$ (we empirically set $K$ =2) convolutional operations $Conv_1, Conv_2, \\cdots , Conv_K$ followed by a max pooling operation are applied to the embedding sequence, and the sequence is padded before the convolutional operation. 
For the $i$ -th convolution kernel $Conv_i$ , suppose the kernel size is $k_i$ ; then the output of $Conv_i$ on the embedding window $\\mathbf {se}_{j}$ to $\\mathbf {se}_{j+k_i-1}$ is $\\mathbf {C}_{j}$ ", "The final output of $Conv_i$ after max pooling is $\\begin{split}\n\\mathbf {u}_i &= \\mathop {maxpool}{(\\mathbf {C}_1,~ \\cdots ,~ \\mathbf {C}_j,~ \\cdots ,~ \\mathbf {C}_n)}\n\\end{split}$ ", "Finally, the $K$ outputs are concatenated, $\n\\mathbf {u} = [\\mathbf {u}_1;~ \\mathbf {u}_2;~ \\cdots ;~ \\mathbf {u}_K] \\in \\mathbb {R}^{d_s}\n$ ", "to feed a highway network BIBREF25 , ", "$$\\mathbf {g} &=& \\sigma (\\mathbf {W}_g \\mathbf {u}^T + \\mathbf {b}_g) \\in \\mathbb {R}^{d_s} \\nonumber \\\\\n\\mathbf {e}_i^s &=& \\mathbf {g} \\odot \\mathop {ReLU}(\\mathbf {W}_h \\mathbf {u}^T + \\mathbf {b}_h)\n+ (\\mathbf {1} - \\mathbf {g}) \\odot \\mathbf {u} \\nonumber \\\\\n&\\in & \\mathbb {R}^{d_s}$$ (Eq. 6) ", "where $\\mathbf {g}$ denotes the gate, and $\\mathbf {W}_g \\in \\mathbb {R}^{d_s \\times d_s}, \\mathbf {b}_g \\in \\mathbb {R}^{d_s},\n\\mathbf {W}_h \\in \\mathbb {R}^{d_s \\times d_s}, \\mathbf {b}_h \\in \\mathbb {R}^{d_s}$ are parameters. $\\odot $ is element-wise multiplication. The above Eq. 6 gives the subword-level embedding for the $i$ -th word.", "ELMo (Embeddings from Language Models) BIBREF26 is a pre-trained contextualized word embedding involving character-level representations. It has been shown useful in several works BIBREF27 , BIBREF28 . This embedding is trained by bidirectional language models on a large corpus using the character sequence of each word token as input. The ELMo encoder employs CNN and highway networks over characters, whose output is given to a multiple-layer biLSTM with residual connections. The output is then a contextualized embedding for each word. It can also be seen as a hybrid encoder for characters, words, and sentences. This encoder can add a lot of contextual information to each word and can ease the model's learning of semantics.", "For the pre-trained ELMo encoder, the output is the result of the last two biLSTM layers. Suppose $\\mathbf {c}_i$ is the character sequence of the $i$ -th word in a sentence; then the encoder output is $\n[\\cdots , \\mathbf {h}_i^0, \\cdots ;~ \\cdots , \\mathbf {h}_i^1, \\cdots ]\n= \\mathop {ELMo}(\\cdots , \\mathbf {c}_i, \\cdots )\n$ ", "where $\\mathbf {h}_i^0$ and $\\mathbf {h}_i^1$ denote the outputs of the first and second layers of the ELMo encoder for the $i$ -th word.", "Following Peters2018ELMo, we use a self-adjusted weighted average of $\\mathbf {h}_i^0, \\mathbf {h}_i^1$ , $\\begin{split}\n\\mathbf {s} &= \\mathop {softmax}(\\mathbf {w}) \\in \\mathbb {R}^2\\\\\n\\mathbf {h} &= \\gamma \\sum _{j=0}^1 s_j \\mathbf {h}_i^j \\in \\mathbb {R}^{d_c^{\\prime }}\n\\end{split}$ ", "where $\\gamma \\in \\mathbb {R}$ and $\\mathbf {w} \\in \\mathbb {R}^2$ are parameters tuned during training and $d_c^{\\prime }$ is the dimension of the ELMo encoder's outputs. Then the result is fed to a feed forward network to reduce its dimension, ", "$$\\mathbf {e}_i^c = \\mathbf {W}_c \\mathbf {h}^T + \\mathbf {b}_c \\in \\mathbb {R}^{d_c}$$ (Eq. 7) ", " $\\mathbf {W}_c \\in \\mathbb {R}^{d_c^{\\prime } \\times d_c}$ and $\\mathbf {b}_c \\in \\mathbb {R}^{d_c}$ are parameters. The above Eq. 7 gives the ELMo embedding for the $i$ -th word." ], [ "The resulting word embeddings $\\mathbf {e}_i$ (Eq. 
4 ) are sent to the sentence-level module. The sentence-level module is composed of stacked encoder blocks. The block in each layer receives the output of the previous layer as input and sends its output to the next layer. It also sends its output to the pair-level module. Parameters in different layers are not shared.", "We consider two encoder types, a convolutional type and a recurrent type. We only use one encoder type in one experiment.", "For the sentence-level module over different arguments (Arg1 and Arg2), many previous works used the same parameters to encode the different arguments, that is, one encoder for both argument types. But as indicated by prasad2008penn, Arg1 and Arg2 may have different semantic perspectives; we thus introduce argument-aware parameter settings for the different arguments.", "Figure 3 shows the convolutional encoder block. Suppose the input for the encoder block is $\\mathbf {x}_i ~ (i=1, \\cdots , N)$ , with $\\mathbf {x}_i \\in \\mathbb {R}^{d_e}$ . The input is sent to a convolutional layer and mapped to the output $\\mathbf {y}_i = [\\mathbf {A}_i \\; \\mathbf {B}_i] \\in \\mathbb {R}^{2d_e}$ . After the convolutional operation, a gated linear unit (GLU) BIBREF29 is applied, i.e., $\n\\mathbf {z}_i = \\mathbf {A}_i \\odot \\sigma (\\mathbf {B}_i) \\in \\mathbb {R}^{d_e}\n$ ", "There is also a residual connection (Res 1) in the block, which means adding the output of $\\mathop {GLU}$ and the input of the block as the final output, so $\\mathbf {z}_i + \\mathbf {x}_i$ is the output of the block corresponding to the input $\\mathbf {x}_i$ . The output $\\mathbf {z}_i + \\mathbf {x}_i$ for all $i = 1, \\cdots , N$ is sent to both the next layer and the pair-level module as input.", "Similar to the convolutional one, the recurrent encoder block is shown in Figure 3 . The input $\\mathbf {x}_i$ is first encoded by a biGRU BIBREF30 layer, $\n\\mathbf {y}_i = \\mathop {biGRU}(\\mathbf {x}_i) \\in \\mathbb {R}^{2d_e}\n$ ", "then this is sent to a feed forward network, ", "$$\\mathbf {z}_i = \\mathbf {W}_r \\mathbf {y}_i^T + \\mathbf {b}_r \\in \\mathbb {R}^{d_e}$$ (Eq. 10) ", " $\\mathbf {W}_r \\in \\mathbb {R}^{2d_e \\times d_e}$ and $\\mathbf {b}_r \\in \\mathbb {R}^{d_e}$ are parameters. There is also a similar residual connection (Res 1) in the block, so $\\mathbf {z}_i + \\mathbf {x}_i$ for all $i = 1, \\cdots , N$ is the final output of the recurrent encoder block." ], [ "Through the sentence-level module, the word representations are contextualized, and these contextualized representations from each layer are sent to the pair-level module.", "Suppose the number of encoder block layers is $l$ , and the outputs of the $j$ -th block layer for Arg1 and Arg2 are $\\mathbf {v}_1^j, \\mathbf {v}_2^j \\in \\mathbb {R}^{N \\times d_e}$ , each row of which is the embedding of the corresponding word. $N$ is the length of the word sequence (sentence). Each sentence is padded or truncated so that all sentences have the same length. They are sent to a bi-attention module, whose attention matrix is $\n\\mathbf {M}_j = (\\mathop {FFN}(\\mathbf {v}_1^j)) {\\mathbf {v}_2^j}^T\n\\in \\mathbb {R}^{N \\times N}\n$ ", " $\\mathop {FFN}$ is a feed forward network (similar to Eq. 10 ) applied to the last dimension corresponding to the word. 
Then the projected representations are $\\begin{split}\n\\mathbf {w}_2^j &= \\mathop {softmax}(\\mathbf {M}_j) {\\mathbf {v}_2^j} \\in \\mathbb {R}^{N \\times d_e}\\\\\n\\mathbf {w}_1^j &= \\mathop {softmax}(\\mathbf {M}_j^T) {\\mathbf {v}_1^j} \\in \\mathbb {R}^{N \\times d_e}\n\\end{split}$ ", "where the $\\mathop {softmax}$ is applied to each row of the matrix. We apply 2-max pooling on each projected representation and concatenate them as output of the $j$ -th bi-attention module $\n\\mathbf {o}_j = [\\mathop {top2}(\\mathbf {w}_1^j);~ \\mathop {top2}(\\mathbf {w}_2^j)]\n\\in \\mathbb {R}^{4 d_e}\n$ ", "The number of max pooling operation (top-2) is selected from experiments and it is a balance of more salient features and less noise. The final pair representation is ", "$$\\mathbf {o} = [\\mathbf {o}_1, \\mathbf {o}_2, \\cdots , \\mathbf {o}_l] \\in \\mathbb {R}^{4 l d_e}$$ (Eq. 12) ", "Since the output is concatenated from different layers and the outputs of lower layers are sent directly to the final representation, this also can be seen as residual connections (Res 2). Then the output as Eq. 12 is fed to an MLP classifier with softmax. The parameters for bi-attention modules in different levels are shared." ], [ "We use two classifiers in our model. One is for relation classification, and another one is for connective classification. The classifier is only a multiple layer perceptron (MLP) with softmax layer. qin-EtAl:2017:Long used adversarial method to utilize the connectives, but this method is not suitable for our adopted attention module since the attended part of a sentence will be distinctly different when the argument is with and without connectives. They also proposed a multi-task method that augments the model with an additional classifier for connective prediction, and the input of it is also the pair representation. It is straightforward and simple enough, and can help the model learn better representations, so we include this module in our model. The implicit connectives are provided by PDTB 2.0 dataset, and the connective classifier is only used during training. The loss function for both classifiers is cross entropy loss, and the total loss is the sum of the two losses, i.e., $Loss = Loss_{relation} + Loss_{connective}$ ." ], [ "Our model is evaluated on the benchmark PDTB 2.0 for two types of classification tasks.", "PDTB 2.0 has three levels of senses: Level-1 Class, Level-2 Type, and Level-3 Subtypes. The first level consists of four major relation Classes: COMPARISON, CONTINGENCY, EXPANSION, and TEMPORAL. The second level contains 16 Types.", "All our experiments are implemented by PyTorch. The pre-trained ELMo encoder is from AllenNLP toolkit BIBREF31 ." ], [ "Following the settings of qin-EtAl:2017:Long, we use two splitting methods of PDTB dataset for comprehensive comparison. The first is PDTB-Lin BIBREF3 , which uses section 2-21, 22 and 23 as training, dev and test sets respectively. The second is PDTB-Ji BIBREF8 , which uses section 2-20, 0-1, and 21-22 as training, dev and test sets respectively. According to TACL536, five relation types have few training instances and no dev and test instance. Removing the five types, there remain 11 second level types. During training, instances with more than one annotated relation types are considered as multiple instances, each of which has one of the annotations. At test time, a prediction that matches one of the gold types is considered as correct. 
All sentences in the dataset are padded or truncated to keep the same 100-word length.", "For the results of both splitting methods, we share some hyperparameters. Table 1 is some of the shared hyperparameter settings. The pre-trained word embeddings are 300-dim word2vec BIBREF32 pre-trained from Google News. So $d_w = 300, d_s = 100, d_c = 300$ , then for the final embedding ( $\\mathbf {e}_i$ ), $d_e = 700$ . For the encoder block in sentence-level module, kernel size is same for every layer. We use AdaGrad optimization BIBREF33 .", "The encoder block layer number is different for the two splitting methods. The layer number for PDTB-Ji splitting method is 4, and the layer number for PDTB-Lin splitting method is 5.", "Compared to other recent state-of-the-art systems in Table 2 , our model achieves new state-of-the-art performance in two splitting methods with great improvements. As to our best knowledge, our model is the first one that exceeds the 48% accuracy in 11-way classification.", "Ablation Study", "To illustrate the effectiveness of our model and the contribution of each module, we use the PTDB-Ji splitting method to do a group of experiments. For the baseline model, we use 4 layer stacked convolutional encoder blocks without the residual connection in the block with only pre-trained word embeddings. We only use the output of the last layer and the output is processed by 2-max pooling without attention and sent to the relation classifier and connective classifier. Without the two residual connections, using 4 layers may be not the best for baseline model but is more convenient to comparison.", "Firstly, we add modules from high level to low level accumulatively to observe the performance improvement. Table 3 is the results, which demonstrate that every module has considerable effect on the performance.", "Then we test the effects of the two residual connections on the performance. The results are in Table 3 . The baseline $^+$ means baseline + bi-attention, i.e., the second row of Table 3 . We find that Res 1 (residual connection in the block) is much more useful than Res 2 (residual connection for pair representation), and they work together can bring even better performance.", "Without ELMo (the same setting as 4-th row in Table 3 ), our data settings is the same as qin-EtAl:2017:Long whose performance was state-of-the-art and will be compared directly. We see that even without the pre-trained ELMo encoder, our performance is better, which is mostly attributed to our better sentence pair representations.", "Subword-Level Embedding For the usefulness of subword-level embedding, we compare its performance to a model with character-level embedding, which was ever used in qin-zhang-zhao:2016:COLING. We use the same model setting as the 4-th row of Table 3 , and then replace subword with character sequence. The subword embedding augmented result is 47.03%, while the character embedding result is 46.37%, which verifies that the former is a better input representation for the task.", "Parameters for Sentence-Level Module As previously discussed, argument specific parameter settings may result in better sentence-level encoders. We use the model which is the same as the third row in Table 3 . If shared parameters are used, the result is 45.97%, which is lower than argument specific parameter settings (46.29%). 
The comparison shows that argument-specific parameter settings indeed capture the difference between argument representations and facilitate the sentence-pair representation.", "Encoder Block Type and Layer Number In section 3.3, we consider two encoder types; here we compare their effects on the model performance. As in the previous part, the model setting is the same as the third row in Table 3 except for the block type and layer number. The results are shown in Figure 4 .", "The results in the figure show that both types may reach a similar level of top accuracy, as word order is not important to the task. We also try to add position information to the convolutional-type encoder and observe a drop in accuracy. This further verifies that order information does not matter too much for the task. For most of the other numbers of layers, the recurrent type performs better, as the number of layers has an impact on the window size of convolutional encoders. When the convolutional type is used, the training procedure is much faster, but choosing a suitable kernel size needs extra effort.", "Bi-Attention", "We visualize the attention weights of one instance in Figure 5 . For lower layers, the attended part is more concentrated. For higher layers, the weights are more evenly spread and the attended part moves towards the sentence border. This is because the window size is bigger for higher layers, and the convolutional kernel may place higher weights on words at the window edge." ], [ "Settings For the first-level classification, we perform both 4-way classification and one-vs-others binary classification. Following the settings of previous works, the dataset splitting method is the same as PDTB-Ji without removing instances. The model uses 5 block layers with kernel size 3; other details are the same as those for 11-way classification on PDTB-Ji.", "Results Table 4 compares results on first-level classification. For binary classification, the result is computed by $F_1$ score (%), and for 4-way classification, the result is computed by macro-average $F_1$ score (%). Our model gives state-of-the-art performance for 4-way classification by providing an $F_1$ score greater than 50% for the first time, to the best of our knowledge." ], [ "In this paper, we propose a deeper neural model augmented by different-grained text representations for implicit discourse relation recognition. These different module levels work together and produce task-related representations of the sentence pair. Our experiments show that the model is effective and achieves state-of-the-art performance. To the best of our knowledge, this is the first time that an implicit discourse relation classifier gives an accuracy higher than 48% for 11-way and an $F_1$ score higher than 50% for 4-way classification tasks." ] ] }
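To make the convolutional encoder block of the sentence-level module above concrete, here is a minimal PyTorch sketch of one block (convolution to double width, GLU gating, and the Res 1 residual connection); the class name, padding choice, and tensor layout are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoderBlock(nn.Module):
    """One sentence-level encoder block: Conv1d -> GLU -> residual (Res 1)."""

    def __init__(self, d_e, kernel_size=3):
        super().__init__()
        # Map d_e channels to 2*d_e so GLU can split them into values A and gates B.
        self.conv = nn.Conv1d(d_e, 2 * d_e, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # x: (batch, seq_len, d_e)
        y = self.conv(x.transpose(1, 2))       # (batch, 2*d_e, seq_len)
        z = F.glu(y, dim=1).transpose(1, 2)    # A * sigmoid(B), back to (batch, seq_len, d_e)
        return z + x                           # residual connection inside the block

block = ConvEncoderBlock(d_e=700)
out = block(torch.randn(2, 100, 700))          # output keeps the input shape
```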
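Similarly, the pair-level bi-attention with 2-max pooling can be sketched as follows; the feed-forward projection is reduced to a single linear layer here, and the names are illustrative rather than the authors' code.

```python
import torch
import torch.nn as nn

class BiAttentionPair(nn.Module):
    """Bi-attention between the Arg1 and Arg2 outputs of one encoder layer."""

    def __init__(self, d_e):
        super().__init__()
        self.ffn = nn.Linear(d_e, d_e)   # FFN applied to Arg1 before scoring

    def forward(self, v1, v2):
        # v1, v2: (N, d_e) word representations of Arg1 and Arg2, padded to the same length N
        M = self.ffn(v1) @ v2.t()                    # (N, N) attention matrix
        w2 = torch.softmax(M, dim=-1) @ v2           # Arg2 projected onto Arg1 positions
        w1 = torch.softmax(M.t(), dim=-1) @ v1       # Arg1 projected onto Arg2 positions
        # 2-max pooling over the sequence dimension, then concatenation: (4 * d_e,)
        pooled = [torch.topk(w, k=2, dim=0).values.reshape(-1) for w in (w1, w2)]
        return torch.cat(pooled)

o_j = BiAttentionPair(d_e=700)(torch.randn(100, 700), torch.randn(100, 700))
# The o_j vectors from all encoder layers are concatenated into the final pair representation.
```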
{ "question": [ "Why does their model do better than prior models?" ], "question_id": [ "bb570d4a1b814f508a07e74baac735bf6ca0f040" ], "nlp_background": [ "infinity" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "better sentence pair representations" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Without ELMo (the same setting as 4-th row in Table 3 ), our data settings is the same as qin-EtAl:2017:Long whose performance was state-of-the-art and will be compared directly. We see that even without the pre-trained ELMo encoder, our performance is better, which is mostly attributed to our better sentence pair representations." ], "highlighted_evidence": [ "Without ELMo (the same setting as 4-th row in Table 3 ), our data settings is the same as qin-EtAl:2017:Long whose performance was state-of-the-art and will be compared directly. We see that even without the pre-trained ELMo encoder, our performance is better, which is mostly attributed to our better sentence pair representations." ] } ], "annotation_id": [ "07bc775ddd83609dbcfab84bd2251bba73fbadde" ], "worker_id": [ "043654eefd60242ac8da08ddc1d4b8d73f86f653" ] } ] }
{ "caption": [ "Figure 1: Model overview.", "Figure 2: Subword encoder.", "Figure 4: Recurrent encoder block.", "Table 1: Shared hyperparameter settings. Before dimension reducing, the dimension of pre-trained ELMo embedding is 1024.", "Table 2: Accuracy (%) comparison with others’ results on PDTB 2.0 test set for 11-way classification.", "Table 3: Accumulatively performance test result.", "Table 4: The effects of residual connections.", "Figure 5: Effects of block type and layer number.", "Figure 6: Attention visualization.", "Table 5: F1 score (%) comparison on binary and 4-way classification." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "5-Figure4-1.png", "7-Table1-1.png", "8-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png", "9-Figure5-1.png", "9-Figure6-1.png", "10-Table5-1.png" ] }
2002.11402
Detecting Potential Topics In News Using BERT, CRF and Wikipedia
For a news content distribution platform like Dailyhunt, Named Entity Recognition is a pivotal task for building better user recommendation and notification algorithms. Apart from identifying names, locations, and organisations from the news for 13+ Indian languages and using them in algorithms, we also need to identify n-grams which do not necessarily fit the definition of a Named Entity, yet are important. For example, "me too movement", "beef ban", "alwar mob lynching". In this exercise, given an English-language text, we try to detect caseless n-grams which convey important information and can be used as topics and/or hashtags for a news item. The model is built using Wikipedia titles data, a private English news corpus, and a BERT-Multilingual pre-trained model with a Bi-GRU and CRF architecture. It shows promising results when compared with the industry-best Flair, Spacy and Stanford-caseless-NER in terms of F1 and especially Recall.
{ "section_name": [ "Introduction & Related Work", "Data Preparation", "Experiments ::: Model Architecture", "Experiments ::: Training", "Experiments ::: Results", "Experiments ::: Discussions", "Conclusion and Future Work" ], "paragraphs": [ [ "Named-Entity-Recognition(NER) approaches can be categorised broadly in three types. Detecting NER with predefined dictionaries and rulesBIBREF2, with some statistical approachesBIBREF3 and with deep learning approachesBIBREF4.", "Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5. Authors have used approaches of Gibbs sampling and conditional random field (CRF) for non-local information gathering and then Viterbi algorithm to infer the most likely state in the CRF sequence outputBIBREF6.", "Deep learning approaches in NLP use document, word or token representations instead of one-hot encoded vectors. With the rise of transfer learning, pretrained Word2VecBIBREF7, GloVeBIBREF8, fasttextBIBREF9 which provides word embeddings were being used with recurrent neural networks (RNN) to detect NERs. Using LSTM layers followed by CRF layes with pretrained word-embeddings as input has been explored hereBIBREF10. Also, CNNs with character embeddings as inputs followed by bi-directional LSTM and CRF layers, were explored hereBIBREF11.", "With the introduction of attentions and transformersBIBREF12 many deep architectures emerged in last few years. Approach of using these pretrained models like ElmoBIBREF13, FlairBIBREF14 and BERTBIBREF0 for word representations followed by variety of LSMT and CRF combinations were tested by authors in BIBREF15 and these approaches show state-of-the-art performance.", "There are very few approaches where caseless NER task is explored. In this recent paperBIBREF16 authors have explored effects of \"Cased\" entities and how variety of networks perform and they show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text.", "In another paperBIBREF17, authors have proposed True-Case pre-training before using BiLSTM+CRF approach to detect NERs effectively. Though it shows good results over previous approaches, it is not useful in Indian Languages context as there is no concept of cases.", "In our approach, we are focusing more on data preparation for our definition of topics using some of the state-of-art architectures based on BERT, LSTM/GRU and CRF layers as they have been explored in previous approaches mentioned above. Detecting caseless topics with higher recall and reasonable precision has been given a priority over f1 score. And comparisons have been made with available and ready-to-use open-source libraries from the productionization perspective." ], [ "We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. 
To remove such titles, we deployed simple rules as follows -", "Remove titles with common words : \"are\", \"the\", \"which\"", "Remove titles with numeric values : 29, 101", "Remove titles with technical components, driver names, transistor names : X00, lga-775", "Remove 1-gram titles except locations (almost 80% of these also appear in remaining n-gram titles)", "After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years.", "We then created a parallel corpus format as shown in Table 1. Using the pre-trained Bert-Tokenizer from hugging-face, we converted the words in sentences to tokens; the caseless-BERT pre-trained tokenizer is used. Notice that some of the topic words are broken into multiple tokens and the NER tag is repeated accordingly. For example, in the second row of Table 1, the word \"harassment\" is broken into \"har ##ass ##ment\". Similarly, one \"NER\" tag is repeated three times to keep the lengths of the sequence pair the same. Finally, for around 3 million news articles, a parallel corpus is created of around 150 million sentences, with around 3 billion words (all lowercased) and approximately 5 billion tokens." ], [ "We tried multiple variations of LSTM and GRU layers, with and without a CRF layer. There is a marginal gain in using GRU layers over LSTM. Also, we saw a gain in using just one layer of GRU instead of more. Finally, we settled on the architecture shown in Figure 1 for the final training, based on validation-set scores with a sample training set.", "Text had to be tokenized using pytorch-pretrained-bert as explained above before being passed to the network. The architecture is built using tensorflow/keras. Coding inspiration was taken from BERT-keras, and for the CRF layer from keras-contrib. If one is more comfortable in pytorch there are many examples available on github, but pytorch-bert-crf-ner is better for an easy start.", "We used the BERT-Multilingual model so that we can train and fine-tune the same model for other Indian languages. You can take BERT-base or BERT-large for better performance with an English-only dataset, or you can use DistilBERT for English and DistilmBERT for 104 languages for faster pre-training and inference. Also, we did not choose an AutoML approach for hyper-parameter tuning, which could have resulted in more accurate results but at the same time could have taken a very long time. Instead, we chose and tweaked the parameters based on initial results.", "We trained two models, one with sequence length 512 to capture document-level important n-grams and a second with sequence length 64 to capture sentence/paragraph-level important n-grams. 
Through experiments it was evident that sequence length plays a vital role in deciding context and locally/globally important n-grams. The final output is a concatenation of both models' outputs." ], [ "We trained the topic model on a single 32GB NVidia V100, and it took around 50 hours to train the model with sequence length 512. We had to take a 256GB RAM machine to accommodate all data in memory for faster read/write. The model with sequence length 64 trained in around 17 hours.", "It is very important to note that sequence length decides how many bert-tokens you can pass for inference, and also decides training time and accuracy. Ideally more is better because inference would be faster as well. For sequence length 64, we move a 64-token window over the whole token text and recognise topics in each window. So, one should choose the sequence length according to their use case. Also, we have explained before our motivation for choosing two separate sequence-length models.", "We stopped the training for both models when they crossed 70% precision and 90% recall on the training and testing sets, as we were just looking to get maximum recall and were not bothered about precision in our case. Both models reach this point at around 16 epochs." ], [ "Comparison with existing open-source NER libraries is not exactly fair as they are NOT trained for detecting topics and important n-grams, and also NOT trained for caseless text. But they are useful for testing and benchmarking whether our model detects traditional NERs or not, which it should capture, as Wikipedia titles contain almost all names, places and organisation names. You can check the sample output here", "Comparisons have been made among Flair-NER, Stanford-caseless-NER (we used english.conll.4class.caseless as it performed better than 3class and 7class), Spacy-NER and our models, of which only Stanford-NER provides caseless models. In Table 2, scores are calculated by taking the traditional NER list as reference. In Table 3, the same is done with the Wikipedia Titles reference set.", "As you can see in Tables 2 & 3, recall is great for our model but precision is not good, as the model is also trying to detect new potential topics which are not there even in the reference Wikipedia-Titles and NER sets. In capturing Wikipedia topics our model clearly surpasses the other models in all scores.", "Spacy results are good despite not being trained for caseless data. In terms of F1 and overall stability Spacy did better than Stanford NER on our News Validation set. Similarly, Stanford did well in Precision but could not catch up with Spacy and our model in terms of Recall. Flair overall performed poorly, but as said before these open-source models are not trained for our particular use-case." ], [ "Let's check some examples for a detailed analysis of the models and their results. Following is an economy-related news item.", "Example 1 : around $1–1.5 trillion or around two percent of global gdp, are lost to corruption every year, president of the natural resource governance institute nrgi has said. speaking at a panel on integrity in public governance during the world bank group and international monetary fund annual meeting on sunday, daniel kaufmann, president of nrgi, presented the statistic, result of a study by the nrgi, an independent, non-profit organisation based in new york. however, according to kaufmann, the figure is only the direct costs of corruption as it does not factor in the opportunities lost on innovation and productivity, xinhua news agency reported. 
a country that addresses corruption and significantly improves rule of law can expect a huge increase in per capita income in the long run, the study showed. it will also see similar gains in reducing infant mortality and improving education, said kaufmann.", "Detected NERs per model can be seen in Table 4. Our model does not capture numbers, as we removed all numbers from the wiki-titles used as topics. The reason is that we can easily write regexes to detect currency, prices, times and dates, so deep learning is not required for these. Following are a few important n-grams only our model was able to capture -", "capita income", "infant mortality", "international monetary fund annual meeting", "natural resource governance institute", "public governance", "At the same time, we can see that Spacy did much better than Stanford-caseless NER, while Flair could not capture any of the NERs. Another example of a news item in the political domain, with detected NERs per model, can be seen in Table 5.", "Example 2 : wearing the aam aadmi party's trademark cap and with copies of the party's five-year report card in hand, sunita kejriwal appears completely at ease. it's a cold winter afternoon in delhi, as the former indian revenue service (irs) officer hits the campaign trail to support her husband and batchmate, chief minister arvind kejriwal. emerging from the background for the first time, she is lending her shoulder to the aap bandwagon in the new delhi assembly constituency from where the cm, then a political novice, had emerged as the giant killer by defeating congress incumbent sheila dikshit in 2013.", "Correct n-grams captured only by our model are -", "aam aadmi party", "aap bandwagon", "delhi assembly constituency", "giant killer", "indian revenue service", "political novice", "In this example, the Stanford model did better and captured names properly, for example \"sheila dikshit\", which Spacy could not detect; but Spacy captured almost all numeric values along with numbers expressed in words.", "It is important to note that our model captures NERs with some additional words around them. For example, \"president of nrgi\" is detected by the model but not \"nrgi\". However, the model output does convey more information than the latter. To capture the same for all models (and to make the comparison fair), partial matching has been enabled, and if the correct NER is part of the predicted NER then the latter is marked as matched. This could be the reason for the good score for Spacy. Note that partial matching is disabled for the Wikipedia Titles match task shown in Table 3. Here, our model outperformed all the other models." ], [ "Through this exercise, we were able to test out the most suitable model architecture and data preparation steps so that similar models could be trained for Indian languages. Building cased or caseless NERs for English was not the final goal, and this has already been benchmarked and explored in the previous approaches explained in the \"Related Work\" section. We didn't use traditional datasets for model performance comparisons & benchmarks. As mentioned before, all the comparisons are done with open-source models and libraries from the productionization point of view. We used an English-news validation dataset which is important and relevant to our specific task, and all validation datasets and raw output results can be found at our github link .", "Wikipedia titles for Indian languages are very few, and the resulting tagged data is even smaller, which is insufficient to run deep architectures. 
We are trying out translations/transliterations of the English Wiki-Titles to improve Indic-language entity/topic data.", "This approach is also useful in building news-summarization models, as it detects almost all important n-grams present in the news. The output of this model can be fed into a summarization network to add more bias towards important words and their inclusion." ] ] }
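To illustrate the parallel-corpus construction described in the data-preparation section above (repeating each word's topic tag across its WordPiece tokens), here is a minimal sketch; it assumes the Hugging Face transformers package rather than the older pytorch-pretrained-bert one, and the tag scheme shown is illustrative.

```python
from transformers import BertTokenizer

# Multilingual, uncased checkpoint to match the caseless setting described above.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-uncased")

def to_wordpiece_pairs(words, tags):
    """Expand word-level topic tags to WordPiece-level tags.

    A word such as "harassment" may split into "har ##ass ##ment"; its tag is then
    repeated once per sub-token so the token and tag sequences stay aligned.
    """
    tokens, token_tags = [], []
    for word, tag in zip(words, tags):
        pieces = tokenizer.tokenize(word)
        tokens.extend(pieces)
        token_tags.extend([tag] * len(pieces))
    return tokens, token_tags

words = ["me", "too", "movement", "gains", "momentum"]
tags = ["NER", "NER", "NER", "O", "O"]   # illustrative tag scheme, not the exact label set
tokens, token_tags = to_wordpiece_pairs(words, tags)
```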
{ "question": [ "What is the difference in recall score between the systems?", "What is their f1 score and recall?", "How many layers does their system have?", "Which news corpus is used?", "How large is the dataset they used?" ], "question_id": [ "1771a55236823ed44d3ee537de2e85465bf03eaf", "1d74fd1d38a5532d20ffae4abbadaeda225b6932", "da8bda963f179f5517a864943dc0ee71249ee1ce", "5c059a13d59947f30877bed7d0180cca20a83284", "a1885f807753cff7a59f69b5cf6d0fdef8484057" ], "nlp_background": [ "two", "two", "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Between the model and Stanford, Spacy and Flair the differences are 42.91, 25.03, 69.8 with Traditional NERs as reference and 49.88, 43.36, 62.43 with Wikipedia titles as reference.", "evidence": [ "FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference", "FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference", "FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference" ] } ], "annotation_id": [ "79e09627dc6d58f94ae96f07ebbfa6e8bedb4338" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "F1 score and Recall are 68.66, 80.08 with Traditional NERs as reference and 59.56, 69.76 with Wikipedia titles as reference.", "evidence": [ "FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference", "FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2. Comparison with Traditional NERs as reference", "FLOAT SELECTED: Table 3. Comparison with Wikipedia titles as reference" ] } ], "annotation_id": [ "07c6cdfd9c473ddcfd4e653e5146e6c80be4c5a4" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "4 layers", "evidence": [ "FLOAT SELECTED: Figure 1. BERT + Bi-GRU + CRF, Final Architecture Chosen For Topic Detection Task." ], "highlighted_evidence": [ "FLOAT SELECTED: Figure 1. BERT + Bi-GRU + CRF, Final Architecture Chosen For Topic Detection Task." ] } ], "annotation_id": [ "18a2a4c3ecdea3f8c21a0400e3b957facea2a0b6" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [ "We have a dump of 15 million English news articles published in past 4 years." 
] } ], "annotation_id": [ "e20e4bed7b4ec73f1dc1206c120bb196fcf44314" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English wikipedia dataset has more than 18 million", "a dump of 15 million English news articles " ], "yes_no": null, "free_form_answer": "", "evidence": [ "We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. To remove such titles, we deployed simple rules as follows -", "After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles. Then selected minimum articles required to cover all possible 2-grams to 5-grams. This step is done to save some training time without loosing accuracy. Do note that, in future we are planning to use whole dataset and hope to see gains in F1 and Recall further. But as per manual inspection, our dataset contains enough variations of sentences with rich vocabulary which contains names of celebrities, politicians, local authorities, national/local organisations and almost all locations, India and International, mentioned in the news text, in last 4 years." ], "highlighted_evidence": [ "We need good amount of data to try deep learning state-of-the-art algorithms. There are lot of open datasets available for names, locations, organisations, but not for topics as defined in Abstract above. Also defining and inferring topics is an individual preference and there are no fix set of rules for its definition. But according to our definition, we can use wikipedia titles as our target topics. English wikipedia dataset has more than 18 million titles if we consider all versions of them till now. We had to clean up the titles to remove junk titles as wikipedia title almost contains all the words we use daily. ", "After doing some more cleaning we were left with 10 million titles. We have a dump of 15 million English news articles published in past 4 years. Further, we reduced number of articles by removing duplicate and near similar articles. We used our pre-trained doc2vec models and cosine similarity to detect almost similar news articles." ] } ], "annotation_id": [ "99c7927e72f3d6e93fd6da0841966e85c4fe4c95" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Table 1. Parallel Corpus Preparation with BERT Tokenizer", "Table 2. Comparison with Traditional NERs as reference", "Table 3. Comparison with Wikipedia titles as reference", "Figure 1. BERT + Bi-GRU + CRF, Final Architecture Chosen For Topic Detection Task.", "Table 4. Recognised Named Entities Per Model - Example 1", "Table 5. Recognised Named Entities Per Model - Example 2" ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "3-Table3-1.png", "3-Figure1-1.png", "6-Table4-1.png", "6-Table5-1.png" ] }
1804.09301
Gender Bias in Coreference Resolution
We present an empirical study of gender bias in coreference resolution systems. We first introduce a novel, Winograd schema-style set of minimal pair sentences that differ only by pronoun gender. With these "Winogender schemas," we evaluate and confirm systematic gender bias in three publicly-available coreference resolution systems, and correlate this bias with real-world and textual gender statistics.
{ "section_name": [ "Introduction", "Coreference Systems", "Winogender Schemas", "Results and Discussion", "Related Work", "Conclusion and Future Work", "Acknowledgments" ], "paragraphs": [ [ "There is a classic riddle: A man and his son get into a terrible car crash. The father dies, and the boy is badly injured. In the hospital, the surgeon looks at the patient and exclaims, “I can't operate on this boy, he's my son!” How can this be?", "That a majority of people are reportedly unable to solve this riddle is taken as evidence of underlying implicit gender bias BIBREF0 : many first-time listeners have difficulty assigning both the role of “mother” and “surgeon” to the same entity.", "As the riddle reveals, the task of coreference resolution in English is tightly bound with questions of gender, for humans and automated systems alike (see Figure 1 ). As awareness grows of the ways in which data-driven AI technologies may acquire and amplify human-like biases BIBREF1 , BIBREF2 , BIBREF3 , this work investigates how gender biases manifest in coreference resolution systems.", "There are many ways one could approach this question; here we focus on gender bias with respect to occupations, for which we have corresponding U.S. employment statistics. Our approach is to construct a challenge dataset in the style of Winograd schemas, wherein a pronoun must be resolved to one of two previously-mentioned entities in a sentence designed to be easy for humans to interpret, but challenging for data-driven systems BIBREF4 . In our setting, one of these mentions is a person referred to by their occupation; by varying only the pronoun's gender, we are able to test the impact of gender on resolution. With these “Winogender schemas,” we demonstrate the presence of systematic gender bias in multiple publicly-available coreference resolution systems, and that occupation-specific bias is correlated with employment statistics. We release these test sentences to the public.", "In our experiments, we represent gender as a categorical variable with either two or three possible values: female, male, and (in some cases) neutral. These choices reflect limitations of the textual and real-world datasets we use." ], [ "In this work, we evaluate three publicly-available off-the-shelf coreference resolution systems, representing three different machine learning paradigms: rule-based systems, feature-driven statistical systems, and neural systems." ], [ "Our intent is to reveal cases where coreference systems may be more or less likely to recognize a pronoun as coreferent with a particular occupation based on pronoun gender, as observed in Figure 1 . To this end, we create a specialized evaluation set consisting of 120 hand-written sentence templates, in the style of the Winograd Schemas BIBREF4 . Each sentence contains three referring expressions of interest:", "We use a list of 60 one-word occupations obtained from Caliskan183 (see supplement), with corresponding gender percentages available from the U.S. Bureau of Labor Statistics. For each occupation, we wrote two similar sentence templates: one in which pronoun is coreferent with occupation, and one in which it is coreferent with participant (see Figure 2 ). 
For each sentence template, there are three pronoun instantiations (female, male, or neutral), and two participant instantiations (a specific participant, e.g., “the passenger,” and a generic participant, “someone.”) With the templates fully instantiated, the evaluation set contains 720 sentences: 60 occupations $\\times $ 2 sentence templates per occupation $\\times $ 2 participants $\\times $ 3 pronoun genders." ], [ "We evaluate examples of each of the three coreference system architectures described in \"Coreference Systems\" : the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL).", "By multiple measures, the Winogender schemas reveal varying degrees of gender bias in all three systems. First we observe that these systems do not behave in a gender-neutral fashion. That is to say, we have designed test sentences where correct pronoun resolution is not a function of gender (as validated by human annotators), but system predictions do exhibit sensitivity to pronoun gender: 68% of male-female minimal pair test sentences are resolved differently by the RULE system; 28% for STAT; and 13% for NEURAL.", "Overall, male pronouns are also more likely to be resolved as occupation than female or neutral pronouns across all systems: for RULE, 72% male vs 29% female and 1% neutral; for STAT, 71% male vs 63% female and 50% neutral; and for NEURAL, 87% male vs 80% female and 36% neutral. Neutral pronouns are often resolved as neither occupation nor participant, possibly due to the number ambiguity of “they/their/them.”", "When these systems' predictions diverge based on pronoun gender, they do so in ways that reinforce and magnify real-world occupational gender disparities. Figure 4 shows that systems' gender preferences for occupations correlate with real-world employment statistics (U.S. Bureau of Labor Statistics) and the gender statistics from text BIBREF14 which these systems access directly; correlation values are in Table 1 . We also identify so-called “gotcha” sentences in which pronoun gender does not match the occupation's majority gender (BLS) if occupation is the correct answer; all systems perform worse on these “gotchas.” (See Table 2 .)", "Because coreference systems need to make discrete choices about which mentions are coreferent, percentage-wise differences in real-world statistics may translate into absolute differences in system predictions. For example, the occupation “manager” is 38.5% female in the U.S. according to real-world statistics (BLS); mentions of “manager” in text are only 5.18% female (B&L resource); and finally, as viewed through the behavior of the three coreference systems we tested, no managers are predicted to be female. This illustrates two related phenomena: first, that data-driven NLP pipelines are susceptible to sequential amplification of bias throughout a pipeline, and second, that although the gender statistics from B&L correlate with BLS employment statistics, they are systematically male-skewed (Figure 3 )." ], [ "Here we give a brief (and non-exhaustive) overview of prior work on gender bias in NLP systems and datasets. A number of papers explore (gender) bias in English word embeddings: how they capture implicit human biases in modern BIBREF1 and historical BIBREF15 text, and methods for debiasing them BIBREF16 .
Further work on debiasing models with adversarial learning is explored by DBLP:journals/corr/BeutelCZC17 and zhang2018mitigating.", "Prior work also analyzes social and gender stereotyping in existing NLP and vision datasets BIBREF17 , BIBREF18 . tatman:2017:EthNLP investigates the impact of gender and dialect on deployed speech recognition systems, while zhao-EtAl:2017:EMNLP20173 introduce a method to reduce amplification effects on models trained with gender-biased datasets. koolen-vancranenburgh:2017:EthNLP examine the relationship between author gender and text attributes, noting the potential for researcher interpretation bias in such studies. Both larson:2017:EthNLP and koolen-vancranenburgh:2017:EthNLP offer guidelines to NLP researchers and computational social scientists who wish to predict gender as a variable. hovy-spruit:2016:P16-2 introduce a helpful set of terminology for identifying and categorizing types of bias that manifest in AI systems, including overgeneralization, which we observe in our work here.", "Finally, we note independent but closely related work by zhao-wang:2018:N18-1, published concurrently with this paper. In their work, zhao-wang:2018:N18-1 also propose a Winograd schema-like test for gender bias in coreference resolution systems (called “WinoBias”). Though similar in appearance, these two efforts have notable differences in substance and emphasis. The contribution of this work is focused primarily on schema construction and validation, with extensive analysis of observed system bias, revealing its correlation with biases present in real-world and textual statistics; by contrast, zhao-wang:2018:N18-1 present methods of debiasing existing systems, showing that simple approaches such as augmenting training data with gender-swapped examples or directly editing noun phrase counts in the B&L resource are effective at reducing system bias, as measured by the schemas. Complementary differences exist between the two schema formulations: Winogender schemas (this work) include gender-neutral pronouns, are syntactically diverse, and are human-validated; WinoBias includes (and delineates) sentences resolvable from syntax alone; a Winogender schema has one occupational mention and one “other participant” mention; WinoBias has two occupational mentions. Due to these differences, we encourage future evaluations to make use of both datasets." ], [ "We have introduced “Winogender schemas,” a pronoun resolution task in the style of Winograd schemas that enables us to uncover gender bias in coreference resolution systems. We evaluate three publicly-available, off-the-shelf systems and find systematic gender bias in each: for many occupations, systems strongly prefer to resolve pronouns of one gender over another. We demonstrate that this preferential behavior correlates both with real-world employment statistics and the text statistics that these systems use. We posit that these systems overgeneralize the attribute of gender, leading them to make errors that humans do not make on this evaluation. We hope that by drawing attention to this issue, future systems will be designed in ways that mitigate gender-based overgeneralization.", "It is important to underscore the limitations of Winogender schemas. As a diagnostic test of gender bias, we view the schemas as having high positive predictive value and low negative predictive value; that is, they may demonstrate the presence of gender bias in a system, but not prove its absence. 
Here we have focused on examples of occupational gender bias, but Winogender schemas may be extended broadly to probe for other manifestations of gender bias. Though we have used human-validated schemas to demonstrate that existing NLP systems are comparatively more prone to gender-based overgeneralization, we do not presume that matching human judgment is the ultimate objective of this line of research. Rather, human judgements, which carry their own implicit biases, serve as a lower bound for equitability in automated systems." ], [ "The authors thank Rebecca Knowles and Chandler May for their valuable feedback on this work. This research was supported by the JHU HLTCOE, DARPA AIDA, and NSF-GRFP (1232825). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government." ] ] }
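The template instantiation arithmetic described in the record above (60 occupations $\times$ 2 sentence templates per occupation $\times$ 2 participants $\times$ 3 pronoun genders = 720 sentences) can be illustrated with a minimal Python sketch. The placeholder names, pronoun paradigms, and template wording below are assumptions for illustration only; they are not the released Winogender data format.

```python
from itertools import product

# Hypothetical placeholder names and pronoun paradigms; the released schemas may differ.
PRONOUNS = {
    "female":  {"NOM": "she",  "POSS": "her",   "ACC": "her"},
    "male":    {"NOM": "he",   "POSS": "his",   "ACC": "him"},
    "neutral": {"NOM": "they", "POSS": "their", "ACC": "them"},
}
PARTICIPANTS = {"specific": "the passenger", "generic": "someone"}

def instantiate(template, occupation):
    """Expand one hand-written template into 2 participants x 3 pronoun genders = 6 sentences."""
    rows = []
    for (p_kind, participant), (gender, forms) in product(PARTICIPANTS.items(), PRONOUNS.items()):
        sentence = (template
                    .replace("$OCCUPATION", occupation)
                    .replace("$PARTICIPANT", participant)
                    .replace("$NOM_PRONOUN", forms["NOM"])
                    .replace("$POSS_PRONOUN", forms["POSS"])
                    .replace("$ACC_PRONOUN", forms["ACC"]))
        rows.append({"sentence": sentence, "occupation": occupation,
                     "participant": p_kind, "pronoun_gender": gender})
    return rows

# Invented wording, loosely in the style of the paramedic example from Figure 2.
template = ("The $OCCUPATION performed CPR on $PARTICIPANT even though "
            "$NOM_PRONOUN knew it was too late.")
assert len(instantiate(template, "paramedic")) == 6
# 60 occupations x 2 templates each x 6 instantiations = 720 evaluation sentences.
```

Because everything except the pronoun forms is held fixed within a template, each male/female/neutral triple is a minimal pair (or triple) that isolates the effect of pronoun gender on resolution.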
{ "question": [ "Which coreference resolution systems are tested?" ], "question_id": [ "c2553166463b7b5ae4d9786f0446eb06a90af458" ], "nlp_background": [ "infinity" ], "topic_background": [ "research" ], "paper_read": [ "yes" ], "search_query": [ "gender bias" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL)." ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this work, we evaluate three publicly-available off-the-shelf coreference resolution systems, representing three different machine learning paradigms: rule-based systems, feature-driven statistical systems, and neural systems.", "We evaluate examples of each of the three coreference system architectures described in \"Coreference Systems\" : the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL)." ], "highlighted_evidence": [ "In this work, we evaluate three publicly-available off-the-shelf coreference resolution systems, representing three different machine learning paradigms: rule-based systems, feature-driven statistical systems, and neural systems.", "We evaluate examples of each of the three coreference system architectures described in \"Coreference Systems\" : the BIBREF5 sieve system from the rule-based paradigm (referred to as RULE), BIBREF6 from the statistical paradigm (STAT), and the BIBREF11 deep reinforcement system from the neural paradigm (NEURAL)." ] } ], "annotation_id": [ "07c736c5d2ddefb9a6f01f467538b68684d8dd43" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: Stanford CoreNLP rule-based coreference system resolves a male and neutral pronoun as coreferent with “The surgeon,” but does not for the corresponding female pronoun.", "Figure 2: A “Winogender” schema for the occupation paramedic. Correct answers in bold. In general, OCCUPATION and PARTICIPANT may appear in either order in the sentence.", "Figure 3: Gender statistics from Bergsma and Lin (2006) correlate with Bureau of Labor Statistics 2015. However, the former has systematically lower female percentages; most points lie well below the 45-degree line (dotted). Regression line and 95% confidence interval in blue. Pearson r = 0.67.", "Table 1: Correlation values for Figures 3 and 4.", "Figure 4: These two plots show how gender bias in coreference systems corresponds with occupational gender statistics from the U.S Bureau of Labor Statistics (left) and from text as computed by Bergsma and Lin (2006) (right); each point represents one occupation. The y-axes measure the extent to which a coref system prefers to match female pronouns with a given occupation over male pronouns, as tested by our Winogender schemas. A value of 100 (maximum female bias) means the system always resolved female pronouns to the given occupation and never male pronouns (100% - 0%); a score of -100 (maximum male bias) is the reverse; and a value of 0 indicates no gender differential. Recall the Winogender evaluation set is gender-balanced for each occupation; thus the horizontal dotted black line (y=0) in both plots represents a hypothetical system with 100% accuracy. Regression lines with 95% confidence intervals are shown.", "Table 2: System accuracy (%) bucketed by gender and difficulty (so-called “gotchas,” shaded in purple). For female pronouns, a “gotcha” sentence is one where either (1) the correct answer is OCCUPATION but the occupation is < 50% female (according to BLS); or (2) the occupation is ≥ 50% female but the correct answer is PARTICIPANT; this is reversed for male pronouns. Systems do uniformly worse on “gotchas.”" ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "3-Table1-1.png", "4-Figure4-1.png", "4-Table2-1.png" ] }
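The captions above also pin down two quantities that are easy to restate in code: the per-occupation gender differential plotted in Figure 4 (the percentage of female-pronoun instances a system resolves to OCCUPATION minus the corresponding male-pronoun percentage, so +100 is maximal female bias and -100 maximal male bias), and the "gotcha" bucketing defined in the Table 2 caption. The sketch below is a hedged illustration; the prediction record layout and field names are assumptions, not the authors' evaluation code.

```python
from collections import defaultdict

def female_bias_differential(predictions):
    """Figure-4-style score per occupation, assuming each prediction is a dict with
    keys 'occupation', 'pronoun_gender', and 'resolved_to' ('OCCUPATION'/'PARTICIPANT'/None)."""
    tallies = defaultdict(lambda: {"female": [0, 0], "male": [0, 0]})  # gender -> [hits, total]
    for p in predictions:
        gender = p["pronoun_gender"]
        if gender in ("female", "male"):
            hits_total = tallies[p["occupation"]][gender]
            hits_total[0] += int(p["resolved_to"] == "OCCUPATION")
            hits_total[1] += 1
    return {occ: 100.0 * t["female"][0] / max(t["female"][1], 1)
                 - 100.0 * t["male"][0] / max(t["male"][1], 1)
            for occ, t in tallies.items()}

def is_gotcha(pronoun_gender, gold_answer, pct_female_bls):
    """Table 2's 'gotcha' bucket: the pronoun's gender runs against the occupation's
    BLS majority gender when OCCUPATION is the correct answer, and with it otherwise."""
    majority_female = pct_female_bls >= 50.0
    if pronoun_gender == "female":
        return (gold_answer == "OCCUPATION") != majority_female
    if pronoun_gender == "male":
        return (gold_answer == "OCCUPATION") == majority_female
    return False  # the caption defines "gotchas" only for female and male pronouns
```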
2002.00652
How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context
Recently, semantic parsing in context has received considerable attention; the task is challenging because of the complex contextual phenomena it involves. Previous works verified their proposed methods only in limited scenarios, which motivates us to conduct an exploratory study of context modeling methods for real-world semantic parsing in context. We present a semantic parser with grammar-based decoding and adapt typical context modeling methods on top of it. We evaluate 13 context modeling methods on two large complex cross-domain datasets, and our best model achieves state-of-the-art performance on both datasets with significant improvements. Furthermore, we summarize the most frequent contextual phenomena and provide a fine-grained analysis of representative models, which may shed light on potential research directions.
{ "section_name": [ "Introduction", "Methodology", "Methodology ::: Base Model", "Methodology ::: Base Model ::: Question Encoder", "Methodology ::: Base Model ::: Grammar-based Decoder", "Methodology ::: Recent Questions as Context", "Methodology ::: Recent Questions as Context ::: Concat", "Methodology ::: Recent Questions as Context ::: Turn", "Methodology ::: Recent Questions as Context ::: Gate", "Methodology ::: Precedent SQL as Context", "Methodology ::: Precedent SQL as Context ::: SQL Attn", "Methodology ::: Precedent SQL as Context ::: Action Copy", "Methodology ::: Precedent SQL as Context ::: Tree Copy", "Methodology ::: BERT Enhanced Embedding", "Experiment & Analysis", "Experiment & Analysis ::: Experimental Setup ::: Dataset", "Experiment & Analysis ::: Experimental Setup ::: Evaluation Metrics", "Experiment & Analysis ::: Experimental Setup ::: Implementation Detail", "Experiment & Analysis ::: Experimental Setup ::: Baselines", "Experiment & Analysis ::: Model Comparison", "Experiment & Analysis ::: Fine-grained Analysis", "Experiment & Analysis ::: Fine-grained Analysis ::: Coreference", "Experiment & Analysis ::: Fine-grained Analysis ::: Ellipsis", "Related Work", "Conclusion & Future Work" ], "paragraphs": [ [ "Semantic parsing, which translates a natural language sentence into its corresponding executable logic form (e.g. Structured Query Language, SQL), relieves users from the burden of learning techniques behind the logic form. The majority of previous studies on semantic parsing assume that queries are context-independent and analyze them in isolation. However, in reality, users prefer to interact with systems in a dialogue, where users are allowed to ask context-dependent incomplete questions BIBREF0. This gives rise to the task of Semantic Parsing in Context (SPC), which is quite challenging as there are complex contextual phenomena. In general, there are two sorts of contextual phenomena in dialogues: Coreference and Ellipsis BIBREF1. Figure FIGREF1 shows a dialogue from the dataset SParC BIBREF2. After the question “What is id of the car with the max horsepower?”, the user poses an elliptical question “How about with the max mpg?”, and a question containing pronouns “Show its Make!”. Only by completely understanding the context can a parser successfully parse the incomplete questions into their corresponding SQL queries.", "A number of context modeling methods have been suggested in the literature to address SPC BIBREF3, BIBREF4, BIBREF2, BIBREF5, BIBREF6. These methods propose to leverage two categories of context: recent questions and precedent logic form. It is natural to leverage recent questions as context. Taking the example from Figure FIGREF1, when parsing $Q_3$, we also need to take $Q_1$ and $Q_2$ as input. We can either simply concatenate the input questions, or use a model to encode them hierarchically BIBREF4. As for the second category, instead of taking a bag of recent questions as input, it only considers the precedent logic form. For instance, when parsing $Q_3$, we only need to take $S_2$ as context. With such a context, the decoder can attend over it, or reuse it via a copy mechanism BIBREF4, BIBREF5. Intuitively, methods that fall into this category enjoy better generalizability, as they only rely on the last logic form as context, regardless of the turn. Notably, these two categories of context can be used simultaneously.", "However, it remains unclear how far we are from effective context modeling.
First, there is a lack of thorough comparisons of typical context modeling methods on complex SPC (e.g. cross-domain). Second, none of previous works verified their proposed context modeling methods with the grammar-based decoding technique, which has been developed for years and proven to be highly effective in semantic parsing BIBREF7, BIBREF8, BIBREF9. To obtain better performance, it is worthwhile to study how context modeling methods collaborate with the grammar-based decoding. Last but not the least, there is limited understanding of how context modeling methods perform on various contextual phenomena. An in-depth analysis can shed light on potential research directions.", "In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance." ], [ "In the task of semantic parsing in context, we are given a dataset composed of dialogues. Denoting $\\langle \\mathbf {x}_1,...,\\mathbf {x}_n\\rangle $ a sequence of natural language questions in a dialogue, $\\langle \\mathbf {y}_1,...,\\mathbf {y}_n\\rangle $ are their corresponding SQL queries. Each SQL query is conditioned on a multi-table database schema, and the databases used in test do not appear in training. In this section, we first present a base model without considering context. Then we introduce 6 typical context modeling methods and describe how we equip the base model with these methods. Finally, we present how to augment the model with BERT BIBREF10." ], [ "We employ the popularly used attention-based sequence-to-sequence architecture BIBREF11, BIBREF12 to build our base model. As shown in Figure FIGREF6, the base model consists of a question encoder and a grammar-based decoder. For each question, the encoder provides contextual representations, while the decoder generates its corresponding SQL query according to a predefined grammar." ], [ "To capture contextual information within a question, we apply Bidirectional Long Short-Term Memory Neural Network (BiLSTM) as our question encoder BIBREF13, BIBREF14. Specifically, at turn $i$, firstly every token $x_{i,k}$ in $\\mathbf {x}_{i}$ is fed into a word embedding layer $\\mathbf {\\phi }^x$ to get its embedding representation $\\mathbf {\\phi }^x{(x_{i,k})}$. On top of the embedding representation, the question encoder obtains a contextual representation $\\mathbf {h}^{E}_{i,k}=[\\mathop {{\\mathbf {h}}^{\\overrightarrow{E}}_{i,k}}\\,;{\\mathbf {h}}^{\\overleftarrow{E}}_{i,k}]$, where the forward hidden state is computed as following:" ], [ "The decoder is grammar-based with attention on the input question BIBREF7. Different from producing a SQL query word by word, our decoder outputs a sequence of grammar rule (i.e. action). 
Such a sequence has one-to-one correspondence with the abstract syntax tree of the SQL query. Taking the SQL query in Figure FIGREF6 as an example, it is transformed to the action sequence $\\langle $ $\\rm \\scriptstyle {Start}\\rightarrow \\rm {Root}$, $\\rm \\scriptstyle {Root}\\rightarrow \\rm {Select\\ Order}$, $\\rm \\scriptstyle {Select}\\rightarrow \\rm {Agg}$, $\\rm \\scriptstyle {Agg}\\rightarrow \\rm {max\\ Col\\ Tab}$, $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Id}$, $\\rm \\scriptstyle {Tab}\\rightarrow \\rm {CARS\\_DATA}$, $\\rm \\scriptstyle {Order}\\rightarrow \\rm {desc\\ limit\\ Agg}$, $\\rm \\scriptstyle {Agg}\\rightarrow \\rm {none\\ Col\\ Tab}$, $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Horsepower}$, $\\rm \\scriptstyle {Tab}\\rightarrow \\rm {CARS\\_DATA}$ $\\rangle $ by left-to-right depth-first traversing on the tree. At each decoding step, a nonterminal is expanded using one of its corresponding grammar rules. The rules are either schema-specific (e.g. $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Horsepower}$), or schema-agnostic (e.g. $\\rm \\scriptstyle {Start}\\rightarrow \\rm {Root}$). More specifically, as shown at the top of Figure FIGREF6, we make a little modification on $\\rm {Order}$-related rules upon the grammar proposed by BIBREF9, which has been proven to have better performance than vanilla SQL grammar. Denoting $\\mathbf {LSTM}^{\\overrightarrow{D}}$ the unidirectional LSTM used in the decoder, at each decoding step $j$ of turn $i$, it takes the embedding of the previous generated grammar rule $\\mathbf {\\phi }^y(y_{i,j-1})$ (indicated as the dash lines in Figure FIGREF6), and updates its hidden state as:", "where $\\mathbf {c}_{i,j-1}$ is the context vector produced by attending on each encoder hidden state $\\mathbf {h}^E_{i,k}$ in the previous step:", "where $\\mathbf {W}^e$ is a learned matrix. $\\mathbf {h}^{\\overrightarrow{D}}_{i,0}$ is initialized by the final encoder hidden state $\\mathbf {h}^E_{i,|\\mathbf {x}_{i}|}$, while $\\mathbf {c}_{i,0}$ is a zero-vector. For each schema-agnostic grammar rule, $\\mathbf {\\phi }^y$ returns a learned embedding. For schema-specific one, the embedding is obtained by passing its schema (i.e. table or column) through another unidirectional LSTM, namely schema encoder $\\mathbf {LSTM}^{\\overrightarrow{S}}$. For example, the embedding of $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Id}$ is:", "As for the output $y_{i,j}$, if the expanded nonterminal corresponds to schema-agnostic grammar rules, we can obtain the output probability of action ${\\gamma }$ as:", "where $\\mathbf {W}^o$ is a learned matrix. When it comes to schema-specific grammar rules, the main challenge is that the model may encounter schemas never appeared in training due to the cross-domain setting. To deal with it, we do not directly compute the similarity between the decoder hidden state and the schema-specific grammar rule embedding. Instead, we first obtain the unnormalized linking score $l(x_{i,k},\\gamma )$ between the $k$-th token in $\\mathbf {x}_i$ and the schema in action $\\gamma $. It is computed by both handcraft features (e.g. word exact match) BIBREF15 and learned similarity (i.e. dot product between word embedding and grammar rule embedding). 
With the input question as bridge, we reuse the attention score $a_{i,k}$ in Equation DISPLAY_FORM8 to measure the probability of outputting a schema-specific action $\\gamma $ as:" ], [ "To take advantage of the question context, we provide the base model with recent $h$ questions as additional input. As shown in Figure FIGREF13, we summarize and generalize three ways to incorporate recent questions as context." ], [ "The method concatenates recent questions with the current question in order, making the input of the question encoder be $[\\mathbf {x}_{i-h},\\dots ,\\mathbf {x}_{i}]$, while the architecture of the base model remains the same. We do not insert special delimiters between questions, as there are punctuation marks." ], [ "A dialogue can be seen as a sequence of questions which, in turn, are sequences of words. Considering such hierarchy, BIBREF4 employed a turn-level encoder (i.e. an unidirectional LSTM) to encode recent questions hierarchically. At turn $i$, the turn-level encoder takes the previous question vector $[\\mathbf {h}^{\\overleftarrow{E}}_{i-1,1},\\mathbf {h}^{\\overrightarrow{E}}_{i-1,|\\mathbf {x}_{i-1}|}]$ as input, and updates its hidden state to $\\mathbf {h}^{\\overrightarrow{T}}_{i}$. Then $\\mathbf {h}^{\\overrightarrow{T}}_{i}$ is fed into $\\mathbf {LSTM}^E$ as an implicit context. Accordingly Equation DISPLAY_FORM4 is rewritten as:", "Similar to Concat, BIBREF4 allowed the decoder to attend over all encoder hidden states. To make the decoder distinguish hidden states from different turns, they further proposed a relative distance embedding ${\\phi }^{d}$ in attention computing. Taking the above into account, Equation DISPLAY_FORM8 is as:", "", "where $t{\\in }[0,\\dots ,h]$ represents the relative distance." ], [ "To jointly model the decoder attention in token-level and question-level, inspired by the advances of open-domain dialogue area BIBREF16, we propose a gate mechanism to automatically compute the importance of each question. The importance is computed by:", "where $\\lbrace \\mathbf {V}^{g},\\mathbf {W}^g,\\mathbf {U}^g\\rbrace $ are learned parameters and $0\\,{\\le }\\,t\\,{\\le }\\,h$. As done in Equation DISPLAY_FORM17 except for the relative distance embedding, the decoder of Gate also attends over all the encoder hidden states. And the question-level importance $\\bar{g}_{i-t}$ is employed as the coefficient of the attention scores at turn $i\\!-\\!t$." ], [ "Besides recent questions, as mentioned in Section SECREF1, the precedent SQL can also be context. As shown in Figure FIGREF27, the usage of $\\mathbf {y}_{i-1}$ requires a SQL encoder, where we employ another BiLSTM to achieve it. The $m$-th contextual action representation at turn $i\\!-\\!1$, $\\mathbf {h}^A_{i-1,m}$, can be obtained by passing the action sequence through the SQL encoder." ], [ "Attention over $\\mathbf {y}_{i-1}$ is a straightforward method to incorporate the SQL context. Given $\\mathbf {h}^A_{i-1,m}$, we employ a similar manner as Equation DISPLAY_FORM8 to compute attention score and thus obtain the SQL context vector. This vector is employed as an additional input for decoder in Equation DISPLAY_FORM7." ], [ "To reuse the precedent generated SQL, BIBREF5 presented a token-level copy mechanism on their non-grammar based parser. Inspired by them, we propose an action-level copy mechanism suited for grammar-based decoding. It enables the decoder to copy actions appearing in $\\mathbf {y}_{i-1}$, when the actions are compatible to the current expanded nonterminal. 
As the copied actions lie in the same semantic space with the generated ones, the output probability for action $\\gamma $ is a mix of generating ($\\mathbf {g}$) and copying ($\\mathbf {c}$). The generating probability $P(y_{i,j}\\!=\\!{\\gamma }\\,|\\,\\mathbf {g})$ follows Equation DISPLAY_FORM10 and DISPLAY_FORM11, while the copying probability is:", "where $\\mathbf {W}^l$ is a learned matrix. Denoting $P^{copy}_{i,j}$ the probability of copying at decoding step $j$ of turn $i$, it can be obtained by $\\sigma (\\mathbf {W}^{c}\\mathbf {h}^{\\overrightarrow{D}}_{i,j}+\\mathbf {b}^{c})$, where $\\lbrace \\mathbf {W}^{c},\\mathbf {b}^{c}\\rbrace $ are learned parameters and $\\sigma $ is the sigmoid function. The final probability $P(y_{i,j}={\\gamma })$ is computed by:" ], [ "Besides the action-level copy, we also introduce a tree-level copy mechanism. As illustrated in Figure FIGREF27, tree-level copy mechanism enables the decoder to copy action subtrees extracted from $\\mathbf {y}_{i-1}$, which shrinks the number of decoding steps by a large margin. Similar idea has been proposed in a non-grammar based decoder BIBREF4. In fact, a subtree is an action sequence starting from specific nonterminals, such as ${\\rm Select}$. To give an example, $\\langle $ $\\rm \\scriptstyle {Select}\\rightarrow \\rm {Agg}$, $\\rm \\scriptstyle {Agg}\\rightarrow \\rm {max\\ Col\\ Tab}$, $\\rm \\scriptstyle {Col}\\rightarrow \\rm {Id}$, $\\rm \\scriptstyle {Tab}\\rightarrow \\rm {CARS\\_DATA}$ $\\rangle $ makes up a subtree for the tree in Figure FIGREF6. For a subtree $\\upsilon $, its representation $\\phi ^{t}(\\upsilon )$ is the final hidden state of SQL encoder, which encodes its corresponding action sequence. Then we can obtain the output probability of subtree $\\upsilon $ as:", "where $\\mathbf {W}^t$ is a learned matrix. The output probabilities of subtrees are normalized together with Equation DISPLAY_FORM10 and DISPLAY_FORM11." ], [ "We employ BERT BIBREF10 to augment our model via enhancing the embedding of questions and schemas. We first concatenate the input question and all the schemas in a deterministic order with [SEP] as delimiter BIBREF17. For instance, the input for $Q_1$ in Figure FIGREF1 is “What is id ... max horsepower? [SEP] CARS_NAMES [SEP] MakeId ... [SEP] Horsepower”. Feeding it into BERT, we obtain the schema-aware question representations and question-aware schema representations. These contextual representations are used to substitute $\\phi ^x$ subsequently, while other parts of the model remain the same." ], [ "We conduct experiments to study whether the introduced methods are able to effectively model context in the task of SPC (Section SECREF36), and further perform a fine-grained analysis on various contextual phenomena (Section SECREF40)." ], [ "Two large complex cross-domain datasets are used: SParC BIBREF2 consists of 3034 / 422 dialogues for train / development, and CoSQL BIBREF6 consists of 2164 / 292 ones. The average turn numbers of SParC and CoSQL are $3.0$ and $5.2$, respectively." ], [ "We evaluate each predicted SQL query using exact set match accuracy BIBREF2. Based on it, we consider three metrics: Question Match (Ques.Match), the match accuracy over all questions, Interaction Match (Int.Match), the match accuracy over all dialogues, and Turn $i$ Match, the match accuracy over questions at turn $i$." ], [ "Our implementation is based on PyTorch BIBREF18, AllenNLP BIBREF19 and the library transformers BIBREF20. 
We adopt the Adam optimizer and set the learning rate as 1e-3 on all modules except for BERT, for which a learning rate of 1e-5 is used BIBREF21. The dimensions of word embedding, action embedding and distance embedding are 100, while the hidden state dimensions of question encoder, grammar-based decoder, turn-level encoder and SQL encoder are 200. We initialize word embedding using Glove BIBREF22 for non-BERT models. For methods which use recent $h$ questions, $h$ is set as 5 on both datasets." ], [ "We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a SOTA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy)." ], [ "Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.", "To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect as using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected.", "As mentioned in Section SECREF1, intuitively, methods which only use the precedent SQL enjoy better generalizability. To validate this, we further conduct an out-of-distribution experiment to assess the generalizability of different context modeling methods. Concretely, we select three representative methods and train them on questions at turn 1 and 2, and test them at turn 3, 4 and beyond. As shown in Figure FIGREF38, Action Copy has a consistently comparable or better performance, validating the intuition. Meanwhile, Concat appears to be strikingly competitive, demonstrating that it also has good generalizability. Compared with them, Turn is more vulnerable to out-of-distribution questions.", "In conclusion, existing context modeling methods in the task of SPC are not as effective as expected, since they do not show a significant advantage over the simple concatenation method." ], [ "By a careful investigation on contextual phenomena, we summarize them in multiple hierarchies.
Roughly, there are three kinds of contextual phenomena in questions: semantically complete, coreference and ellipsis. Semantically complete means a question can reflect all the meaning of its corresponding SQL. Coreference means a question contains pronouns, while ellipsis means the question cannot reflect all of its SQL, even if resolving its pronouns. In the fine-grained level, coreference can be divided into 5 types according to its pronoun BIBREF1. Ellipsis can be characterized by its intention: continuation and substitution. Continuation is to augment extra semantics (e.g. ${\\rm Filter}$), and substitution refers to the situation where current question is intended to substitute particular semantics in the precedent question. Substitution can be further branched into 4 types: explicit vs. implicit and schema vs. operator. Explicit means the current question provides contextual clues (i.e. partial context overlaps with the precedent question) to help locate the substitution target, while implicit does not. On most cases, the target is schema or operator. In order to study the effect of context modeling methods on various phenomena, as shown in Table TABREF39, we take the development set of SParC as an example to perform our analysis. The analysis begins by presenting Ques.Match of three representative models on above fine-grained types in Figure FIGREF42. As shown, though different methods have different strengths, they all perform poorly on certain types, which will be elaborated below." ], [ "Diving deep into the coreference (left of Figure FIGREF42), we observe that all methods struggle with two fine-grained types: definite noun phrases and one anaphora. Through our study, we find the scope of antecedent is a key factor. An antecedent is one or more entities referred by a pronoun. Its scope is either whole, where the antecedent is the precedent answer, or partial, where the antecedent is part of the precedent question. The above-mentioned fine-grained types are more challenging as their partial proportion are nearly $40\\%$, while for demonstrative pronoun it is only $22\\%$. It is reasonable as partial requires complex inference on context. Considering the 4th example in Table TABREF39, “one” refers to “pets” instead of “age” because the accompanying verb is “weigh”. From this observation, we draw the conclusion that current context modeling methods do not succeed on pronouns which require complex inference on context." ], [ "As for ellipsis (right of Figure FIGREF42), we obtain three interesting findings by comparisons in three aspects. The first finding is that all models have a better performance on continuation than substitution. This is expected since there are redundant semantics in substitution, while not in continuation. Considering the 8th example in Table TABREF39, “horsepower” is a redundant semantic which may raise noise in SQL prediction. The second finding comes from the unexpected drop from implicit(substitution) to explicit(substitution). Intuitively, explicit should surpass implicit on substitution as it provides more contextual clues. The finding demonstrates that contextual clues are obviously not well utilized by the context modeling methods. Third, compared with schema(substitution), operator(substitution) achieves a comparable or better performance consistently. We believe it is caused by the cross-domain setting, which makes schema related substitution more difficult." ], [ "The most related work is the line of semantic parsing in context. 
In the topic of SQL, BIBREF24 proposed a context-independent CCG parser and then applied it to do context-dependent substitution, BIBREF3 applied a search-based method for sequential questions, and BIBREF4 provided the first sequence-to-sequence solution in the area. More recently, BIBREF5 presented a edit-based method to reuse the precedent generated SQL. With respect to other logic forms, BIBREF25 focuses on understanding execution commands in context, BIBREF26 on question answering over knowledge base in a conversation, and BIBREF27 on code generation in environment context. Our work is different from theirs as we perform an exploratory study, not fulfilled by previous works.", "There are also several related works that provided studies on context. BIBREF17 explored the contextual representations in context-independent semantic parsing, and BIBREF28 studied how conversational agents use conversation history to generate response. Different from them, our task focuses on context modeling for semantic parsing. Under the same task, BIBREF1 summarized contextual phenomena in a coarse-grained level, while BIBREF0 performed a wizard-of-oz experiment to study the most frequent phenomena. What makes our work different from them is that we not only summarize contextual phenomena by fine-grained types, but also perform an analysis on context modeling methods." ], [ "This work conducts an exploratory study on semantic parsing in context, to realize how far we are from effective context modeling. Through a thorough comparison, we find that existing context modeling methods are not as effective as expected. A simple concatenation method can be much competitive. Furthermore, by performing a fine-grained analysis, we summarize two potential directions as our future work: incorporating common sense for better pronouns inference, and modeling contextual clues in a more explicit manner. By open-sourcing our code and materials, we believe our work can facilitate the community to debug models in a fine-grained level and make more progress." ] ] }
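The action-level copy mechanism described under "Precedent SQL as Context ::: Action Copy" mixes a generation distribution with a distribution over actions copied from the precedent SQL, weighted by a sigmoid copy gate. The PyTorch sketch below restates that mixture under stated assumptions: module and tensor names are invented, inputs are unbatched, and the mask restricting copies to actions compatible with the currently expanded nonterminal is omitted for brevity. It is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActionCopySketch(nn.Module):
    """P(y) = P_copy * P(y | copy) + (1 - P_copy) * P(y | generate)."""

    def __init__(self, dec_dim, sql_enc_dim):
        super().__init__()
        self.copy_scorer = nn.Linear(dec_dim, sql_enc_dim, bias=False)  # plays the role of W^l
        self.copy_gate = nn.Linear(dec_dim, 1)                          # W^c, b^c

    def forward(self, dec_hidden, prev_action_reprs, prev_action_ids, gen_logits):
        """
        dec_hidden:        (dec_dim,)        decoder state at the current step
        prev_action_reprs: (m, sql_enc_dim)  SQL-encoder states of the precedent actions
        prev_action_ids:   (m,) long         indices of those actions in the action vocabulary
        gen_logits:        (num_actions,)    scores from the normal generation path
        """
        p_gen = F.softmax(gen_logits, dim=-1)
        copy_scores = prev_action_reprs @ self.copy_scorer(dec_hidden)   # (m,)
        copy_dist = F.softmax(copy_scores, dim=-1)
        # Scatter the copy distribution back onto the full action vocabulary.
        p_copy = torch.zeros_like(p_gen).index_add_(0, prev_action_ids, copy_dist)
        gate = torch.sigmoid(self.copy_gate(dec_hidden)).squeeze(-1)     # copy probability
        return gate * p_copy + (1.0 - gate) * p_gen
```

Because copied actions live in the same semantic space as generated ones, the two distributions can be mixed directly; in the full model a compatibility mask over `copy_scores` would keep the copy path from proposing actions that cannot expand the current nonterminal.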
{ "question": [ "How big is improvement in performances of proposed model over state of the art?", "What two large datasets are used for evaluation?", "What context modelling methods are evaluated?", "What are two datasets models are tested on?" ], "question_id": [ "cc9f0ac8ead575a9b485a51ddc06b9ecb2e2a44d", "69e678666d11731c9bfa99953e2cd5a5d11a4d4f", "471d624498ab48549ce492ada9e6129da05debac", "f858031ebe57b6139af46ee0f25c10870bb00c3c" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We consider three models as our baselines. SyntaxSQL-con and CD-Seq2Seq are two strong baselines introduced in the SParC dataset paper BIBREF2. SyntaxSQL-con employs a BiLSTM model to encode dialogue history upon the SyntaxSQLNet model (analogous to our Turn) BIBREF23, while CD-Seq2Seq is adapted from BIBREF4 for cross-domain settings (analogous to our Turn+Tree Copy). EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy).", "Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.", "FLOAT SELECTED: Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005." ], "highlighted_evidence": [ "EditSQL BIBREF5 is a STOA baseline which mainly makes use of SQL attention and token-level copy (analogous to our Turn+SQL Attn+Action Copy).", "Taking Concat as a representative, we compare the performance of our model with other models, as shown in Table TABREF34. As illustrated, our model outperforms baselines by a large margin with or without BERT, achieving new SOTA performances on both datasets. Compared with the previous SOTA without BERT on SParC, our model improves Ques.Match and Int.Match by $10.6$ and $5.4$ points, respectively.", "FLOAT SELECTED: Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005." 
] } ], "annotation_id": [ "dd3f3fb7924027f3d1d27347939df4aa60f5b89e" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "SParC BIBREF2 and CoSQL BIBREF6" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance." ], "highlighted_evidence": [ "Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. " ] } ], "annotation_id": [ "07cc2547a5636d8efd45277b27e554600311e0e7" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Concat\nTurn\nGate\nAction Copy\nTree Copy\nSQL Attn\nConcat + Action Copy\nConcat + Tree Copy\nConcat + SQL Attn\nTurn + Action Copy\nTurn + Tree Copy\nTurn + SQL Attn\nTurn + SQL Attn + Action Copy", "evidence": [ "To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect with using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected.", "FLOAT SELECTED: Figure 5: Question Match, Interaction Match and Turn i Match on SPARC and COSQL development sets. The numbers are averaged over 5 runs. The first column represents absolute values. The rest are improvements of different context modeling methods over CONCAT." ], "highlighted_evidence": [ "To conduct a thorough comparison, we evaluate 13 different context modeling methods upon the same parser, including 6 methods introduced in Section SECREF2 and 7 selective combinations of them (e.g., Concat+Action Copy). The experimental results are presented in Figure FIGREF37. 
Taken as a whole, it is very surprising to observe that none of these methods can be consistently superior to the others. The experimental results on BERT-based models show the same trend. Diving deep into the methods only using recent questions as context, we observe that Concat and Turn perform competitively, outperforming Gate by a large margin. With respect to the methods only using precedent SQL as context, Action Copy significantly surpasses Tree Copy and SQL Attn in all metrics. In addition, we observe that there is little difference in the performance of Action Copy and Concat, which implies that using precedent SQL as context gives almost the same effect with using recent questions. In terms of the combinations of different context modeling methods, they do not significantly improve the performance as we expected.", "FLOAT SELECTED: Figure 5: Question Match, Interaction Match and Turn i Match on SPARC and COSQL development sets. The numbers are averaged over 5 runs. The first column represents absolute values. The rest are improvements of different context modeling methods over CONCAT." ] } ], "annotation_id": [ "f85cd8cb2e930ddf579c2a28e1b9bedad79f19dc" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "SParC BIBREF2 and CoSQL BIBREF6" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance." ], "highlighted_evidence": [ "Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods." ] } ], "annotation_id": [ "b2a511c76b52fce2865b0cd74f268894a014b94d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
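The Ques.Match and Int.Match numbers quoted in the answer above, like the per-turn breakdown, are aggregations of exact set match accuracy. A small sketch of the aggregation is given below; the exact-set-match predicate is the SParC/Spider-style SQL comparison and is abstracted here as a callable, and the dialogue layout is an assumption.

```python
from collections import defaultdict

def aggregate_matches(dialogues, exact_set_match):
    """dialogues: list of dialogues, each a list of (predicted_sql, gold_sql) pairs per turn;
    exact_set_match: callable returning True when the two queries match exactly as sets."""
    q_correct, q_total, int_correct = 0, 0, 0
    per_turn = defaultdict(lambda: [0, 0])  # turn index -> [correct, total]
    for dialogue in dialogues:
        whole_dialogue_ok = True
        for turn, (pred, gold) in enumerate(dialogue, start=1):
            ok = bool(exact_set_match(pred, gold))
            q_correct += ok
            q_total += 1
            per_turn[turn][0] += ok
            per_turn[turn][1] += 1
            whole_dialogue_ok &= ok
        int_correct += whole_dialogue_ok
    return {
        "Ques.Match": q_correct / q_total,                           # accuracy over all questions
        "Int.Match": int_correct / len(dialogues),                   # accuracy over whole dialogues
        "Turn i Match": {t: c / n for t, (c, n) in per_turn.items()},
    }
```

Int.Match is the strictest of the three, since a single incorrect turn makes the whole dialogue count as wrong.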
{ "caption": [ "Figure 1: An example dialogue (right) and its database schema (left).", "Figure 2: The grammar rule and the abstract syntax tree for the SQL", "Figure 3: Different methods to incorporate recent h questions [xi−h, ...,xi−1]. (a) CONCAT: concatenate recent questions with xi as input; (b) TURN: employ a turn-level encoder to capture the inter-dependencies among questions in different turns; (c) GATE: use a gate mechanism to compute the importance of each question.", "Figure 4: Different methods to employ the precedent SQL yi−1. SQL Enc. is short for SQL Encoder, and Tree Ext. is short for Subtree Extractor. (a) SQL ATTN: attending over yi−1; (b) ACTION COPY: allow to copy actions from yi−1; (c) TREE COPY: allow to copy action subtrees extracted from yi−1.", "Table 1: We report the best performance observed in 5 runs on the development sets of both SPARC and COSQL, since their test sets are not public. We also conduct Wilcoxon signed-rank tests between our method and the baselines, and the results show the improvements of our model are significant with p < 0.005.", "Figure 6: Out-of-distribution experimental results (Turn i Match) of three models on SPARC and COSQL development sets.", "Figure 5: Question Match, Interaction Match and Turn i Match on SPARC and COSQL development sets. The numbers are averaged over 5 runs. The first column represents absolute values. The rest are improvements of different context modeling methods over CONCAT.", "Table 2: Different fine-grained types, their count and representative examples from the SPARC development set. one means one is a pronoun. Winners means Winners is a phrase intended to substitute losers.", "Figure 7: Different context modeling methods have different strengths on fine-grained types (better viewed in color)." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "3-Figure3-1.png", "4-Figure4-1.png", "4-Table1-1.png", "5-Figure6-1.png", "5-Figure5-1.png", "6-Table2-1.png", "6-Figure7-1.png" ] }
1909.00324
A Novel Aspect-Guided Deep Transition Model for Aspect Based Sentiment Analysis
Aspect based sentiment analysis (ABSA) aims to identify the sentiment polarity towards the given aspect in a sentence, while previous models typically exploit an aspect-independent (weakly associative) encoder for sentence representation generation. In this paper, we propose a novel Aspect-Guided Deep Transition model, named AGDT, which utilizes the given aspect to guide the sentence encoding from scratch with the specially-designed deep transition architecture. Furthermore, an aspect-oriented objective is designed to enforce AGDT to reconstruct the given aspect with the generated sentence representation. In doing so, our AGDT can accurately generate aspect-specific sentence representation, and thus conduct more accurate sentiment predictions. Experimental results on multiple SemEval datasets demonstrate the effectiveness of our proposed approach, which significantly outperforms the best reported results with the same setting.
{ "section_name": [ "Introduction", "Model Description", "Model Description ::: Aspect-Guided Encoder", "Model Description ::: Aspect-Reconstruction", "Model Description ::: Training Objective", "Experiments ::: Datasets and Metrics ::: Data Preparation.", "Experiments ::: Datasets and Metrics ::: Aspect-Category Sentiment Analysis.", "Experiments ::: Datasets and Metrics ::: Aspect-Term Sentiment Analysis.", "Experiments ::: Datasets and Metrics ::: Metrics.", "Experiments ::: Implementation Details", "Experiments ::: Baselines", "Experiments ::: Main Results and Analysis ::: Aspect-Category Sentiment Analysis Task", "Experiments ::: Main Results and Analysis ::: Aspect-Term Sentiment Analysis Task", "Experiments ::: Main Results and Analysis ::: Ablation Study", "Experiments ::: Main Results and Analysis ::: Impact of Model Depth", "Experiments ::: Main Results and Analysis ::: Effectiveness of Aspect-reconstruction Approach", "Experiments ::: Main Results and Analysis ::: Impact of Loss Weight @!START@$\\lambda $@!END@", "Experiments ::: Main Results and Analysis ::: Comparison on Three-Class for the Aspect-Term Sentiment Analysis Task", "Analysis and Discussion ::: Case Study and Visualization.", "Analysis and Discussion ::: Error Analysis.", "Related Work ::: Sentiment Analysis.", "Related Work ::: Deep Transition.", "Conclusions", "Acknowledgments" ], "paragraphs": [ [ "Aspect based sentiment analysis (ABSA) is a fine-grained task in sentiment analysis, which can provide important sentiment information for other natural language processing (NLP) tasks. There are two different subtasks in ABSA, namely, aspect-category sentiment analysis and aspect-term sentiment analysis BIBREF0, BIBREF1. Aspect-category sentiment analysis aims at predicting the sentiment polarity towards the given aspect, which belongs to one of several predefined categories and may not appear in the sentence. For instance, in Table TABREF2, aspect-category sentiment analysis predicts the sentiment polarity towards the aspect “food”, which does not appear in the sentence. By contrast, the goal of aspect-term sentiment analysis is to predict the sentiment polarity over the aspect term which is a subsequence of the sentence. For instance, aspect-term sentiment analysis predicts the sentiment polarity towards the aspect term “The appetizers”, which is a subsequence of the sentence. Additionally, the number of categories of the aspect term is more than one thousand in the training corpus.", "As shown in Table TABREF2, sentiment polarity may be different when different aspects are considered. Thus, the given aspect (term) is crucial to ABSA tasks BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Besides, BIBREF7 show that not all words of a sentence are useful for the sentiment prediction towards a given aspect (term). For instance, when the given aspect is “service”, the words “appetizers” and “ok” are irrelevant for the sentiment prediction. Therefore, an aspect-independent (weakly associative) encoder may encode such background words (e.g., “appetizers” and “ok”) into the final representation, which may lead to an incorrect prediction.", "Numerous existing models BIBREF8, BIBREF9, BIBREF10, BIBREF1 typically utilize an aspect-independent encoder to generate the sentence representation, and then apply the attention mechanism BIBREF11 or gating mechanism to conduct feature selection and extraction, while feature selection and extraction may be based on noisy representations.
In addition, some models BIBREF12, BIBREF13, BIBREF14 simply concatenate the aspect embedding with each word embedding of the sentence, and then leverage conventional Long Short-Term Memories (LSTMs) BIBREF15 to generate the sentence representation. However, it is insufficient to exploit the given aspect and conduct potentially complex feature selection and extraction.", "To address this issue, we investigate a novel architecture to enhance the capability of feature selection and extraction with the guidance of the given aspect from scratch. Based on the deep transition Gated Recurrent Unit (GRU) BIBREF16, BIBREF17, BIBREF18, BIBREF19, an aspect-guided GRU encoder is thus proposed, which utilizes the given aspect to guide the sentence encoding procedure at the very beginning stage. In particular, we specially design an aspect-gate for the deep transition GRU to control the information flow of each token input, with the aim of guiding feature selection and extraction from scratch, i.e. sentence representation generation. Furthermore, we design an aspect-oriented objective to enforce our model to reconstruct the given aspect, with the sentence representation generated by the aspect-guided encoder. We name this Aspect-Guided Deep Transition model as AGDT. With all the above contributions, our AGDT can accurately generate an aspect-specific representation for a sentence, and thus conduct more accurate sentiment predictions towards the given aspect.", "We evaluate the AGDT on multiple datasets of two subtasks in ABSA. Experimental results demonstrate the effectiveness of our proposed approach. And the AGDT significantly surpasses existing models with the same setting and achieves state-of-the-art performance among the models without using additional features (e.g., BERT BIBREF20). Moreover, we also provide empirical and visualization analysis to reveal the advantages of our model. Our contributions can be summarized as follows:", "We propose an aspect-guided encoder, which utilizes the given aspect to guide the encoding of a sentence from scratch, in order to conduct the aspect-specific feature selection and extraction at the very beginning stage.", "We propose an aspect-reconstruction approach to further guarantee that the aspect-specific information has been fully embedded into the sentence representation.", "Our AGDT substantially outperforms previous systems with the same setting, and achieves state-of-the-art results on benchmark datasets compared to those models without leveraging additional features (e.g., BERT)." ], [ "As shown in Figure FIGREF6, the AGDT model mainly consists of three parts: aspect-guided encoder, aspect-reconstruction and aspect concatenated embedding. The aspect-guided encoder is specially designed to guide the encoding of a sentence from scratch for conducting the aspect-specific feature selection and extraction at the very beginning stage. The aspect-reconstruction aims to guarantee that the aspect-specific information has been fully embedded in the sentence representation for more accurate predictions. The aspect concatenated embedding part is used to concatenate the aspect embedding and the generated sentence representation so as to make the final prediction." ], [ "The aspect-guided encoder is the core module of AGDT, which consists of two key components: Aspect-guided GRU and Transition GRU BIBREF16.", "A-GRU: Aspect-guided GRU (A-GRU) is a specially-designed unit for the ABSA tasks, which is an extension of the L-GRU proposed by BIBREF19. 
In particular, we design an aspect-gate to select aspect-specific representations through controlling the transformation scale of token embeddings at each time step.", "At time step $t$, the hidden state $\\mathbf {h}_{t}$ is computed as follows:", "where $\\odot $ represents element-wise product; $\\mathbf {z}_{t}$ is the update gate BIBREF16; and $\\widetilde{\\mathbf {h}}_{t}$ is the candidate activation, which is computed as:", "where $\\mathbf {g}_{t}$ denotes the aspect-gate; $\\mathbf {x}_{t}$ represents the input word embedding at time step $t$; $\\mathbf {r}_{t}$ is the reset gate BIBREF16; $\\textbf {H}_1(\\mathbf {x}_{t})$ and $\\textbf {H}_2(\\mathbf {x}_{t})$ are the linear transformation of the input $\\mathbf {x}_{t}$, and $\\mathbf {l}_{t}$ is the linear transformation gate for $\\mathbf {x}_{t}$ BIBREF19. $\\mathbf {r}_{t}$, $\\mathbf {z}_{t}$, $\\mathbf {l}_{t}$, $\\mathbf {g}_{t}$, $\\textbf {H}_{1}(\\mathbf {x}_{t})$ and $\\textbf {H}_{2}(\\mathbf {x}_{t})$ are computed as:", "where “$\\mathbf {a}$\" denotes the embedding of the given aspect, which is the same at each time step. The update gate $\\mathbf {z}_t$ and reset gate $\\mathbf {r}_t$ are the same as them in the conventional GRU.", "In Eq. (DISPLAY_FORM9) $\\sim $ (), the aspect-gate $\\mathbf {g}_{t}$ controls both nonlinear and linear transformations of the input $\\mathbf {x}_{t}$ under the guidance of the given aspect at each time step. Besides, we also exploit a linear transformation gate $\\mathbf {l}_{t}$ to control the linear transformation of the input, according to the current input $\\mathbf {x}_t$ and previous hidden state $\\mathbf {h}_{t-1}$, which has been proved powerful in the deep transition architecture BIBREF19.", "As a consequence, A-GRU can control both non-linear transformation and linear transformation for input $\\mathbf {x}_{t}$ at each time step, with the guidance of the given aspect, i.e., A-GRU can guide the encoding of aspect-specific features and block the aspect-irrelevant information at the very beginning stage.", "T-GRU: Transition GRU (T-GRU) BIBREF17 is a crucial component of deep transition block, which is a special case of GRU with only “state” as an input, namely its input embedding is zero embedding. As in Figure FIGREF6, a deep transition block consists of an A-GRU followed by several T-GRUs at each time step. For the current time step $t$, the output of one A-GRU/T-GRU is fed into the next T-GRU as the input. The output of the last T-GRU at time step $t$ is fed into A-GRU at the time step $t+1$. For a T-GRU, each hidden state at both time step $t$ and transition depth $i$ is computed as:", "where the update gate $\\mathbf {z}_{t}^i$ and the reset gate $\\mathbf {r}_{t}^i$ are computed as:", "The AGDT encoder is based on deep transition cells, where each cell is composed of one A-GRU at the bottom, followed by several T-GRUs. Such AGDT model can encode the sentence representation with the guidance of aspect information by utilizing the specially designed architecture." ], [ "We propose an aspect-reconstruction approach to guarantee the aspect-specific information has been fully embedded in the sentence representation. Particularly, we devise two objectives for two subtasks in ABSA respectively. In terms of aspect-category sentiment analysis datasets, there are only several predefined aspect categories. While in aspect-term sentiment analysis datasets, the number of categories of term is more than one thousand. 
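The following is a minimal, illustrative PyTorch sketch of the aspect-guided cell described in the Aspect-Guided Encoder subsection above. The extracted text omits the exact candidate-activation formula, so the combination below follows the L-GRU of BIBREF19 with the aspect gate applied to both the nonlinear and linear input transformations; the precise form and all module names are assumptions, not the authors' released implementation.

```python
# Illustrative sketch only. The candidate-activation form below is assumed
# (L-GRU-style, with the aspect gate g_t scaling both input paths).
import torch
import torch.nn as nn

class AspectGuidedGRUCell(nn.Module):
    """One A-GRU step: a GRU-style cell whose input transformations are
    scaled by an aspect gate computed from the aspect embedding."""
    def __init__(self, input_size: int, hidden_size: int, aspect_size: int):
        super().__init__()
        self.lin_x = nn.Linear(input_size, 3 * hidden_size)     # r, z, l gates (input part)
        self.lin_h = nn.Linear(hidden_size, 3 * hidden_size)    # r, z, l gates (state part)
        self.aspect_gate = nn.Linear(aspect_size, hidden_size)  # g_t from the aspect embedding a
        self.H1 = nn.Linear(input_size, hidden_size)            # nonlinear path of x_t
        self.H2 = nn.Linear(input_size, hidden_size)            # linear path of x_t
        self.U = nn.Linear(hidden_size, hidden_size)            # recurrent transform

    def forward(self, x_t, h_prev, aspect):
        gates = self.lin_x(x_t) + self.lin_h(h_prev)
        r_t, z_t, l_t = torch.sigmoid(gates).chunk(3, dim=-1)
        g_t = torch.sigmoid(self.aspect_gate(aspect))           # aspect gate, same at every step
        # Candidate activation: the aspect gate scales both the nonlinear and the
        # linear transformation of the token input (assumed combination).
        h_tilde = torch.tanh(g_t * self.H1(x_t) + r_t * self.U(h_prev)) \
                  + g_t * l_t * self.H2(x_t)
        # Standard GRU interpolation between previous state and candidate.
        return (1.0 - z_t) * h_prev + z_t * h_tilde
```

In the full deep transition block described above, the output of this cell would then pass through several T-GRUs (GRU cells whose token input is a zero embedding) before the state is handed to the next time step.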
In a real-life scenario, the number of terms is infinite, while the words that make up terms are limited. Thus, we design different loss functions for these two scenarios.", "For the aspect-category sentiment analysis task, we aim to reconstruct the aspect according to the aspect-specific representation. It is a multi-class problem. We take the softmax cross-entropy as the loss function:", "where C1 is the number of predefined aspects in the training example; ${y}_{i}^{c}$ is the ground-truth and ${p}_{i}^{c}$ is the estimated probability of an aspect.", "For the aspect-term sentiment analysis task, we intend to reconstruct the aspect term (which may consist of multiple words) according to the aspect-specific representation. It is a multi-label problem and thus the sigmoid cross-entropy is applied:", "where C2 denotes the number of words that constitute all terms in the training example, ${y}_{i}^{t}$ is the ground-truth and ${p}_{i}^{t}$ represents the predicted value of a word.", "Our aspect-oriented objective consists of $\mathcal {L}_{c}$ and $\mathcal {L}_{t}$, which guarantees that the aspect-specific information has been fully embedded into the sentence representation." ], [ "The final loss function is as follows:", "where the underlined part denotes the conventional loss function; C is the number of sentiment labels; ${y}_{i}$ is the ground-truth and ${p}_{i}$ represents the estimated probability of the sentiment label; $\mathcal {L}$ is the aspect-oriented objective, where Eq. DISPLAY_FORM14 is used for the aspect-category sentiment analysis task and Eq. DISPLAY_FORM15 for the aspect-term sentiment analysis task; and $\lambda $ is the weight of $\mathcal {L}$.", "As shown in Figure FIGREF6, we employ the aspect-reconstruction approach to reconstruct the aspect (term), where “softmax” is used for the aspect-category sentiment analysis task and “sigmoid” for the aspect-term sentiment analysis task. Additionally, we concatenate the aspect embedding onto the aspect-guided sentence representation to predict the sentiment polarity. Under this loss function (Eq. DISPLAY_FORM17), the AGDT can produce aspect-specific sentence representations." ], [ "We conduct experiments on two datasets of the aspect-category based task and two datasets of the aspect-term based task. For these four datasets, we name the full dataset “DS”. In each “DS”, there are sentences like the example in Table TABREF2, containing different sentiment labels, each of which is associated with an aspect (term). For instance, Table TABREF2 shows the customer's different attitudes towards two aspects: “food” (“The appetizers”) and “service”. In order to measure whether a model can detect different sentiment polarities in one sentence towards different aspects, we extract a hard dataset from each “DS”, named “HDS”, in which each sentence only has different sentiment labels associated with different aspects. When processing an original sentence $s$ that has multiple aspects ${a}_{1},{a}_{2},...,{a}_{n}$ and corresponding sentiment labels ${l}_{1},{l}_{2},...,{l}_{n}$ ($n$ is the number of aspects or terms in a sentence), the sentence will be expanded into (s, ${a}_{1}$, ${l}_{1}$), (s, ${a}_{2}$, ${l}_{2}$), ..., (s, ${a}_{n}$, ${l}_{n}$) in each dataset BIBREF21, BIBREF22, BIBREF1, i.e., there will be $n$ duplicated sentences associated with different aspects and labels." ], [ "For comparison, we follow BIBREF1 and use the restaurant reviews dataset of SemEval 2014 (“restaurant-14”) Task 4 BIBREF0 to evaluate our AGDT model.
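The display equations referenced in the Aspect-Reconstruction and Training Objective subsections above are not included in this extract. The LaTeX below is a plausible reconstruction from the surrounding "where ..." definitions (softmax cross-entropy over the C1 aspect categories, sigmoid cross-entropy over the C2 term words, and a $\lambda$-weighted sum with the conventional sentiment loss); the symbol $\mathcal{J}$ for the total objective is introduced here only for convenience, and the published equations may differ in notation.

```latex
% Reconstructed sketch of the objectives (notation partly assumed).
\begin{align}
\mathcal{L}_{c} &= -\sum_{i=1}^{C1} y_{i}^{c} \log p_{i}^{c}
  && \text{aspect-category reconstruction (softmax cross-entropy)} \\
\mathcal{L}_{t} &= -\sum_{i=1}^{C2} \big[\, y_{i}^{t} \log p_{i}^{t}
  + (1 - y_{i}^{t}) \log (1 - p_{i}^{t}) \,\big]
  && \text{aspect-term reconstruction (sigmoid cross-entropy)} \\
\mathcal{J} &= \underbrace{-\sum_{i=1}^{C} y_{i} \log p_{i}}_{\text{conventional sentiment loss}}
  \; + \; \lambda \, \mathcal{L},
  \qquad \mathcal{L} \in \lbrace \mathcal{L}_{c}, \mathcal{L}_{t} \rbrace
\end{align}
```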
The dataset contains five predefined aspects and four sentiment labels. A large dataset (“restaurant-large”) involves restaurant reviews of three years, i.e., 2014 $\\sim $ 2016 BIBREF0. There are eight predefined aspects and three labels in that dataset. When creating the “restaurant-large” dataset, we follow the same procedure as in BIBREF1. Statistics of datasets are shown in Table TABREF19." ], [ "We use the restaurant and laptop review datasets of SemEval 2014 Task 4 BIBREF0 to evaluate our model. Both datasets contain four sentiment labels. Meanwhile, we also conduct a three-class experiment, in order to compare with some work BIBREF13, BIBREF3, BIBREF7 which removed “conflict” labels. Statistics of both datasets are shown in Table TABREF20." ], [ "The evaluation metrics are accuracy. All instances are shown in Table TABREF19 and Table TABREF20. Each experiment is repeated five times. The mean and the standard deviation are reported." ], [ "We use the pre-trained 300d Glove embeddings BIBREF23 to initialize word embeddings, which is fixed in all models. For out-of-vocabulary words, we randomly sample their embeddings by the uniform distribution $U(-0.25, 0.25)$. Following BIBREF8, BIBREF24, BIBREF25, we take the averaged word embedding as the aspect representation for multi-word aspect terms. The transition depth of deep transition model is 4 (see Section SECREF30). The hidden size is set to 300. We set the dropout rate BIBREF26 to 0.5 for input token embeddings and 0.3 for hidden states. All models are optimized using Adam optimizer BIBREF27 with gradient clipping equals to 5 BIBREF28. The initial learning rate is set to 0.01 and the batch size is set to 4096 at the token level. The weight of the reconstruction loss $\\lambda $ in Eq. DISPLAY_FORM17 is fine-tuned (see Section SECREF30) and respectively set to 0.4, 0.4, 0.2 and 0.5 for four datasets." ], [ "To comprehensively evaluate our AGDT, we compare the AGDT with several competitive models.", "ATAE-LSTM. It is an attention-based LSTM model. It appends the given aspect embedding with each word embedding, and then the concatenated embedding is taken as the input of LSTM. The output of LSTM is appended aspect embedding again. Furthermore, attention is applied to extract features for final predictions.", "CNN. This model focuses on extracting n-gram features to generate sentence representation for the sentiment classification.", "TD-LSTM. This model uses two LSTMs to capture the left and right context of the term to generate target-dependent representations for the sentiment prediction.", "IAN. This model employs two LSTMs and interactive attention mechanism to learn representations of the sentence and the aspect, and concatenates them for the sentiment prediction.", "RAM. This model applies multiple attentions and memory networks to produce the sentence representation.", "GCAE. It uses CNNs to extract features and then employs two Gated Tanh-Relu units to selectively output the sentiment information flow towards the aspect for predicting sentiment labels." ], [ "We present the overall performance of our model and baseline models in Table TABREF27. Results show that our AGDT outperforms all baseline models on both “restaurant-14” and “restaurant-large” datasets. ATAE-LSTM employs an aspect-weakly associative encoder to generate the aspect-specific sentence representation by simply concatenating the aspect, which is insufficient to exploit the given aspect. 
Although GCAE incorporates the gating mechanism to control the sentiment information flow according to the given aspect, the information flow is generated by an aspect-independent encoder. Compared with GCAE, our AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two dataset, respectively. These results demonstrate that our AGDT can sufficiently exploit the given aspect to generate the aspect-guided sentence representation, and thus conduct accurate sentiment prediction. Our model benefits from the following aspects. First, our AGDT utilizes an aspect-guided encoder, which leverages the given aspect to guide the sentence encoding from scratch and generates the aspect-guided representation. Second, the AGDT guarantees that the aspect-specific information has been fully embedded in the sentence representation via reconstructing the given aspect. Third, the given aspect embedding is concatenated on the aspect-guided sentence representation for final predictions.", "The “HDS”, which is designed to measure whether a model can detect different sentiment polarities in a sentence, consists of replicated sentences with different sentiments towards multiple aspects. Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9% respectively) on both datasets. This indicates that the given aspect information is very pivotal to the accurate sentiment prediction, especially when the sentence has different sentiment labels, which is consistent with existing work BIBREF2, BIBREF3, BIBREF4. Those results demonstrate the effectiveness of our model and suggest that our AGDT has better ability to distinguish the different sentiments of multiple aspects compared to GCAE." ], [ "As shown in Table TABREF28, our AGDT consistently outperforms all compared methods on both domains. In this task, TD-LSTM and ATAE-LSTM use a aspect-weakly associative encoder. IAN, RAM and GCAE employ an aspect-independent encoder. In the “DS” part, our AGDT model surpasses all baseline models, which shows that the inclusion of A-GRU (aspect-guided encoder), aspect-reconstruction and aspect concatenated embedding has an overall positive impact on the classification process.", "In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain, which shows that our AGDT has stronger ability for the multi-sentiment problem against GCAE. These results further demonstrate that our model works well across tasks and datasets." ], [ "We conduct ablation experiments to investigate the impacts of each part in AGDT, where the GRU is stacked with 4 layers. Here “AC” represents aspect concatenated embedding , “AG” stands for A-GRU (Eq. (DISPLAY_FORM8) $\\sim $ ()) and “AR” denotes the aspect-reconstruction (Eq. (DISPLAY_FORM14) $\\sim $ (DISPLAY_FORM17)).", "From Table TABREF31 and Table TABREF32, we can conclude:", "Deep Transition (DT) achieves superior performances than GRU, which is consistent with previous work BIBREF18, BIBREF19 (2 vs. 1).", "Utilizing “AG” to guide encoding aspect-related features from scratch has a significant impact for highly competitive results and particularly in the “HDS” part, which demonstrates that it has the stronger ability to identify different sentiment polarities towards different aspects. (3 vs. 2).", "Aspect concatenated embedding can promote the accuracy to a degree (4 vs. 3).", "The aspect-reconstruction approach (“AR”) substantially improves the performance, especially in the “HDS\" part (5 vs. 
4).", "The results in row 6 show that all modules have an overall positive impact on the sentiment classification." ], [ "We have demonstrated the effectiveness of the AGDT. Here, we investigate the impact of the model depth of AGDT, varying the depth from 1 to 6. Table TABREF39 shows the change of accuracy on the test sets as the depth increases. We find that the best results are obtained when the depth is equal to 4 in most cases, and further depth does not provide considerable performance improvement." ], [ "Here, we investigate how well the AGDT can reconstruct the aspect information. For the aspect-term reconstruction, we count a reconstruction as correct only when all words of the term are reconstructed. Table TABREF40 shows all results on the four test datasets, which again demonstrates the effectiveness of the aspect-reconstruction approach." ], [ "We randomly sample a temporary development set from the “HDS” part of the training set to choose the lambda for each dataset, and we investigate the impact of $\lambda $ on the aspect-oriented objectives. Specifically, $\lambda $ is increased from 0.1 to 1.0. Figure FIGREF33 illustrates all results on the four “HDS” datasets, which show that reconstructing the given aspect can enhance aspect-specific sentiment features and thus obtain better performance." ], [ "We also conduct a three-class experiment to compare our AGDT with previous models, i.e., IARM, TNet, VAE, PBAN, AOA and MGAN, in Table TABREF41. These previous models are based on an aspect-independent (weakly associative) encoder to generate sentence representations. Results on all domains suggest that our AGDT substantially outperforms most competitive models, except for TNet on the laptop dataset. The reason may be that TNet incorporates additional features (e.g., position features, local ngrams and word-level features) compared to ours (only word-level features)." ], [ "To give an intuitive understanding of how the proposed A-GRU works from scratch with different aspects, we take a review sentence as an example. The example “the appetizers are ok, but the service is slow.”, shown in Table TABREF2, has different sentiment labels towards different aspects. The color depth denotes the level of semantic relatedness between the given aspect and each word: a deeper color means a stronger relation to the given aspect.", "Figure FIGREF43 shows that the A-GRU can effectively guide the encoding of aspect-related features for the given aspect and identify the corresponding sentiment. In another case, “overpriced Japanese food with mediocre service.”, there are two extremely strong sentiment words. As the top of Figure FIGREF44 shows, our A-GRU assigns almost the same weight to the words “overpriced” and “mediocre”. The bottom of Figure FIGREF44 shows that reconstructing the given aspect can effectively enhance aspect-specific sentiment features and produce correct sentiment predictions." ], [ "We further investigate the errors made by AGDT, which can be roughly divided into 3 types. 1) The decision boundary between sentiment polarities is unclear; even the annotators cannot be sure of the sentiment orientation towards the given aspect in the sentence. 2) “Conflict/neutral” instances are very easily misclassified as “positive” or “negative”, due to the imbalanced label distribution in the training corpus. 
3) The polarity of complex instances is hard to predict, for example sentences that express subtle emotions, which are hard to capture effectively, or that contain negation words (e.g., never, less and not), which easily affect the sentiment polarity." ], [ "There are various kinds of sentiment analysis tasks, such as document-level BIBREF34, sentence-level BIBREF35, BIBREF36, aspect-level BIBREF0, BIBREF37 and multimodal BIBREF38, BIBREF39 sentiment analysis. For aspect-level sentiment analysis, previous work typically applies an attention mechanism BIBREF11 combined with a memory network BIBREF40 or gating units to solve this task BIBREF8, BIBREF41, BIBREF42, BIBREF1, BIBREF43, BIBREF44, BIBREF45, BIBREF46, where an aspect-independent encoder is used to generate the sentence representation. In addition, some works leverage an aspect-weakly associative encoder to generate aspect-specific sentence representations BIBREF12, BIBREF13, BIBREF14. All of these methods make insufficient use of the given aspect information. There are also some works that jointly extract the aspect term (and opinion term) and predict its sentiment polarity BIBREF47, BIBREF48, BIBREF49, BIBREF50, BIBREF51, BIBREF52, BIBREF53, BIBREF54, BIBREF55. In this paper, we focus on the latter problem and leave aspect extraction BIBREF56 to future work. Some works BIBREF57, BIBREF58, BIBREF59, BIBREF30, BIBREF60, BIBREF51 employ the well-known BERT BIBREF20 or document-level corpora to enhance ABSA tasks, which will be considered in our future work to further improve the performance." ], [ "Deep transition has proved its superiority in language modeling BIBREF17 and machine translation BIBREF18, BIBREF19. We follow the deep transition architecture in BIBREF19 and extend it by incorporating a novel A-GRU for ABSA tasks." ], [ "In this paper, we propose a novel Aspect-Guided Deep Transition model (AGDT) for ABSA tasks, based on a deep transition architecture. Our AGDT can guide the sentence encoding from scratch for aspect-specific feature selection and extraction. Furthermore, we design an aspect-reconstruction approach to force the AGDT to reconstruct the given aspect with the generated sentence representation. Empirical studies on four datasets suggest that the AGDT substantially outperforms existing state-of-the-art models on both the aspect-category and aspect-term sentiment analysis tasks of ABSA, without additional features." ], [ "We sincerely thank the anonymous reviewers for their thorough reviewing and insightful suggestions. Liang, Xu, and Chen are supported by the National Natural Science Foundation of China (Contract 61370130, 61976015, 61473294 and 61876198), and the Beijing Municipal Natural Science Foundation (Contract 4172047), and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010)." ] ] }
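As a small illustration of the Data Preparation subsection of the paper above: each sentence with multiple aspects is expanded into one (sentence, aspect, label) instance per aspect, and the hard subset "HDS" keeps only sentences whose aspects carry more than one distinct sentiment label. The function and field names below are hypothetical.

```python
# Illustrative sketch of the DS / HDS construction described above.
def expand_instances(examples):
    """examples: iterable of (sentence, [(aspect, label), ...]) pairs."""
    ds, hds = [], []
    for sentence, aspect_labels in examples:
        expanded = [(sentence, a, l) for a, l in aspect_labels]
        ds.extend(expanded)                        # full dataset "DS"
        if len({l for _, l in aspect_labels}) > 1: # differing labels across aspects
            hds.extend(expanded)                   # hard subset "HDS"
    return ds, hds
```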
{ "question": [ "How big is the improvement over the state-of-the-art results?", "Is the model evaluated against other Aspect-Based models?" ], "question_id": [ "1763a029daca7cab10f18634aba02a6bd1b6faa7", "f9de9ddea0c70630b360167354004ab8cbfff041" ], "nlp_background": [ "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two dataset", "Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9% respectively) on both datasets", "In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Experiments ::: Main Results and Analysis ::: Aspect-Category Sentiment Analysis Task", "We present the overall performance of our model and baseline models in Table TABREF27. Results show that our AGDT outperforms all baseline models on both “restaurant-14” and “restaurant-large” datasets. ATAE-LSTM employs an aspect-weakly associative encoder to generate the aspect-specific sentence representation by simply concatenating the aspect, which is insufficient to exploit the given aspect. Although GCAE incorporates the gating mechanism to control the sentiment information flow according to the given aspect, the information flow is generated by an aspect-independent encoder. Compared with GCAE, our AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two dataset, respectively. These results demonstrate that our AGDT can sufficiently exploit the given aspect to generate the aspect-guided sentence representation, and thus conduct accurate sentiment prediction. Our model benefits from the following aspects. First, our AGDT utilizes an aspect-guided encoder, which leverages the given aspect to guide the sentence encoding from scratch and generates the aspect-guided representation. Second, the AGDT guarantees that the aspect-specific information has been fully embedded in the sentence representation via reconstructing the given aspect. Third, the given aspect embedding is concatenated on the aspect-guided sentence representation for final predictions.", "The “HDS”, which is designed to measure whether a model can detect different sentiment polarities in a sentence, consists of replicated sentences with different sentiments towards multiple aspects. Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9% respectively) on both datasets. This indicates that the given aspect information is very pivotal to the accurate sentiment prediction, especially when the sentence has different sentiment labels, which is consistent with existing work BIBREF2, BIBREF3, BIBREF4. Those results demonstrate the effectiveness of our model and suggest that our AGDT has better ability to distinguish the different sentiments of multiple aspects compared to GCAE.", "Experiments ::: Main Results and Analysis ::: Aspect-Term Sentiment Analysis Task", "In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain, which shows that our AGDT has stronger ability for the multi-sentiment problem against GCAE. 
These results further demonstrate that our model works well across tasks and datasets." ], "highlighted_evidence": [ "Experiments ::: Main Results and Analysis ::: Aspect-Category Sentiment Analysis Task", "Compared with GCAE, our AGDT improves the performance by 2.4% and 1.6% in the “DS” part of the two dataset, respectively. These results demonstrate that our AGDT can sufficiently exploit the given aspect to generate the aspect-guided sentence representation, and thus conduct accurate sentiment prediction.", "The “HDS”, which is designed to measure whether a model can detect different sentiment polarities in a sentence, consists of replicated sentences with different sentiments towards multiple aspects. Our AGDT surpasses GCAE by a very large margin (+11.4% and +4.9% respectively) on both datasets.", "Experiments ::: Main Results and Analysis ::: Aspect-Term Sentiment Analysis Task", "In the “HDS” part, the AGDT model obtains +3.6% higher accuracy than GCAE on the restaurant domain and +4.2% higher accuracy on the laptop domain, which shows that our AGDT has stronger ability for the multi-sentiment problem against GCAE. These results further demonstrate that our model works well across tasks and datasets." ] } ], "annotation_id": [ "330a435706fa26dea6869c11033f80d466cc541b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Experiments ::: Baselines", "To comprehensively evaluate our AGDT, we compare the AGDT with several competitive models." ], "highlighted_evidence": [ "Experiments ::: Baselines\nTo comprehensively evaluate our AGDT, we compare the AGDT with several competitive models." ] } ], "annotation_id": [ "07f6f5475bdfa5f55cb27952613b1a0c68544048" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: The instance contains different sentiment polarities towards two aspects.", "Figure 1: The overview of AGDT. The bottom right dark node (above the aspect embedding) is the aspect gate and other dark nodes (⊗) means element-wise multiply for the input token and the aspect gate. The aspect-guided encoder consists of a L-GRU (the circle frames fused with a small circle on above) at the bottom followed by several T-GRUs (the circle frames) from bottom to up.", "Table 2: Statistics of datasets for the aspect-category sentiment analysis task.", "Table 3: Statistics of datasets for the aspect-term sentiment analysis task. The ‘NC’ indicates No “Conflict” label, which is just removed the “conflict” label and is prepared for the three-class experiment.", "Table 4: The accuracy of the aspect-category sentiment analysis task. ‘*’ refers to citing from GCAE (Xue and Li, 2018).", "Table 5: The accuracy of the aspect-term sentiment analysis task. ‘*’ refers to citing from GCAE (Xue and Li, 2018).", "Table 6: Ablation study of the AGDT on the aspectcategory sentiment analysis task. Here “AC”, “AG” and “AR” represent aspect concatenated embedding, A-GRU and aspect-reconstruction, respectively, ‘ √ ’ and ‘×’ denotes whether to apply the operation. ‘Rest14’: Restaurant-14,‘Rest-Large’: Restaurant-Large.", "Table 7: Ablation study of the AGDT on the aspectterm sentiment analysis task.", "Figure 2: The impact of λ w.r.t. accuracy on “HDS”.", "Table 8: The accuracy of model depth on the four datasets. ‘D1’: Restaurant-14, ‘D2’: Restaurant-Large, ‘D3’: Restaurant, ‘D4’: Laptop.", "Table 10: The three-class accuracy of the aspect-term sentiment analysis task on SemEval 2014. ‘*’ refers to citing from the original paper. ‘Rest.’: Restaurant.", "Figure 3: The output of A-GRU.", "Table 9: The accuracy of aspect reconstruction on the full test set. ‘Rest-14’: Restaurant-14, ‘Rest-Large’: Restaurant-Large, ‘Rest.’: Restaurant." ], "file": [ "1-Table1-1.png", "3-Figure1-1.png", "5-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "7-Table7-1.png", "7-Figure2-1.png", "8-Table8-1.png", "8-Table10-1.png", "8-Figure3-1.png", "8-Table9-1.png" ] }
1905.06566
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these \emph{inaccurate} labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders \cite{devlin:2018:arxiv}, we propose {\sc Hibert} (as shorthand for {\bf HI}erachical {\bf B}idirectional {\bf E}ncoder {\bf R}epresentations from {\bf T}ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained {\sc Hibert} to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets.
{ "section_name": [ "Introduction", "Related Work", "Model", "Document Representation", "Pre-training", "Extractive Summarization", "Experiments", "Datasets", "Implementation Details", "Evaluations", "Results", "Conclusions" ], "paragraphs": [ [ "Automatic document summarization is the task of rewriting a document into its shorter form while still retaining its important content. Over the years, many paradigms for document summarization have been explored (see Nenkova:McKeown:2011 for an overview). The most popular two among them are extractive approaches and abstractive approaches. As the name implies, extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases which are not in the original document.", "Extractive summarization is usually modeled as a sentence ranking problem with length constraints (e.g., max number of words or sentences). Top ranked sentences (under constraints) are selected as summaries. Early attempts mostly leverage manually engineered features BIBREF1 . Based on these sparse features, sentence are selected using a classifier or a regression model. Later, the feature engineering part in this paradigm is replaced with neural networks. cheng:2016:acl propose a hierarchical long short-term memory network (LSTM; BIBREF2 ) to encode a document and then use another LSTM to predict binary labels for each sentence in the document. This architecture is widely adopted recently BIBREF3 , BIBREF4 , BIBREF5 . Our model also employs a hierarchical document encoder, but we adopt a hierarchical transformer BIBREF6 rather a hierarchical LSTM. Because recent studies BIBREF6 , BIBREF0 show the transformer model performs better than LSTM in many tasks.", "Abstractive models do not attract much attention until recently. They are mostly based on sequence to sequence (seq2seq) models BIBREF7 , where a document is viewed a sequence and its summary is viewed as another sequence. Although seq2seq based summarizers can be equipped with copy mechanism BIBREF8 , BIBREF9 , coverage model BIBREF9 and reinforcement learning BIBREF10 , there is still no guarantee that the generated summaries are grammatical and convey the same meaning as the original document does. It seems that extractive models are more reliable than their abstractive counterparts.", "However, extractive models require sentence level labels, which are usually not included in most summarization datasets (most datasets only contain document-summary pairs). Sentence labels are usually obtained by rule-based methods (e.g., maximizing the ROUGE score between a set of sentences and reference summaries) and may not be accurate. Extractive models proposed recently BIBREF11 , BIBREF3 employ hierarchical document encoders and even have neural decoders, which are complex. Training such complex neural models with inaccurate binary labels is challenging. We observed in our initial experiments on one of our dataset that our extractive model (see Section \"Extractive Summarization\" for details) overfits to the training set quickly after the second epoch, which indicates the training set may not be fully utilized. 
Inspired by the recent pre-training work in natural language processing BIBREF12 , BIBREF13 , BIBREF0 , our solution to this problem is to first pre-train the “complex”' part (i.e., the hierarchical encoder) of the extractive model on unlabeled data and then we learn to classify sentences with our model initialized from the pre-trained encoder. In this paper, we propose Hibert, which stands for HIerachical Bidirectional Encoder Representations from Transformers. We design an unsupervised method to pre-train Hibert for document modeling. We apply the pre-trained Hibert to the task of document summarization and achieve state-of-the-art performance on both the CNN/Dailymail and New York Times dataset." ], [ "In this section, we introduce work on extractive summarization, abstractive summarization and pre-trained natural language processing models. For a more comprehensive review of summarization, we refer the interested readers to Nenkova:McKeown:2011 and Mani:01." ], [ "In this section, we present our model Hibert. We first introduce how documents are represented in Hibert. We then describe our method to pre-train Hibert and finally move on to the application of Hibert to summarization." ], [ "Let $\\mathcal {D} = (S_1, S_2, \\dots , S_{| \\mathcal {D} |})$ denote a document, where $S_i = (w_1^i, w_2^i, \\dots , w_{|S_i|}^i)$ is a sentence in $\\mathcal {D}$ and $w_j^i$ a word in $S_i$ . Note that following common practice in natural language processing literatures, $w_{|S_i|}^i$ is an artificial EOS (End Of Sentence) token. To obtain the representation of $\\mathcal {D}$ , we use two encoders: a sentence encoder to transform each sentence in $\\mathcal {D}$ to a vector and a document encoder to learn sentence representations given their surrounding sentences as context. Both the sentence encoder and document encoder are based on the Transformer encoder described in vaswani:2017:nips. As shown in Figure 1 , they are nested in a hierarchical fashion. A transformer encoder usually has multiple layers and each layer is composed of a multi-head self attentive sub-layer followed by a feed-forward sub-layer with residual connections BIBREF30 and layer normalizations BIBREF31 . For more details of the Transformer encoder, we refer the interested readers to vaswani:2017:nips. To learn the representation of $S_i$ , $S_i= (w_1^i, w_2^i, \\dots , w_{|S_i|}^i)$ is first mapped into continuous space ", "$$\\begin{split}\n\\mathbf {E}_i = (\\mathbf {e}_1^i, \\mathbf {e}_2^i, \\dots , \\mathbf {e}_{|S_i|}^i) \\\\\n\\quad \\quad \\text{where} \\quad \\mathbf {e}_j^i = e(w_j^i) + \\mathbf {p}_j\n\\end{split}$$ (Eq. 6) ", " where $e(w_j^i)$ and $\\mathbf {p}_j$ are the word and positional embeddings of $w_j^i$ , respectively. The word embedding matrix is randomly initialized and we adopt the sine-cosine positional embedding BIBREF6 . Then the sentence encoder (a Transformer) transforms $\\mathbf {E}_i$ into a list of hidden representations $(\\mathbf {h}_1^i, \\mathbf {h}_2^i, \\dots , \\mathbf {h}_{|S_i|}^i)$ . We take the last hidden representation $\\mathbf {h}_{|S_i|}^i$ (i.e., the representation at the EOS token) as the representation of sentence $S_i$ . Similar to the representation of each word in $S_i$ , we also take the sentence position into account. The final representation of $S_i$ is ", "$$\\hat{\\mathbf {h}}_i = \\mathbf {h}_{|S_i|}^i + \\mathbf {p}_i$$ (Eq. 
8) ", "Note that words and sentences share the same positional embedding matrix.", "In analogy to the sentence encoder, as shown in Figure 1 , the document encoder is yet another Transformer but applies on the sentence level. After running the Transformer on a sequence of sentence representations $( \\hat{\\mathbf {h}}_1, \\hat{\\mathbf {h}}_2, \\dots , \\hat{\\mathbf {h}}_{|\\mathcal {D}|} )$ , we obtain the context sensitive sentence representations $( \\mathbf {d}_1, \\mathbf {d}_2, \\dots , \\mathbf {d}_{|\\mathcal {D}|} )$ . Now we have finished the encoding of a document with a hierarchical bidirectional transformer encoder Hibert. Note that in previous work, document representation are also learned with hierarchical models, but each hierarchy is a Recurrent Neural Network BIBREF3 , BIBREF21 or Convolutional Neural Network BIBREF11 . We choose the Transformer because it outperforms CNN and RNN in machine translation BIBREF6 , semantic role labeling BIBREF32 and other NLP tasks BIBREF0 . In the next section we will introduce how we train Hibert with an unsupervised training objective." ], [ "Most recent encoding neural models used in NLP (e.g., RNNs, CNNs or Transformers) can be pre-trained by predicting a word in a sentence (or a text span) using other words within the same sentence (or span). For example, ELMo BIBREF12 and OpenAI-GPT BIBREF13 predict a word using all words on its left (or right); while word2vec BIBREF33 predicts one word with its surrounding words in a fixed window and BERT BIBREF0 predicts (masked) missing words in a sentence given all the other words.", "All the models above learn the representation of a sentence, where its basic units are words. Hibert aims to learn the representation of a document, where its basic units are sentences. Therefore, a natural way of pre-training a document level model (e.g., Hibert) is to predict a sentence (or sentences) instead of a word (or words). We could predict a sentence in a document with all the sentences on its left (or right) as in a (document level) language model. However, in summarization, context on both directions are available. We therefore opt to predict a sentence using all sentences on both its left and right.", "Specifically, suppose $\\mathcal {D} = (S_1, S_2, \\dots , S_{| \\mathcal {D} |})$ is a document, where $S_i = (w_1^i, w_2^i, \\dots , w_{|S_i|}^i)$ is a sentence in it. We randomly select 15% of the sentences in $\\mathcal {D}$ and mask them. Then, we predict these masked sentences. The prediction task here is similar with the Cloze task BIBREF34 , BIBREF0 , but the missing part is a sentence. However, during test time the input document is not masked, to make our model can adapt to documents without masks, we do not always mask the selected sentences. Once a sentence is selected (as one of the 15% selected masked sentences), we transform it with one of three methods below. We will use an example to demonstrate the transformation. For instance, we have the following document and the second sentence is selected:", "William Shakespeare is a poet . He died in 1616 . He is regarded as the greatest writer .", "In 80% of the cases, we mask the selected sentence (i.e., we replace each word in the sentence with a mask token [MASK]). The document above becomes William Shakespeare is a poet . [MASK] [MASK] [MASK] [MASK] [MASK] He is regarded as the greatest writer . (where “He died in 1616 . ” is masked).", "In 10% of the cases, we keep the selected sentence as it is. 
This strategy is to simulate the input document during test time (with no masked sentences).", "In the rest 10% cases, we replace the selected sentence with a random sentence. In this case, the document after transformation is William Shakespeare is a poet . Birds can fly . He is regarded as the greatest writer . The second sentence is replaced with “Birds can fly .” This strategy intends to add some noise during training and make the model more robust.", "After the application of the above procedures to a document $\\mathcal {D} = (S_1, S_2, \\dots , S_{| \\mathcal {D} |})$ , we obtain the masked document $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$ . Let $\\mathcal {K} $ denote the set of indicies of selected sentences in $\\mathcal {D}$ . Now we are ready to predict the masked sentences $\\mathcal {M} = \\lbrace S_k | k \\in \\mathcal {K} \\rbrace $ using $\\widetilde{ \\mathcal {D} }$ . We first apply the hierarchical encoder Hibert in Section \"Conclusions\" to $\\widetilde{ \\mathcal {D} }$ and obtain its context sensitive sentence representations $( \\tilde{ \\mathbf {d}_1 }, \\tilde{ \\mathbf {d}_2 }, \\dots , \\tilde{ \\mathbf {d}_{| \\mathcal {D} |} } )$ . We will demonstrate how we predict the masked sentence $S_k = (w_0^k, w_1^k, w_2^k, \\dots , w_{|S_k|}^k)$ one word per step ( $w_0^k$ is an artificially added BOS token). At the $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$0 th step, we predict $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$1 given $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$2 and $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$3 . $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$4 already encodes the information of $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$5 with a focus around its $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$6 th sentence $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$7 . As shown in Figure 1 , we employ a Transformer decoder BIBREF6 to predict $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$8 with $\\widetilde{ \\mathcal {D} }= (\\tilde{S_1}, \\tilde{S_2}, \\dots , \\tilde{S_{| \\mathcal {D} |}})$9 as its additional input. The transformer decoder we used here is slightly different from the original one. The original decoder employs two multi-head attention layers to include both the context in encoder and decoder, while we only need one to learn the decoder context, since the context in encoder is a vector (i.e., $\\mathcal {K} $0 ). Specifically, after applying the word and positional embeddings to ( $\\mathcal {K} $1 ), we obtain $\\mathcal {K} $2 (also see Equation 6 ). 
Then we apply multi-head attention sub-layer to $\\mathcal {K} $3 : ", "$$\\begin{split}\n\\tilde{\\mathbf {h}_{j-1}} &= \\text{MultiHead}(\\mathbf {q}_{j-1}, \\mathbf {K}_{j-1}, \\mathbf {V}_{j-1}) \\\\\n\\mathbf {q}_{j-1} &= \\mathbf {W}^Q \\: \\tilde{\\mathbf {e}_{j-1}^k} \\\\\n\\mathbf {K}_{j-1} &= \\mathbf {W}^K \\: \\widetilde{ \\mathbf {E} }^k_{1:j-1} \\\\\n\\mathbf {K}_{j-1} &= \\mathbf {W}^V \\: \\widetilde{ \\mathbf {E} }^k_{1:j-1}\n\\end{split}$$ (Eq. 13) ", " where $\\mathbf {q}_{j-1}$ , $\\mathbf {K}_{j-1}$ , $\\mathbf {V}_{j-1}$ are the input query, key and value matrices of the multi-head attention function BIBREF6 $\\text{MultiHead}(\\cdot , \\cdot , \\cdot )$ , respectively. $\\mathbf {W}^Q \\in \\mathbb {R}^{d \\times d}$ , $\\mathbf {W}^K \\in \\mathbb {R}^{d \\times d}$ and $\\mathbf {W}^V \\in \\mathbb {R}^{d \\times d}$ are weight matrices.", "Then we include the information of $\\widetilde{ \\mathcal {D} }$ by addition: ", "$$\\tilde{\\mathbf {x}_{j-1}} = \\tilde{\\mathbf {h}_{j-1}} + \\tilde{ \\mathbf {d}_k }$$ (Eq. 14) ", "We also follow a feedforward sub-layer (one hidden layer with ReLU BIBREF35 activation function) after $\\tilde{\\mathbf {x}_{j-1}}$ as in vaswani:2017:nips: ", "$$\\tilde{\\mathbf {g}_{j-1}} = \\mathbf {W}^{ff}_2 \\max (0, \\mathbf {W}^{ff}_1 \\tilde{\\mathbf {x}_{j-1}} + \\mathbf {b}_1) + \\mathbf {b}_2$$ (Eq. 15) ", "Note that the transformer decoder can have multiple layers by applying Equation ( 13 ) to ( 15 ) multiple times and we only show the computation of one layer for simplicity.", "The probability of $w_j^k$ given $w_0^k,\\dots ,w_{j-1}^k$ and $\\widetilde{ \\mathcal {D} }$ is: ", "$$p( w_j^k | w_{0:j-1}^k, \\widetilde{ \\mathcal {D} } ) = \\text{softmax}( \\mathbf {W}^O \\: \\tilde{\\mathbf {g}_{j-1}} )$$ (Eq. 16) ", "Finally the probability of all masked sentences $ \\mathcal {M} $ given $\\widetilde{ \\mathcal {D} }$ is ", "$$p(\\mathcal {M} | \\widetilde{ \\mathcal {D} }) = \\prod _{k \\in \\mathcal {K}} \\prod _{j=1}^{|S_k|} p(w_j^k | w_{0:j-1}^k, \\widetilde{ \\mathcal {D} })$$ (Eq. 17) ", "The model above can be trained by minimizing the negative log-likelihood of all masked sentences given their paired documents. We can in theory have unlimited amount of training data for Hibert, since they can be generated automatically from (unlabeled) documents. Therefore, we can first train Hibert on large amount of data and then apply it to downstream tasks. In the next section, we will introduce its application to document summarization." ], [ "Extractive summarization selects the most important sentences in a document as its summary. In this section, summarization is modeled as a sequence labeling problem. Specifically, a document is viewed as a sequence of sentences and a summarization model is expected to assign a True or False label for each sentence, where True means this sentence should be included in the summary. In the following, we will introduce the details of our summarization model based Hibert.", "Let $\\mathcal {D} = (S_1, S_2, \\dots , S_{| \\mathcal {D} |})$ denote a document and $Y = (y_1, y_2, \\dots , y_{| \\mathcal {D} |})$ its sentence labels (methods for obtaining these labels are in Section \"Datasets\" ). As shown in Figure 2 , we first apply the hierarchical bidirectional transformer encoder Hibert to $\\mathcal {D}$ and yields the context dependent representations for all sentences $( \\mathbf {d}_1, \\mathbf {d}_2, \\dots , \\mathbf {d}_{|\\mathcal {D}|} )$ . 
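A minimal PyTorch sketch of the hierarchical encoding just described: a sentence-level Transformer yields one vector per sentence (the hidden state at the EOS token plus a sentence positional embedding), and a document-level Transformer turns these into the context-sensitive representations $( \mathbf {d}_1, \dots , \mathbf {d}_{|\mathcal {D}|} )$. For brevity the sketch uses learned positional embeddings (the paper uses sine-cosine embeddings), ignores padding and masking, and is not the authors' released code.

```python
# Illustrative sketch of the HIBERT encoder (simplified; assumptions noted in comments).
import torch
import torch.nn as nn

class HibertEncoderSketch(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6, max_pos=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Learned positions for brevity; shared by words and sentences as in the paper.
        self.pos_emb = nn.Embedding(max_pos, d_model)
        sent_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                                dim_feedforward=4 * d_model, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(sent_layer, num_layers)
        doc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                               dim_feedforward=4 * d_model, batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers)

    def forward(self, doc_tokens):
        # doc_tokens: (num_sents, sent_len) word ids of one document; the last token
        # of every sentence is EOS (padding is ignored in this sketch).
        num_sents, sent_len = doc_tokens.shape
        word_pos = torch.arange(sent_len, device=doc_tokens.device)
        x = self.word_emb(doc_tokens) + self.pos_emb(word_pos)       # (S, L, d)
        h = self.sent_encoder(x)                                     # (S, L, d)
        sent_repr = h[:, -1, :]                                      # hidden state at EOS
        sent_pos = torch.arange(num_sents, device=doc_tokens.device)
        sent_repr = sent_repr + self.pos_emb(sent_pos)               # add sentence position
        d = self.doc_encoder(sent_repr.unsqueeze(0)).squeeze(0)      # (S, d), context-aware
        return d
```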
The probability of the label of $S_i$ can be estimated using an additional linear projection and a softmax: ", "$$p( y_i | \\mathcal {D} ) = \\text{softmax}(\\mathbf {W}^S \\: \\mathbf {d}_i)$$ (Eq. 20) ", "where $\\mathbf {W}^S \\in \\mathbb {R}^{2 \\times d}$ . The summarization model can be trained by minimizing the negative log-likelihood of all sentence labels given their paired documents." ], [ "In this section we assess the performance of our model on the document summarization task. We first introduce the dataset we used for pre-training and the summarization task and give implementation details of our model. We also compare our model against multiple previous models." ], [ "We conducted our summarization experiments on the non-anonymous version CNN/Dailymail (CNNDM) dataset BIBREF36 , BIBREF9 , and the New York Times dataset BIBREF37 , BIBREF38 . For the CNNDM dataset, we preprocessed the dataset using the scripts from the authors of see:2017:acl. The resulting dataset contains 287,226 documents with summaries for training, 13,368 for validation and 11,490 for test. Following BIBREF38 , BIBREF37 , we created the NYT50 dataset by removing the documents whose summaries are shorter than 50 words from New York Times dataset. We used the same training/validation/test splits as in xu:2019:arxiv, which contain 137,778 documents for training, 17,222 for validation and 17,223 for test. To create sentence level labels for extractive summarization, we used a strategy similar to nallapati:2017:aaai. We label the subset of sentences in a document that maximizes Rouge BIBREF39 (against the human summary) as True and all other sentences as False.", "To unsupervisedly pre-train our document model Hibert (see Section \"Pre-training\" for details), we created the GIGA-CM dataset (totally 6,626,842 documents and 2,854 million words), which includes 6,339,616 documents sampled from the English Gigaword dataset and the training split of the CNNDM dataset. We used the validation set of CNNDM as the validation set of GIGA-CM as well. As in see:2017:acl, documents and summaries in CNNDM, NYT50 and GIGA-CM are all segmented and tokenized using Stanford CoreNLP toolkit BIBREF40 . To reduce the vocabulary size, we applied byte pair encoding (BPE; BIBREF41 ) to all of our datasets. To limit the memory consumption during training, we limit the length of each sentence to be 50 words (51th word and onwards are removed) and split documents with more than 30 sentences into smaller documents with each containing at most 30 sentences." ], [ "Our model is trained in three stages, which includes two pre-training stages and one finetuning stage. The first stage is the open-domain pre-training and in this stage we train Hibert with the pre-training objective (Section \"Pre-training\" ) on GIGA-CM dataset. In the second stage, we perform the in-domain pre-training on the CNNDM (or NYT50) dataset still with the same pre-training objective. In the final stage, we finetune Hibert in the summarization model (Section \"Extractive Summarization\" ) to predict extractive sentence labels on CNNDM (or NYT50).", "The sizes of the sentence and document level Transformers as well as the Transformer decoder in Hibert are the same. Let $L$ denote the number of layers in Transformer, $H$ the hidden size and $A$ the number of attention heads. As in BIBREF6 , BIBREF0 , the hidden size of the feedforward sublayer is $4H$ . 
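Returning to the sentence-label construction in the Datasets subsection above: the paper states that the subset of sentences maximizing ROUGE against the reference summary is labeled True. The exact search procedure is not given in this extract, so the greedy selection below is only one common approximation and should be read as an assumption; `rouge` stands for any scorer returning a single ROUGE value.

```python
# Greedy oracle labeling sketch (assumed procedure, hypothetical helper `rouge`).
def oracle_labels(sentences, reference, rouge, max_sents=3):
    selected, best = [], 0.0
    for _ in range(max_sents):
        gains = []
        for i, s in enumerate(sentences):
            if i in selected:
                continue
            candidate = " ".join(sentences[j] for j in sorted(selected + [i]))
            gains.append((rouge(candidate, reference), i))
        if not gains:
            break
        score, i = max(gains)
        if score <= best:          # stop when adding a sentence no longer helps
            break
        best, selected = score, selected + [i]
    return [i in selected for i in range(len(sentences))]  # True = extract this sentence
```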
We mainly trained two model sizes: $\\text{\\sc Hibert}_S$ ( $L=6$ , $H=512$ and $A=8$ ) and $\\text{\\sc Hibert}_M$ ( $L=6$ , $H$0 and $H$1 ). We trained both $H$2 and $H$3 on a single machine with 8 Nvidia Tesla V100 GPUs with a batch size of 256 documents. We optimized our models using Adam with learning rate of 1e-4, $H$4 , $H$5 , L2 norm of 0.01, learning rate warmup 10,000 steps and learning rate decay afterwards using the strategies in vaswani:2017:nips. The dropout rate in all layers are 0.1. In pre-training stages, we trained our models until validation perplexities do not decrease significantly (around 45 epochs on GIGA-CM dataset and 100 to 200 epochs on CNNDM and NYT50). Training $H$6 for one epoch on GIGA-CM dataset takes approximately 20 hours.", "Our models during fine-tuning stage can be trained on a single GPU. The hyper-parameters are almost identical to these in the pre-training stages except that the learning rate is 5e-5, the batch size is 32, the warmup steps are 4,000 and we train our models for 5 epochs. During inference, we rank sentences using $p( y_i | \\mathcal {D} ) $ (Equation ( 20 )) and choose the top $K$ sentences as summary, where $K$ is tuned on the validation set." ], [ "We evaluated the quality of summaries from different systems automatically using ROUGE BIBREF39 . We reported the full length F1 based ROUGE-1, ROUGE-2 and ROUGE-L on the CNNDM and NYT50 datasets. We compute ROUGE scores using the ROUGE-1.5.5.pl script.", "Additionally, we also evaluated the generated summaries by eliciting human judgments. Following BIBREF11 , BIBREF4 , we randomly sampled 20 documents from the CNNDM test set. Participants were presented with a document and a list of summaries produced by different systems. We asked subjects to rank these summaries (ties allowed) by taking informativeness (is the summary capture the important information from the document?) and fluency (is the summary grammatical?) into account. Each document is annotated by three different subjects." ], [ "Our main results on the CNNDM dataset are shown in Table 1 , with abstractive models in the top block and extractive models in the bottom block. Pointer+Coverage BIBREF9 , Abstract-ML+RL BIBREF10 and DCA BIBREF42 are all sequence to sequence learning based models with copy and coverage modeling, reinforcement learning and deep communicating agents extensions. SentRewrite BIBREF26 and InconsisLoss BIBREF25 all try to decompose the word by word summary generation into sentence selection from document and “sentence” level summarization (or compression). Bottom-Up BIBREF27 generates summaries by combines a word prediction model with the decoder attention model. The extractive models are usually based on hierarchical encoders (SummaRuNNer; BIBREF3 and NeuSum; BIBREF11 ). They have been extended with reinforcement learning (Refresh; BIBREF4 and BanditSum; BIBREF20 ), Maximal Marginal Relevance (NeuSum-MMR; BIBREF21 ), latent variable modeling (LatentSum; BIBREF5 ) and syntactic compression (JECS; BIBREF38 ). Lead3 is a baseline which simply selects the first three sentences. Our model $\\text{\\sc Hibert}_S$ (in-domain), which only use one pre-training stage on the in-domain CNNDM training set, outperforms all of them and differences between them are all significant with a 0.95 confidence interval (estimated with the ROUGE script). Note that pre-training $\\text{\\sc Hibert}_S$ (in-domain) is very fast and it only takes around 30 minutes for one epoch on the CNNDM training set. 
Our models with two pre-training stages ( $\\text{\\sc Hibert}_S$ ) or larger size ( $\\text{\\sc Hibert}_M$ ) perform even better and $\\text{\\sc Hibert}_M$ outperforms BERT by 0.5 ROUGE. We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training. Note the setting for HeriTransfomer is ( $L=4$ , $H=300$ and $A=4$ ) . We can see that the pre-training (details in Section \"Pre-training\" ) leads to a +1.25 ROUGE improvement. Another baseline is based on a pre-trained BERT BIBREF0 and finetuned on the CNNDM dataset. We used the $\\text{BERT}_{\\text{base}}$ model because our 16G RAM V100 GPU cannot fit $\\text{BERT}_{\\text{large}}$ for the summarization task even with batch size of 1. The positional embedding of BERT supports input length up to 512 words, we therefore split documents with more than 10 sentences into multiple blocks (each block with 10 sentences). We feed each block (the BOS and EOS tokens of each sentence are replaced with [CLS] and [SEP] tokens) into BERT and use the representation at [CLS] token to classify each sentence. Our model $\\text{\\sc Hibert}_S$1 outperforms BERT by 0.4 to 0.5 ROUGE despite with only half the number of model parameters ( $\\text{\\sc Hibert}_S$2 54.6M v.s. BERT 110M). Results on the NYT50 dataset show the similar trends (see Table 2 ). EXTRACTION is a extractive model based hierarchical LSTM and we use the numbers reported by xu:2019:arxiv. The improvement of $\\text{\\sc Hibert}_S$3 over the baseline without pre-training (HeriTransformer) becomes 2.0 ROUGE. $\\text{\\sc Hibert}_S$4 (in-domain), $\\text{\\sc Hibert}_S$5 (in-domain), $\\text{\\sc Hibert}_S$6 and $\\text{\\sc Hibert}_S$7 all outperform BERT significantly according to the ROUGE script.", "We also conducted human experiment with 20 randomly sampled documents from the CNNDM test set. We compared our model $\\text{\\sc Hibert}_M$ against Lead3, DCA, Latent, BERT and the human reference (Human). We asked the subjects to rank the outputs of these systems from best to worst. As shown in Table 4 , the output of $\\text{\\sc Hibert}_M$ is selected as the best in 30% of cases and we obtained lower mean rank than all systems except for Human. We also converted the rank numbers into ratings (rank $i$ to $7-i$ ) and applied student $t$ -test on the ratings. $\\text{\\sc Hibert}_M$ is significantly different from all systems in comparison ( $p < 0.05$ ), which indicates our model still lags behind Human, but is better than all other systems.", "As mentioned earlier, our pre-training includes two stages. The first stage is the open-domain pre-training stage on the GIGA-CM dataset and the following stage is the in-domain pre-training on the CNNDM (or NYT50) dataset. As shown in Table 3 , we pretrained $\\text{\\sc Hibert}_S$ using only open-domain stage (Open-Domain), only in-domain stage (In-Domain) or both stages (Open+In-Domain) and applied it to the CNNDM summarization task. Results on the validation set of CNNDM indicate the two-stage pre-training process is necessary." ], [ "The core part of a neural extractive summarization model is the hierarchical document encoder. We proposed a method to pre-train document level hierarchical bidirectional transformer encoders on unlabeled data. 
When we only pre-train hierarchical transformers on the training sets of summarization datasets with our proposed objective, application of the pre-trained hierarchical transformers to extractive summarization models already leads to wide improvement of summarization performance. Adding the large open-domain dataset to pre-training leads to even better performance. In the future, we plan to apply models to other tasks that also require hierarchical document encodings (e.g., document question answering). We are also interested in improving the architectures of hierarchical document encoders and designing other objectives to train hierarchical transformers." ] ] }
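To make the sentence-level masking of the Pre-training section above concrete: 15% of the sentences are selected for prediction, and a selected sentence is fully masked 80% of the time, kept unchanged 10% of the time, and replaced by a random sentence 10% of the time. The sketch below samples the 15% independently per sentence, which is an approximation, and `random_sentence` is a hypothetical helper.

```python
# Illustrative sketch of HIBERT's sentence-level masking (sentences are token lists).
import random

MASK = "[MASK]"

def mask_document(sentences, random_sentence, select_prob=0.15):
    masked_doc, targets = [], {}            # targets: sentence index -> original sentence
    for i, sent in enumerate(sentences):
        if random.random() >= select_prob:
            masked_doc.append(sent)         # not selected, left untouched
            continue
        targets[i] = sent                   # this sentence must be predicted
        r = random.random()
        if r < 0.8:
            masked_doc.append([MASK] * len(sent))   # mask every word (80%)
        elif r < 0.9:
            masked_doc.append(sent)                 # keep as-is, simulates test time (10%)
        else:
            masked_doc.append(random_sentence())    # random replacement as noise (10%)
    return masked_doc, targets
```

During pre-training, the decoder would then be trained to regenerate each sentence in `targets` word by word, conditioned on the document-level representation of the masked document.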
{ "question": [ "Is the baseline a non-heirarchical model like BERT?" ], "question_id": [ "fc8bc6a3c837a9d1c869b7ee90cf4e3c39bcd102" ], "nlp_background": [ "five" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "transformers" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "There were hierarchical and non-hierarchical baselines; BERT was one of those baselines", "evidence": [ "FLOAT SELECTED: Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L).", "Our main results on the CNNDM dataset are shown in Table 1 , with abstractive models in the top block and extractive models in the bottom block. Pointer+Coverage BIBREF9 , Abstract-ML+RL BIBREF10 and DCA BIBREF42 are all sequence to sequence learning based models with copy and coverage modeling, reinforcement learning and deep communicating agents extensions. SentRewrite BIBREF26 and InconsisLoss BIBREF25 all try to decompose the word by word summary generation into sentence selection from document and “sentence” level summarization (or compression). Bottom-Up BIBREF27 generates summaries by combines a word prediction model with the decoder attention model. The extractive models are usually based on hierarchical encoders (SummaRuNNer; BIBREF3 and NeuSum; BIBREF11 ). They have been extended with reinforcement learning (Refresh; BIBREF4 and BanditSum; BIBREF20 ), Maximal Marginal Relevance (NeuSum-MMR; BIBREF21 ), latent variable modeling (LatentSum; BIBREF5 ) and syntactic compression (JECS; BIBREF38 ). Lead3 is a baseline which simply selects the first three sentences. Our model $\\text{\\sc Hibert}_S$ (in-domain), which only use one pre-training stage on the in-domain CNNDM training set, outperforms all of them and differences between them are all significant with a 0.95 confidence interval (estimated with the ROUGE script). Note that pre-training $\\text{\\sc Hibert}_S$ (in-domain) is very fast and it only takes around 30 minutes for one epoch on the CNNDM training set. Our models with two pre-training stages ( $\\text{\\sc Hibert}_S$ ) or larger size ( $\\text{\\sc Hibert}_M$ ) perform even better and $\\text{\\sc Hibert}_M$ outperforms BERT by 0.5 ROUGE. We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training. Note the setting for HeriTransfomer is ( $L=4$ , $H=300$ and $A=4$ ) . We can see that the pre-training (details in Section \"Pre-training\" ) leads to a +1.25 ROUGE improvement. Another baseline is based on a pre-trained BERT BIBREF0 and finetuned on the CNNDM dataset. We used the $\\text{BERT}_{\\text{base}}$ model because our 16G RAM V100 GPU cannot fit $\\text{BERT}_{\\text{large}}$ for the summarization task even with batch size of 1. The positional embedding of BERT supports input length up to 512 words, we therefore split documents with more than 10 sentences into multiple blocks (each block with 10 sentences). We feed each block (the BOS and EOS tokens of each sentence are replaced with [CLS] and [SEP] tokens) into BERT and use the representation at [CLS] token to classify each sentence. Our model $\\text{\\sc Hibert}_S$1 outperforms BERT by 0.4 to 0.5 ROUGE despite with only half the number of model parameters ( $\\text{\\sc Hibert}_S$2 54.6M v.s. BERT 110M). 
Results on the NYT50 dataset show the similar trends (see Table 2 ). EXTRACTION is a extractive model based hierarchical LSTM and we use the numbers reported by xu:2019:arxiv. The improvement of $\\text{\\sc Hibert}_S$3 over the baseline without pre-training (HeriTransformer) becomes 2.0 ROUGE. $\\text{\\sc Hibert}_S$4 (in-domain), $\\text{\\sc Hibert}_S$5 (in-domain), $\\text{\\sc Hibert}_S$6 and $\\text{\\sc Hibert}_S$7 all outperform BERT significantly according to the ROUGE script." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L).", "We also implemented two baselines. One is the hierarchical transformer summarization model (HeriTransfomer; described in \"Extractive Summarization\" ) without pre-training." ] } ], "annotation_id": [ "07f9afd79ec1426e67b10f5a598bbe3103f714cf" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
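The BERT baseline quoted in the evidence above scores sentences block by block: each sentence in a block of up to ten is wrapped as [CLS] ... [SEP], the block is run through BERT, and the hidden state at each [CLS] position is classified. The snippet below is a rough reconstruction of that recipe with the Hugging Face transformers library; it is not the authors' code, and everything beyond the ten-sentence block size and the [CLS]-based scoring described above is an assumption.

import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
scorer = torch.nn.Linear(bert.config.hidden_size, 1)  # per-sentence extraction score

def score_block(sentences):
    # Build "[CLS] s1 [SEP] [CLS] s2 [SEP] ..." for up to 10 sentences.
    ids = []
    for s in sentences:
        ids.append(tokenizer.cls_token_id)
        ids.extend(tokenizer.encode(s, add_special_tokens=False))
        ids.append(tokenizer.sep_token_id)
    input_ids = torch.tensor([ids[:512]])              # BERT's positional limit
    with torch.no_grad():
        hidden = bert(input_ids).last_hidden_state     # (1, seq_len, hidden_size)
    cls_pos = (input_ids[0] == tokenizer.cls_token_id).nonzero(as_tuple=True)[0]
    return scorer(hidden[0, cls_pos]).squeeze(-1)      # one logit per sentence in the block

document = ["The first sentence of the article.", "A second sentence.", "A third one."]
for start in range(0, len(document), 10):              # 10-sentence blocks
    print(score_block(document[start:start + 10]))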
{ "caption": [ "Figure 1: The architecture of HIBERT during training. senti is a sentence in the document above, which has four sentences in total. sent3 is masked during encoding and the decoder predicts the original sent3.", "Figure 2: The architecture of our extractive summarization model. The sentence and document level transformers can be pretrained.", "Table 1: Results of various models on the CNNDM test set using full-length F1 ROUGE-1 (R-1), ROUGE-2 (R2), and ROUGE-L (R-L).", "Table 4: Human evaluation: proportions of rankings and mean ranks (MeanR; lower is better) of various models.", "Table 2: Results of various models on the NYT50 test set using full-length F1 ROUGE. HIBERTS (indomain) and HIBERTM (in-domain) only uses one pretraining stage on the NYT50 training set.", "Table 3: Results of summarization model (HIBERTS setting) with different pre-training strategies on the CNNDM validation set using full-length F1 ROUGE." ], "file": [ "3-Figure1-1.png", "5-Figure2-1.png", "7-Table1-1.png", "8-Table4-1.png", "8-Table2-1.png", "8-Table3-1.png" ] }
2003.04032
Shallow Discourse Annotation for Chinese TED Talks
Text corpora annotated with language-related properties are an important resource for the development of Language Technology. The current work contributes a new resource for Chinese Language Technology and for Chinese-English translation, in the form of a set of TED talks (some originally given in English, some in Chinese) that have been annotated with discourse relations in the style of the Penn Discourse TreeBank, adapted to properties of Chinese text that are not present in English. The resource is currently unique in annotating discourse-level properties of planned spoken monologues rather than of written text. An inter-annotator agreement study demonstrates that the annotation scheme is able to achieve highly reliable results.
{ "section_name": [ "Introduction", "Related work", "PDTB and our Annotation Scheme", "PDTB and our Annotation Scheme ::: Arguments", "PDTB and our Annotation Scheme ::: Relations", "PDTB and our Annotation Scheme ::: Senses", "Annotation Procedure", "Annotation Procedure ::: Annotator training", "Annotation Procedure ::: Corpus building", "Annotation Procedure ::: Agreement study", "Results", "Conclusions and Future Work", "Acknowledgement" ], "paragraphs": [ [ "", "Researchers have recognized that performance improvements in natural language processing (NLP) tasks such as summarization BIBREF0, question answering BIBREF1, and machine translation BIBREF2 can come from recognizing discourse-level properties of text. These include properties such as the how new entities are introduced into the text, how entities are subsequently referenced (e.g., coreference chains), and how clauses and sentences relate to one another. Corpora in which such properties have been manually annotated by experts can be used as training data for such tasks, or seed data for creating additional \"silver annotated” data. Penn Discourse Treebank (PDTB), a lexically grounded method for annotation, is a shallow approach to discourse structure which can be adapted to different genres. Annotating discourse relations both within and across sentences, it aims to have wide application in the field of natural language processing. PDTB can effectively help extract discourse semantic features, thus serving as a useful substrate for the development and evaluation of neural models in many downstream NLP applications.", "Few Chinese corpora are both annotated for discourse properties and publicly available. The available annotated texts are primarily newspaper articles. The work described here annotates another type of text – the planned monologues found in TED talks, following the annotation style used in the Penn Discourse TreeBank, but adapted to take account of properties of Chinese described in Section 3.", "TED talks (TED is short for technology, entertainment, design), as examples of planned monologues delivered to a live audience BIBREF3, are scrupulously translated to various languages. Although TED talks have been annotated for discourse relations in several languages BIBREF4, this is the first attempt to annotate TED talks in Chinese (either translated into Chinese, or presented in Chinese), providing data on features of Chinese spoken discourse. Our annotation by and large follows the annotation scheme in the PDTB-3, adapted to features of Chinese spoken discourse described below.", "The rest of the paper is organized as follows: in Section 2, we review the related existing discourse annotation work. In Section 3, we briefly introduce PDTB-3 BIBREF5 and our adapted annotation scheme by examples. In Section 4, we elaborate our annotation process and the results of our inteannotator-agreement study. Finally, in Section 5, we display the results of our annotation and preliminarily analyze corpus statistics, which we compare to the relation distribution of the CUHK Discourse TreeBank for Chinese. (CUHK-DTBC)BIBREF6." ], [ "Following the release of the Penn Discourse Treebank (PDTB-2) in 2008 BIBREF7, several remarkable Chinese discourse corpora have since adapted the PDTB framework BIBREF8, including the Chinese Discourse Treebank BIBREF9, HIT Chinese Discourse Treebank (HIT-CDTB) zhou2014cuhk, and the Discourse Treebank for Chinese (DTBC) BIBREF6. 
Specifically, Xue proposed the Chinese Discourse Treebank (CDTB) Project BIBREF10. From their annotation work, they discussed the matters such as features of Chinese discourse connectives, definition and scope of arguments, and senses disambiguation, and they argued that determining the argument scope is the most challenging part of the annotation. To further promote their research, zhou2012pdtb presented a PDTB-style discourse corpus for Chinese. They also discussed the key characteristics of Chinese text which differs from English, e.g., the parallel connectives, comma-delimited intra-sentential implicit relations etc. Their data set contains 98 documents from the Chinese Treebank BIBREF10. In 2015, Zhou and Xue expanded their corpus to 164 documents, with more than 5000 relations being annotated. huang-chen-2011-chinese constructed a Chinese discourse corpus with 81 articles. They adopted the top-level senses from PDTB sense hierarchy and focused on the annotation of inter-sentential discourse relations. zhang2014chinese analyzed the differences between Chinese and English, and then presented a new Chinese discourse relation hierarchy based on the PDTB system, in which the discourse relations are divided into 6 types: temporal, causal, condition, comparison, expansion and conjunction. And they constructed a Chinese Discourse Relation corpus called HIT-CDTB based on this hierarchy. Then, zhou2014cuhk presented the first open discourse treebank for Chinese, the CUHK Discourse Treebank for Chinese. They adapted the annotation scheme of Penn Discourse Treebank 2 (PDTB-2) to Chinese language and made adjustments to 3 aspects according to the previous study of Chinese linguistics. However, they just reannotated the documents of the Chinese Treebank and did not annotate inter-sentence level discourse relations.", "It is worth noting that, all these corpora display a similar unbalanced distribution that is likely to be associated with them all being limited to text from the same NEWS genre. In particular, these two senses (Expansion and Conjunction) represent 80 % of the relations annotated in the CDTB.", "In addition, although annotating spoken TED talks has been done on other several languages before BIBREF4, to our knowledge, there is no recent annotation work for Chinese spoken discourses, or particularly for Chinese Ted talks. However, there is some evidence that noticeable differences in the use of discourse connectives and discourse relations can be found between written and spoken discourses BIBREF11. Here, by using the new PDTB-3 sense hierarchy and annotator, which has not been used for Chinese annotation before, we annotated Chinese Ted talks to help others be aware of the differences between the Chinese discourse structure of written and spoken texts and will make our corpus publicly available to benefit the discourse-level NLP researches for spoken discourses." ], [ "The annotation scheme we adopted in this work is based on the framework of PDTB, incorporating the most recent PDTB (PDTB-3) relational taxonomy and sense hierarchy BIBREF5, shown in Table 1. PDTB follows a lexically grounded approach to the representation of discourse relations BIBREF12. Discourse relations are taken to hold between two abstract object arguments, named Arg1 and Arg2 using syntactic conventions, and are triggered either by explicit connectives or, otherwise, by adjacency between clauses and sentences. 
As we can see from Table 1, the PDTB-3 sense hierarchy has 4 top-level senses (Expansion, Temporal, Contingency, Contrast) and second- and third-level senses for some cases. With obvious differences ranging from the conventions used in annotation, to differences in senses hierarchy, PDTB-3 gives rigorous attention to achieving as much consistency as possible while annotating discourse relations.", "Previously, all Chinese annotation work using PDTB style followed the settings of PDTB-2. Some researchers tried to adapt it in lines of the Chinese characteristics. For example, zhou2012pdtb annotated the parallel connectives continuously rather than discontinuously due to the greater use of parallel connectives in Chinese and a reduced use of explicit connectives in general. zhou2014cuhk added some additional senses into the hierarchy. However, PDTB-3, as a new and enriched version, not only has paid greater attention to intra-sentential senses, but also has incorporated some of those additional senses. Therefore, we just made several modifications including removing, adding, or disambiguating for the practical use of PDTB-3 into our Chinese annotation.", "In practice, using the PDTB annotator tool, we annotated an explicit connective, identified its two arguments in which the connective occurs, and then labeled the sense. For implicit relations, when we inferred the type of relation between two arguments, we tried to insert a connective for this relation, and also the inserted connective is not so strictly restricted, extending to expressions that can convey the sense of the arguments. If a connective conveys more than one sense or more than one relation can be inferred, multiple senses would be assigned to the token. Our adaptations towards PDTB-3 will be introduced from the perspectives of arguments, relations and senses as follows." ], [ "The argument-labelling conventions used in the PDTB-2 had to be modified to deal with the wider variety of discourse relations that needed to be annotated consistently within sentences in the PDTB-3. In particular, in labelling intra-sentential discourse relations, a distinction was made between relations whose arguments were in coordinating syntactic structures and ones whose arguments were in subordinating syntactic structures. For coordinating structures, arguments were labelled by position (Arg1 first, then Arg2), while for subordinating structures, the argument in subordinate position was labelled Arg2, and the other, Arg1, independent of position.", "For discourse in Chinese, this can introduce an unwanted ambiguity. Example 1 is a typical example for illustrate this phenomenon. In the examples throughout the paper, explicit connectives are underlined, while implicit Discourse Connectives and the lexicalizing expression for Alternative Lexicalizations are shown in parentheses and square brackets respectively. The position of the arguments is indicated by the attached composite labels to the right square brackets, and the relation lables and sense lables can be seen in the parentheses at the end of arguments. 
When the arguments, relations or senses are ambiguous, there may be no corresponding labels shown in the examples.", "UTF8gbsn", "因为 你 让 我 生气, 所以,我 要让", "Because you make me angry, so I want", "你 更难过。(Explicit, Cause.Result)", "you to be sadder.", "“You made me angry, so I return it double back.”", "While“because”and“so”are rarely found together as connectives in a sentence in English, it is not uncommon to find them used concurrently as a paired connective in Chinese. Therefore, due to this difference, the annotators tend to have no idea about which clause is subordinate. Therefore, if we regard the first clause as subordinating structure and “因 为”(because)as connective, then the sense would be Contingency.Cause.Reason. By contrast, the sense would be Contingency.Cause.Result, when the second clause is regarded as Arg2. To get rid of this kind of ambiguity, we just take the first as Arg1 and the second Arg2 regardless of the fact that the parallel connectives are surbodinating or coordinating." ], [ "There are two new types of relation in PDTB-3: AltlexC and Hypophora. Hypophora is an explicitly marked question-response pairs, first used in annotating the TED- MDB BIBREF4. In Hypophora relations, Arg1 expresses a question and Arg2 offers an answer, with no explicit or implicit connective being annotated (Example 2). Because of the nature of TED talks, many relations in both the TED-MDB and in our Chinese TED talks are examples of “Hypophora”. However, not all discourse relations whose first argument is a question are Hypophora. Example 3, instead of seeking information and giving answer, is just a rhetorical question expressing negation by imposing a dramatic effect.", "[我到底 要 讲 什么Arg1]?", "I on earth am going to talk about what ?", "[最后 我决定 要 讲 教育Arg2]。(Hypophora)", "Finally, I decided to talk about education .", "“what am I gonna say? Finally, I decided to talk about education.”", "他说 : “ 我 是 三 天 一 小 哭 、 五 天", "He said, \" I am three days a little cry, five days", "一 大 哭 。 \" 这样 你 有 比较 健康 吗 ?", "a lot cry.\" In this way, you are more healthier?", "都 是 悲伤 , 并 不 是 每 一 个 人 ,每 一 次", "All are sadness, not everyone, every time", "感受 到 悲伤 的 时候 ,都 一定 会 流泪 、 甚至 大哭 。", "feel sad 's time, would shed tears、even cry.", "“He said, \"Three times I cry a little, and five times I cry a lot.\" Is that healthier? Everyone gets sad, but that's not to say that whenever someone feels sad, they necessarily will cry.”", "In addition, we found a new issue when identifying Hypophora, which is shown in Example 4. In this example, we have a series of questions, rather than a series of assertions or a question-response pair. We attempted to capture the rhetorical links by taking advantage of our current inventory of discourse relations. Here, two implicit relations were annotated for this example, and the senses are Arg2-as-detail and Result+SpeechAct respectively. Therefore, when there are subsequent texts related to a question or a sequence of questions, we would not just annotated them as Hypophora but had to do such analysis as what we did for the examples shown.", "[情绪 , 它 到底 是 什么Rel1-Arg1 ]? (具体来说)", "Emotion, it on earth is what? (Specially)", "[它 是 好还是 不 好Rel1-Arg2,Rel2-Arg1]?(Implicit,Arg2-as-detail)", "It is good or bad?", "(所以)[你 会 想要 拥有 它 吗Rel2-Arg2]? (Implicit,Result+SpeechAct)", "(So) You want to have it ?", "“What is it exactly? Is it good or bad? Do you want to have them?”", "Besides, it is widely accepted that the ellipsis of subject or object are frequently seen in Chinese. 
Then for EntRel, if facing this situation where one of the entities in Arg1 or Arg2 is omitted, we still need to annotate this as EntRel (Example 5). In this following example, we can see in Arg2, the pronoun which means “that”is omitted, but in fact which refers to the phenomenon mentioned in Arg1, so here there is still an EntRel relation between this pair of arguments.", "[我们会以讽刺的口吻来谈论, 并且会", "We in ironic terms talk about, and", "加上引号 : “进步”Arg1 ]", "add quotes: “Progress”.", "[我想是有原因的, 我们也知道 是 什么", "I think there are reasons, we also know are what", "原因Arg2]。(EntRel)", "reasons.", "“We talk about it in ironic terms with little quotes around it:“Progress.”Okay, there are reasons for that, and I think we know what those reasons are.”" ], [ "The great improvement in the sense hierarchy in PDTB-3 enables us to capture more senses with additional types and assign the senses more clearly. For example, the senses under the category of Expansion such as level of detail, manner, disjunction and similarity are indispensable to our annotation. Therefore, we nearly adopted the sense hierarchy in PDTB-3, just with few adaptations. On the one hand, we removed the third level sense “Negative condition+SpeechAct”, since it was not used to label anything in the corpus. On the other hand, we added the Level-2 sense “Expansion.Progression”. This type of sense applies when Arg1 and Arg2 are coordinating structure with different emphasis. The first argument is annotated as Arg1 and the second as Arg2. This sense is usually conveyed by such typical connectives as “不 但 (not only)... 而 且 (but also)...”, “甚 至 (even)... 何 况 (let alone)...”,“... 更 (even more)...”(Example 6).", "[我 去 了 聋人 俱乐部 ,观看 了 聋人 的", "I went to deaf clubs, saw the deaf person’s", "表演 Arg1]。[我甚至 去 了 田纳西州 纳什维尔的", "performances. I even went to the Nashville ’s", "“ 美国 聋人 小姐 ” 选秀赛Arg2]。(Explicit, Progression.Arg2-as-progr)", "“the Miss Deaf” America contest.", "“ I went to deaf clubs. I saw performances of deaf theater and of deaf poetry. I even went to the Miss Deaf America contest in Nashville.”", "Another issue about sense is the inconsistency when we annotated the implicit relations. zhou2012pdtb did not insert connective for implicit relations, but we did this for further researches with regard to implicit relations. However, we found that in some cases where different connectives can be inserted into the same arguments to express the same relation, the annotators found themselves in a dilemma. We can see that Example 7 and Example 8 respectively insert “so” and “because” into the arguments between which there is a causal relation, but the senses in these two examples would be Cause.Result and Cause.Reason. 
The scheme we adopted for this is that we only take the connectives that we would insert into account, and the position and sense relations of arguments would depend on the inserted connectives.", "[“克服 逆境 ” 这一说法 对我", "“Overcome the adversity” this phrase for me", "根本 不 成立 Arg1],(所以)[别人 让 我", "completely not justified, (so) others asked me", "就 这一话题 说 几 句 的时候, 我很不自在Arg2]。(Implicit, Cause.Result)", "to this topic talk about some, I felt uneasy.", "(因为)[“克服 逆境 ” 这一说法", "(Because) “overcome the adversity” this phrase", "对 我来说根本 不 成立Arg2], [ 别人", "for me completely not justified, (so) others", "让 我就 这一话题 说几句的时候,我很", "asked me to this topic, talk about some, I felt", "不自在Arg1]。(Implicit, Cause.Reason)", "uneasy.", "““overcome the adversity” this phrase never sat right with me, and I always felt uneasy trying to answer people's questions about it.”" ], [ "In this section, we describe our annotation process in creating the Chinese TED discourse treebank. To ensure annotation quality, the whole annotation process has three stages: annotators training, annotation, post-annotation. The training process intends to improve the annotators’ annotation ability, while after the formal annotation, the annotated work was carefully checked by the supervisor, and the possible errors and inconsistencies were dealt with through discussions and further study." ], [ "The annotator team consists of a professor as the supervisor, an experienced annotator and a researcher of PDTB as counselors, two master degree candidates as annotators. Both of the annotators have a certain theoretical foundation of linguistics. To guarantee annotation quality, the annotators were trained through the following steps: firstly, the annotators read the PDTB-3 annotation manual, the PDTB-2 annotation manual and also other related papers carefully; next, the annotators tried to independently annotate same texts, finding out their own uncertainties or problems respectively and discussing these issues together; then, the annotators were asked to create sample annotations on TED talks transcripts for each sense from the top level to the third. They discussed the annotations with the researchers of the team and tried to settle disputes. When sample annotations are created, this part of process is completed; based on the manuals, previous annotation work and also the annotators’ own pre-annotation work, they made a Chinese tutorial on PDTB guidelines, in which major difficulties and perplexities, such as the position and the span of the arguments, the insert of connectives, and the distinction of different categories of relations and senses, are explained clearly in detail by typical samples. This Chinese tutorial is beneficial for those who want to carry out similar Chinese annotation, so we made this useful tutorial available to those who want to carry out similar annotation; finally, to guarantee annotation consistency, the annotators were required to repeat their annotation-discussion process until their annotation results show the Kappa value > 0.8 for each of the indicators for agreement." ], [ "At present, our corpus has been released publiclyFOOTREF19. Our corpus consists of two parts with equal number of texts: (1) 8 English TED talks translated into Chinese, just like the talks in the TED-MDB, all of which were originally presented in English and translated into other languages (including German, Lithuanian, Portuguese,Polish, Russian and Turkish) BIBREF4. 
(2) 8 Chinese TED talks originally presented in Taipei and translated into English. We got the texts by means of extracting Chinese and English subtitles from TED talks videos . Firstly, we just annotated the talks given in English and translated in Chinese. But after considering the possible divergencies between translated texts and the original texts, we did our annotation for the Taipei TED talks, which were delivered in Chinese. The parallel English texts are also being annotated for discourse relations, but they are not ready for carrying out a systematic comparison between them. At the current stage, we annotated 3212 relations for the TED talks transcripts with 55307 words,and the length of each talk (in words) and the number of annotated", "relations in each talks can be found from Table 2. These TED talks we annotated were prudently selected from dozens of candidate texts. The quality of texts which is principally embodied in content, logic, punctuation and the translation are the major concerns for us. Moreover, when selecting the texts from the Taipei talks, we ruled out those texts which are heavy in dialogues. Some speakers try to interact with the audience, asking the questions, and then commenting on how they have replied. However, what we were annotating was not dialogues. In spite of critically picking over the texts, we still spent considerable time on dealing with them before annotation such as inserting punctuation and correcting the translation. Moreover, before annotation, we did word segmentation by using Stanford Segmenter and corrected improper segmentation.", "While annotating, we assigned the vast majority of the relations a single sense and a small proportion of relations multiple senses. Unlike previous similar Chinese corpora which primarily or just annotated the relations between sentences, we annotated not only discourse relations between sentences but intra-sentential discourse relations as well. To ensure building a high-quality corpus, the annotators regularly discussed their difficulties and confusions with the researcher and the experienced annotator in the whole process of annotation. After discussion, the annotators reached agreement or retained the differences for few ambiguities." ], [ "We measured intra-annotator agreement between two annotators in three aspects: relations, senses, arguments. To be specific, the annotators’ consistency in annotating the type of a specific relation or sense and the position and scope of arguments are measured. To assess the consistency of annotations and also eliminate coincidental annotations, we used agreement rates, which is calculated by dividing the number of senses under each category where the annotators annotate consistently by the total number of each kind of sense. And considering the potential impact of unbalanced distribution of senses, we also used the Kappa value. And the final agreement study was carried out for the first 300 relations in our corpus. We obtained high agreement results and Kappa value for the discourse relation type and top-level senses ($\\ge {0.9} $ ). However, what we did was more than this, and we also achieved great results on the second-level and third-level senses for the sake of our self-demand for high-quality, finally achieving agreement of 0.85 and Kappa value of 0.83 for these two deeper levels of senses.", "Table 3 also shows that agreement on argument order is almost 1.0 (kappa = 0.99). 
This means that the guidelines were sufficiently clear that the annotators rarely had difficulty in deciding the location of Arg1 and Arg2 when the senses are determined. Concerning the scope of arguments, which is seen as the most challenging part in the annotation work BIBREF10, our agreement and Kappa value on this are 0.88 and 0.86 respectively, while the agreement of the scope of arguments depends on whether the scopes of two arguments the anotators annotated are completely the same. Under such strict requirement, our consistency in this respect is still significantly higher than that of other annotation work done before, for we strictly obeyed the rules of “minimality principle” mentioned in the PDTB-3 annotation manual and got a clearer perspective of supplementary information. Therefore, the annotators are better at excluding the information that do not fall within the scope of the discourse relation.", "It is useful to determine where the annotators disagreed most with each other. The three senses where most disagreement occurred are shown in Table 4. The disagreements were primarily in labelling implicit relations. The highest level of disagreement occurred with Expansion.Conjunction and Expansion.Detail, accounting for 12.5 % among all the inconsistent senses. It is because, more often than not, the annotators failed to judge whether the two arguments make the same contribution with respect to that situation or both arguments describing the same has different level of details. The second highest level of disagreement is reflected in Conjunction and Asynchronous, accounting for 9.3 %. Besides, Contrast and Concession are two similar senses, which are usually signaled by the same connectives like “但是”, “而”, “不过”, and all these words can be translated into“but”in English. Hence, the annotators sometimes tend to be inconsistent when distinguishing them." ], [ "In regard to discourse relations, there are 3212 relations, of which 1237 are explicit relations (39%) and 1174 are implicit relation (37%) (Figure 1). The remaining 801 relations include Hypophora, AltLex, EntRel, and NoRel. Among these 4 kinds of relations, what is worth mentioning is AltLex(Alternative Lexicalizations ),which only constitutes 3% but is of tremendous significance, for we are able to discover inter- or intra-sentential relations when there is no explicit expressions but AltLex expressions conveying the relations. but AltLex expressions(eg, 这导致了(this cause), 一个例子是(one example is... ), 原 因 是 (the reason is), etc.). Originally in English, AltLex is supposed to contain both an anaphoric or deictic reference to an actual argument and an indication of the type of sense BIBREF13. While for Chinese, the instances of Altlex do not differ significantly from those annotated in English. To prove this, two examples are given as below (Example 9 and Example 10). From our annotation, we realized that Altlex deserves more attention, for which can effectively help to recogonize types of discourse relations automatically.", "[“国内 许多被截肢者, 无法使用", "in this country, many of the amputees, cannot use", "他们的假肢Arg1],[这其中的原因是] [他们由于", "their prostheses, the reason was their", "假肢接受腔 无法 与残肢 适配 而", "prosthetic sockets cannot their leg fit well so that", "感到疼痛Arg2].(AltLex, Cause.Reason)", "felt painful.]", "“Many of the amputees in the country would not use their prostheses. 
The reason, I would come to find out, was that their prosthetic sockets were painful because they did not fit well.”", "三年级的时候考进秀朗小学的游泳班,", "in third grade, got in the swimming class at Xiu Lang", "[ 这个班每天的 游泳", "elementary school, this class everyday’s swimming", "训练量高达 3000 米 Arg1], 我发现 [这样的训练量", "volumm reach 3000 meters, I realized the training load", "使][我 无法同时兼顾两种乐器 Arg2]。(AltLex,", "Cause.Result)", "make me cannot learn the two instruments at the same time", "“I got in the swimming class at Xiu Lang elementary school when I was in third grade. We had to swim up to 3000 meters every day. I realized the training load was too much for me to learn the two instruments at the same time.”", "Obviously, there is approximately the same number of explicit and implicit relations in the corpus. This may indicate that explicit connectives and relations are more likely to present in Chinese spoken texts rather than Chinese written texts.", "The figures shown by Table 4 illustrate the distributions of class level senses. We make a comparison for the class level senses between our corpus and the CUHK Discourse Treebank for Chinese (CUHK-DTBC). CUHK Discourse Treebank for Chinese is a corpus annotating news reports. Therefore, our comparison with it may shed light on the differences of discourse structures in different genres. According to the statistics of CUHK-DTBC for 400 documents and our corpus, while more than half of the senses is Expansion in CUHK-DTBC, it just represents 37.5% in our corpus. In addition, it is highlighted that the ranks of the class level senses are the same in both corpora, although all of the other three senses in our corpus are more than those in CUHK-DTBC.", "The most frequent second-level senses in our corpus can be seen from Table 5. We can find that 20% of the senses is Cause (including Reason and Result), followed by Conjunction and Concession, each with 13%. The top 10 most frequent senses take up 86% of all senses annotated, which reveals that other senses also can validate their existence in our corpus. Therefore, these findings show that, compared with other corpora about Chinese shallow relations where the majority of the documents are news report, our corpus evidently show a more balanced and varied distribution from perspectives of both relations and senses, which in large measure proves the differences in discourse relations between Chinese written texts and Chinese spoken texts." ], [ "In this paper, we describe our scheme and process in annotating shallow discourse relations using PDTB-style. In view of the differences between English and Chinese, we made adaptations for the PDTB-3 scheme such as removing AltLexC and adding Progression into our sense hierarchy. To ensure the annotation quality, we formulated detailed annotation criteria and quality assurance strategies. After serious training, we annotated 3212 discourse relations, and we achieved a satisfactory consistency of labelling with a Kappa value of greater than 0.85 for most of the indicators. Finally, we display our annotation results in which the distribution of discourse relations and senses differ from that in other corpora which annotate news report or newspaper texts. 
Our corpus contains more Contingency, Temporal and Comparison relations, instead of being governed by Expansion.", "In future work, we plan to 1) expand our corpus by annotating more TED talks or other spoken texts; 2) build a richer and more diverse set of connectives and AltLex expressions; 3) use the corpus to develop a shallow discourse parser for Chinese spoken discourse; and 4) explore automatic approaches to implicit discourse relation recognition." ], [ "The present research was supported by the National Natural Science Foundation of China (Grant No. 61861130364) and the Royal Society (London) (NAF\R1\180122). We would like to thank the anonymous reviewers for their insightful comments." ] ] }
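The agreement study reported above combines per-category agreement rates with Cohen's kappa to discount chance agreement (e.g., agreement of 0.85 and kappa of 0.83 on the deeper sense levels). For readers unfamiliar with these measures, the short script below shows one way such figures can be computed from two annotators' label sequences; the labels are invented for illustration, and this is not the script behind the reported numbers.

from collections import Counter

def overall_agreement(labels_a, labels_b):
    # Fraction of items on which the two annotators chose the same label.
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

def per_category_agreement(labels_a, labels_b, category):
    # Among items annotator A assigned to `category`, the share annotator B agreed on.
    indices = [i for i, a in enumerate(labels_a) if a == category]
    return sum(labels_b[i] == category for i in indices) / len(indices)

def cohen_kappa(labels_a, labels_b):
    # Observed agreement corrected for the agreement expected by chance.
    n = len(labels_a)
    p_o = overall_agreement(labels_a, labels_b)
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[l] * freq_b[l] for l in set(labels_a) | set(labels_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy label sequences over top-level senses (invented, for illustration only).
ann1 = ["Expansion", "Contingency", "Comparison", "Expansion", "Temporal", "Expansion"]
ann2 = ["Expansion", "Contingency", "Expansion", "Expansion", "Temporal", "Expansion"]
print(overall_agreement(ann1, ann2), per_category_agreement(ann1, ann2, "Expansion"))
print(cohen_kappa(ann1, ann2))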
{ "question": [ "Do they build a model to recognize discourse relations on their dataset?", "Which inter-annotator metric do they use?", "How high is the inter-annotator agreement?", "How are resources adapted to properties of Chinese text?" ], "question_id": [ "58e65741184c81c9e7fe0ca15832df2d496beb6f", "269b05b74d5215b09c16e95a91ae50caedd9e2aa", "0d7f514f04150468b2d1de9174c12c28e52c5511", "4d223225dbf84a80e2235448a4d7ba67bfb12490" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "4e228fbd610091de8b9bb63d773964108c5c975d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "agreement rates", "Kappa value" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We measured intra-annotator agreement between two annotators in three aspects: relations, senses, arguments. To be specific, the annotators’ consistency in annotating the type of a specific relation or sense and the position and scope of arguments are measured. To assess the consistency of annotations and also eliminate coincidental annotations, we used agreement rates, which is calculated by dividing the number of senses under each category where the annotators annotate consistently by the total number of each kind of sense. And considering the potential impact of unbalanced distribution of senses, we also used the Kappa value. And the final agreement study was carried out for the first 300 relations in our corpus. We obtained high agreement results and Kappa value for the discourse relation type and top-level senses ($\\ge {0.9} $ ). However, what we did was more than this, and we also achieved great results on the second-level and third-level senses for the sake of our self-demand for high-quality, finally achieving agreement of 0.85 and Kappa value of 0.83 for these two deeper levels of senses." ], "highlighted_evidence": [ "To assess the consistency of annotations and also eliminate coincidental annotations, we used agreement rates, which is calculated by dividing the number of senses under each category where the annotators annotate consistently by the total number of each kind of sense. And considering the potential impact of unbalanced distribution of senses, we also used the Kappa value." ] } ], "annotation_id": [ "c3992f39157c3c457362d37be7c4ecaf6607cfa5" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "agreement of 0.85 and Kappa value of 0.83" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We measured intra-annotator agreement between two annotators in three aspects: relations, senses, arguments. To be specific, the annotators’ consistency in annotating the type of a specific relation or sense and the position and scope of arguments are measured. 
To assess the consistency of annotations and also eliminate coincidental annotations, we used agreement rates, which is calculated by dividing the number of senses under each category where the annotators annotate consistently by the total number of each kind of sense. And considering the potential impact of unbalanced distribution of senses, we also used the Kappa value. And the final agreement study was carried out for the first 300 relations in our corpus. We obtained high agreement results and Kappa value for the discourse relation type and top-level senses ($\\ge {0.9} $ ). However, what we did was more than this, and we also achieved great results on the second-level and third-level senses for the sake of our self-demand for high-quality, finally achieving agreement of 0.85 and Kappa value of 0.83 for these two deeper levels of senses." ], "highlighted_evidence": [ "However, what we did was more than this, and we also achieved great results on the second-level and third-level senses for the sake of our self-demand for high-quality, finally achieving agreement of 0.85 and Kappa value of 0.83 for these two deeper levels of senses." ] } ], "annotation_id": [ "dd46e8a80d19f23938b9efeaa4bfebe04abc9f32" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "removing AltLexC and adding Progression into our sense hierarchy" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this paper, we describe our scheme and process in annotating shallow discourse relations using PDTB-style. In view of the differences between English and Chinese, we made adaptations for the PDTB-3 scheme such as removing AltLexC and adding Progression into our sense hierarchy. To ensure the annotation quality, we formulated detailed annotation criteria and quality assurance strategies. After serious training, we annotated 3212 discourse relations, and we achieved a satisfactory consistency of labelling with a Kappa value of greater than 0.85 for most of the indicators. Finally, we display our annotation results in which the distribution of discourse relations and senses differ from that in other corpora which annotate news report or newspaper texts. Our corpus contains more Contingency, Temporal and Comparison relations instead of being governed by Expansion." ], "highlighted_evidence": [ "In view of the differences between English and Chinese, we made adaptations for the PDTB-3 scheme such as removing AltLexC and adding Progression into our sense hierarchy." ] } ], "annotation_id": [ "081c24f5f832cddcb252d414b622c63842c3e6da" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: PDTB-3 Sense Hierarchy (Webber et al., 2019)", "Table 2: The length and the number of relations of each text", "Table 3: Agreement study", "Figure 1: Relation distribution", "Table 4: Disagreements between annotators: Percentage of cases", "Table 5: Distribution of class level senses in our corpus and 400 documents of CUHK-DTBC", "Table 6: The most frequent Level-2 senses in our corpus" ], "file": [ "3-Table1-1.png", "5-Table2-1.png", "5-Table3-1.png", "6-Figure1-1.png", "7-Table4-1.png", "7-Table5-1.png", "7-Table6-1.png" ] }
2004.03034
The Role of Pragmatic and Discourse Context in Determining Argument Impact
Research in the social sciences and psychology has shown that the persuasiveness of an argument depends not only on the language employed, but also on attributes of the source/communicator, the audience, and the appropriateness and strength of the argument's claims given the pragmatic and discourse context of the argument. Among these characteristics of persuasive arguments, prior work in NLP does not explicitly investigate the effect of the pragmatic and discourse context when determining argument quality. This paper presents a new dataset to initiate the study of this aspect of argumentation: it consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims. We further propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely only on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument.
{ "section_name": [ "Introduction", "Related Work", "Dataset", "Methodology ::: Hypothesis and Task Description", "Methodology ::: Baseline Models ::: Majority", "Methodology ::: Baseline Models ::: SVM with RBF kernel", "Methodology ::: Baseline Models ::: FastText", "Methodology ::: Baseline Models ::: BiLSTM with Attention", "Methodology ::: Fine-tuned BERT model", "Methodology ::: Fine-tuned BERT model ::: Claim with no context", "Methodology ::: Fine-tuned BERT model ::: Claim with parent representation", "Methodology ::: Fine-tuned BERT model ::: Incorporating larger context", "Results and Analysis", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depends not only on the language employed, but also on the credibility and character of the communicator (i.e. ethos) BIBREF0, BIBREF1, BIBREF2; the traits and prior beliefs of the audience BIBREF3, BIBREF4, BIBREF5, BIBREF6; and the pragmatic context in which the argument is presented (i.e. kairos) BIBREF7, BIBREF8.", "Research in Natural Language Processing (NLP) has only partially corroborated these findings. One very influential line of work, for example, develops computational methods to automatically determine the linguistic characteristics of persuasive arguments BIBREF9, BIBREF10, BIBREF11, but it does so without controlling for the audience, the communicator or the pragmatic context.", "Very recent work, on the other hand, shows that attributes of both the audience and the communicator constitute important cues for determining argument strength BIBREF12, BIBREF13. They further show that audience and communicator attributes can influence the relative importance of linguistic features for predicting the persuasiveness of an argument. These results confirm previous findings in the social sciences that show a person's perception of an argument can be influenced by his background and personality traits.", "To the best of our knowledge, however, no NLP studies explicitly investigate the role of kairos — a component of pragmatic context that refers to the context-dependent “timeliness\" and “appropriateness\" of an argument and its claims within an argumentative discourse — in argument quality prediction. Among the many social science studies of attitude change, the order in which argumentative claims are shared with the audience has been studied extensively: 10.1086/209393, for example, summarize studies showing that the argument-related claims a person is exposed to beforehand can affect his perception of an alternative argument in complex ways. article-3 similarly find that changes in an argument's context can have a big impact on the audience's perception of the argument.", "Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media BIBREF10, BIBREF14. However, in social media not all posts have to express arguments or stay on topic BIBREF15, and qualitative evaluation of the posts can be influenced by many other factors such as interactions between the individuals BIBREF16. 
Therefore, it is difficult to measure the effect of argumentative pragmatic context alone in argument quality prediction, without interference from these confounding factors, using the datasets and models currently available in this line of research.", "In this paper, we study the role of kairos in argument quality prediction by examining the individual claims of an argument for their timeliness and appropriateness in the context of a particular line of argument. We define kairos as the sequence of argumentative text (e.g. claims) along a particular line of argumentative reasoning.", "To start, we present a dataset extracted from kialo.com of over 47,000 claims that are part of a diverse collection of arguments on 741 controversial topics. The structure of the website dictates that each argument must present a supporting or opposing claim for its parent claim, and stay within the topic of the main thesis. Rather than being posts on a social media platform, these are community-curated claims. Furthermore, for each presented claim, the audience votes on its impact within the given line of reasoning. Critically then, the dataset includes the argument context for each claim, allowing us to investigate the characteristics associated with impactful arguments.", "With the dataset in hand, we propose the task of studying the characteristics of impactful claims by (1) taking the argument context into account, (2) studying the extent to which this context is important, and (3) determining the representation of context that is most effective. To the best of our knowledge, ours is the first dataset that includes claims with both impact votes and the corresponding context of the argument." ], [ "Recent studies in computational argumentation have mainly focused on the tasks of identifying the structure of arguments, such as argument structure parsing BIBREF17, BIBREF18, and argument component classification BIBREF19, BIBREF20. More recently, there has been increased research interest in developing computational methods that can automatically evaluate qualitative characteristics of arguments, such as their impact and persuasive power BIBREF9, BIBREF10, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28. Consistent with findings in the social sciences and psychology, some of the work in NLP has shown that the impact and persuasive power of arguments are not simply related to the linguistic characteristics of the language, but also depend on characteristics of the source (ethos) BIBREF16 and the audience BIBREF12, BIBREF13. These studies suggest that the perception of an argument can be influenced by the credibility of the source and the background of the audience.", "It has also been shown, in social science studies, that kairos, which refers to the “timeliness” and “appropriateness” of arguments and claims, is important to consider in studies of argument impact and persuasiveness BIBREF7, BIBREF8. One recent study in NLP has investigated the role of argument sequencing in argument persuasion BIBREF14, looking at Change My View, which is a social media platform where users post their views, and challenge other users to present arguments in an attempt to change their views. However, as stated in BIBREF15, many posts on social media platforms either do not express an argument, or diverge from the main topic of conversation. Therefore, it is difficult to measure the effect of pragmatic context in argument impact and persuasion, without confounding factors from using noisy social media data. 
In contrast, we provide a dataset of claims along with their structured argument path, which only consists of claims and corresponds to a particular line of reasoning for the given controversial topic. This structure enables us to study the characteristics of impactful claims, accounting for the effect of the pragmatic context.", "Consistent with previous findings in the social sciences, we find that incorporating pragmatic and discourse context is important in computational studies of persuasion, as predictive models that with the context representation outperform models that only incorporate claim-specific linguistic features, in predicting the impact of a claim. Such a system that can predict the impact of a claim given an argumentative discourse, for example, could potentially be employed by argument retrieval and generation models which aims to pick or generate the most appropriate possible claim given the discourse." ], [ "Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented.", "Figure FIGREF1 shows a partial argument tree for the argument thesis “Physical torture of prisoners is an acceptable interrogation tool.”. Each node in the argument tree corresponds to a claim, and these argument trees are constructed and edited collaboratively by the users of the platform.", "Except the thesis, every claim in the argument tree either opposes or supports its parent claim. Each path from the root to leaf nodes corresponds to an argument path which represents a particular line of reasoning on the given controversial topic.", "Moreover, each claim has impact votes assigned by the users of the platform. The impact votes evaluate how impactful a claim is within its context, which consists of its predecessor claims from the thesis of the tree. For example, claim O1 “It is morally wrong to harm a defenseless person” is an opposing claim for the thesis and it is an impactful claim since most of its impact votes belong to the category of very high impact. However, claim S3 “It is illegitimate for state actors to harm someone without the process” is a supporting claim for its parent O1 and it is a less impactful claim since most of the impact votes belong to the no impact and low impact categories.", "Distribution of impact votes. The distribution of claims with the given range of number of impact votes are shown in Table TABREF5. There are 19,512 claims in total with 3 or more votes. Out of the claims with 3 or more votes, majority of them have 5 or more votes. We limit our study to the claims with at least 5 votes to have a more reliable assignment for the accumulated impact label for each claim.", "Impact label statistics. Table TABREF7 shows the distribution of the number of votes for each of the impact categories. The claims have $241,884$ total votes. The majority of the impact votes belong to medium impact category. 
We observe that users assign more high impact and very high impact votes than low impact and no impact votes respectively. When we restrict the claims to the ones with at least 5 impact votes, we have $213,277$ votes in total.", "Agreement for the impact votes. To determine the agreement in assigning the impact label for a particular claim, for each claim, we compute the percentage of the votes that are the same as the majority impact vote for that claim. Let $c_{i}$ denote the count of the claims with the class labels C=[no impact, low impact, medium impact, high impact, very high impact] for the impact label $l$ at index $i$.", "For example, for claim S1 in Figure FIGREF1, the agreement score is $100 * \\frac{30}{90}\\%=33.33\\%$ since the majority class (no impact) has 30 votes and there are 90 impact votes in total for this particular claim. We compute the agreement score for the cases where (1) we treat each impact label separately (5-class case) and (2) we combine the classes high impact and very high impact into a one class: impactful and no impact and low impact into a one class: not impactful (3-class case).", "Table TABREF6 shows the number of claims with the given agreement score thresholds when we include the claims with at least 5 votes. We see that when we combine the low impact and high impact classes, there are more claims with high agreement score. This may imply that distinguishing between no impact-low impact and high impact-very high impact classes is difficult. To decrease the sparsity issue, in our experiments, we use 3-class representation for the impact labels. Moreover, to have a more reliable assignment of impact labels, we consider only the claims with have more than 60% agreement.", "Context. In an argument tree, the claims from the thesis node (root) to each leaf node, form an argument path. This argument path represents a particular line of reasoning for the given thesis. Similarly, for each claim, all the claims along the path from the thesis to the claim, represent the context for the claim. For example, in Figure FIGREF1, the context for O1 consists of only the thesis, whereas the context for S3 consists of both the thesis and O1 since S3 is provided to support the claim O1 which is an opposing claim for the thesis.", "The claims are not constructed independently from their context since they are written in consideration with the line of reasoning so far. In most cases, each claim elaborates on the point made by its parent and presents cases to support or oppose the parent claim's points. Similarly, when users evaluate the impact of a claim, they consider if the claim is timely and appropriate given its context. There are cases in the dataset where the same claim has different impact labels, when presented within a different context. Therefore, we claim that it is not sufficient to only study the linguistic characteristic of a claim to determine its impact, but it is also necessary to consider its context in determining the impact.", "Context length ($\\text{C}_{l}$) for a particular claim C is defined by number of claims included in the argument path starting from the thesis until the claim C. For example, in Figure FIGREF1, the context length for O1 and S3 are 1 and 2 respectively. Table TABREF8 shows number of claims with the given range of context length for the claims with more than 5 votes and $60\\%$ agreement score. We observe that more than half of these claims have 3 or higher context length." 
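To make the vote-aggregation step above concrete (the agreement score is simply the share of a claim's votes that match its majority label, e.g. 30/90 = 33.33% for claim S1), the snippet below sketches how the 5-to-3 class collapsing and the at-least-5-votes / over-60%-agreement filter could be implemented. The example vote list is hypothetical; this is an illustration of the described preprocessing, not the authors' code.

from collections import Counter

COLLAPSE = {  # 5-class impact votes -> 3-class labels used in the experiments
    "no impact": "not impactful", "low impact": "not impactful",
    "medium impact": "medium impact",
    "high impact": "impactful", "very high impact": "impactful",
}

def impact_label(votes, min_votes=5, min_agreement=0.60):
    # Return the collapsed majority label, or None if the claim is filtered out.
    if len(votes) < min_votes:
        return None
    counts = Counter(COLLAPSE[v] for v in votes)
    label, top = counts.most_common(1)[0]
    agreement = top / len(votes)          # share of votes matching the majority label
    return label if agreement > min_agreement else None

# Hypothetical vote list for a single claim.
votes = ["high impact"] * 40 + ["very high impact"] * 15 + ["medium impact"] * 10
print(impact_label(votes))                # 'impactful' (55/65, about 0.85 agreement)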
], [ "Similar to prior work, our aim is to understand the characteristics of impactful claims in argumentation. However, we hypothesize that the qualitative characteristics of arguments is not independent of the context in which they are presented. To understand the relationship between argument context and the impact of a claim, we aim to incorporate the context along with the claim itself in our predictive models.", "Prediction task. Given a claim, we want to predict the impact label that is assigned to it by the users: not impactful, medium impact, or impactful.", "Preprocessing. We restrict our study to claims with at least 5 or more votes and greater than $60\\%$ agreement, to have a reliable impact label assignment. We have $7,386$ claims in the dataset satisfying these constraints. We see that the impact class impacful is the majority class since around $58\\%$ of the claims belong to this category.", "For our experiments, we split our data to train (70%), validation (15%) and test (15%) sets." ], [ "The majority baseline assigns the most common label of the training examples (high impact) to every test example." ], [ "Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim.", "The features that represent the simple characteristics of the claim's argument tree include the distance and similarity of the claim to the thesis, the similarity of a claim with its parent, and the impact votes of the claim's parent claim. We encode the similarity of a claim to its parent and the thesis claim with the cosine similarity of their tf-idf vectors. The distance and similarity metrics aim to model whether claims which are more similar (i.e. potentially more topically relevant) to their parent claim or the thesis claim, are more impactful.", "We encode the quality of the parent claim as the number of votes for each impact class, and incorporate it as a feature to understand if it is more likely for a claim to impactful given an impactful parent claim.", "Linguistic features. To represent each claim, we extracted the linguistic features proposed by BIBREF9 such as tf-idf scores for unigrams and bigrams, ratio of quotation marks, exclamation marks, modal verbs, stop words, type-token ratio, hedging BIBREF29, named entity types, POS n-grams, sentiment BIBREF30 and subjectivity scores BIBREF31, spell-checking, readibility features such as Coleman-Liau BIBREF32, Flesch BIBREF33, argument lexicon features BIBREF34 and surface features such as word lengths, sentence lengths, word types, and number of complex words." ], [ "joulin-etal-2017-bag introduced a simple, yet effective baseline for text classification, which they show to be competitive with deep learning classifiers in terms of accuracy. Their method represents a sequence of text as a bag of n-grams, and each n-gram is passed through a look-up table to get its dense vector representation. The overall sequence representation is simply an average over the dense representations of the bag of n-grams, and is fed into a linear classifier to predict the label. We use the code released by joulin-etal-2017-bag to train a classifier for argument impact prediction, based on the claim text." 
], [ "Another effective baseline BIBREF35, BIBREF36 for text classification consists of encoding the text sequence using a bidirectional Long Short Term Memory (LSTM) BIBREF37, to get the token representations in context, and then attending BIBREF38 over the tokens to get the sequence representation. For the query vector for attention, we use a learned context vector, similar to yang-etal-2016-hierarchical. We picked our hyperparameters based on performance on the validation set, and report our results for the best set of hyperparameters. We initialized our word embeddings with glove vectors BIBREF39 pre-trained on Wikipedia + Gigaword, and used the Adam optimizer BIBREF40 with its default settings." ], [ "devlin2018bert fine-tuned a pre-trained deep bi-directional transformer language model (which they call BERT), by adding a simple classification layer on top, and achieved state of the art results across a variety of NLP tasks. We employ their pre-trained language models for our task and compare it to our baseline models. For all the architectures described below, we finetune for 10 epochs, with a learning rate of 2e-5. We employ an early stopping procedure based on the model performance on a validation set." ], [ "In this setting, we attempt to classify the impact of the claim, based on the text of the claim only. We follow the fine-tuning procedure for sequence classification detailed in BIBREF41, and input the claim text as a sequence of tokens preceded by the special [CLS] token and followed by the special [SEP] token. We add a classification layer on top of the BERT encoder, to which we pass the representation of the [CLS] token, and fine-tune this for argument impact prediction." ], [ "In this setting, we use the parent claim's text, in addition to the target claim text, in order to classify the impact of the target claim. We treat this as a sequence pair classification task, and combine both the target claim and parent claim as a single sequence of tokens, separated by the special separator [SEP]. We then follow the same procedure above, for fine-tuning." ], [ "In this setting, we consider incorporating a larger context from the discourse, in order to assess the impact of a claim. In particular, we consider up to four previous claims in the discourse (for a total context length of 5). We attempt to incorporate larger context into the BERT model in three different ways.", "Flat representation of the path. The first, simple approach is to represent the entire path (claim + context) as a single sequence, where each of the claims is separated by the [SEP] token. BERT was trained on sequence pairs, and therefore the pre-trained encoders only have two segment embeddings BIBREF41. So to fit multiple sequences into this framework, we indicate all tokens of the target claim as belonging to segment A and the tokens for all the claims in the discourse context as belonging to segment B. This way of representing the input, requires no additional changes to the architecture or retraining, and we can just finetune in a similar manner as above. We refer to this representation of the context as a flat representation, and denote the model as $\\text{Context}_{f}(i)$, where $i$ indicates the length of the context that is incorporated into the model.", "Attention over context. Recent work in incorporating argument sequence in predicting persuasiveness BIBREF14 has shown that hierarchical representations are effective in representing context. 
Similarly, we consider hierarchical representations for representing the discourse. We first encode each claim using the pre-trained BERT model as the claim encoder, and use the representation of the [CLS] token as claim representation. We then employ dot-product attention BIBREF38 to get a weighted representation for the context. We use a learned context vector as the query for computing attention scores, similar to yang-etal-2016-hierarchical. The attention score $\\alpha _c$ for a claim $c$ in the discourse is computed with a softmax over the dot products: $\\alpha _c = \\frac{\\exp (V_l^{\\top } V_c)}{\\sum _{c^{\\prime } \\in D} \\exp (V_l^{\\top } V_{c^{\\prime }})}$,", "where $V_c$ is the claim representation that was computed with the BERT encoder as described above, $V_l$ is the learned context vector that is used for computing attention scores, and $D$ is the set of claims in the discourse. After computing the attention scores, the final context representation $v_d$ is computed as the attention-weighted sum $v_d = \\sum _{c \\in D} \\alpha _c V_c$.", "We then concatenate the context representation with the target claim representation, $[v_d, V_r]$, and pass it to the classification layer to predict the impact label. We denote this model as $\\text{Context}_{a}(i)$.", "GRU to encode context. Similar to the approach above, we consider a hierarchical representation for representing the context. We compute the claim representations, as detailed above, and we then feed the discourse claims' representations (in sequence) into a bidirectional Gated Recurrent Unit (GRU) BIBREF42, to compute the context representation. We concatenate this with the target claim representation and use this to predict the claim impact. We denote this model as $\\text{Context}_{gru}(i)$." ], [ "Table TABREF21 shows the macro precision, recall and F1 scores for the baselines as well as the BERT models with and without context representations.", "We see that parent quality is a simple yet effective feature, and the SVM model with this feature achieves a significantly higher ($p<0.001$) F1 score ($46.61\\%$) than the models using distance from the thesis and linguistic features. Claims with higher impact parents are more likely to have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and the FastText baseline perform better than the SVM with distance from the thesis and linguistic features, they have similar performance to the parent quality baseline.", "We find that the BERT model with claim only representation performs significantly better ($p<0.001$) than the baseline models. Incorporating the parent representation only along with the claim representation does not give a significant improvement over representing the claim only. However, incorporating the flat representation of the larger context along with the claim representation consistently achieves significantly better ($p<0.001$) performance than the claim representation alone. Similarly, the attention representation over the context with the learned query vector achieves significantly better performance than the claim representation only ($p<0.05$).", "We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. 
We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\\%$).", "To understand for which kinds of claims the best performing contextual model is more effective, we evaluate the BERT model with flat context representation for claims with context length values 1, 2, 3 and 4 separately. Table TABREF26 shows the F1 score of the BERT model without context and with flat context representation with different lengths of context. For the claims with context length 1, adding the $\\text{Context}_{f}(3)$ and $\\text{Context}_{f}(4)$ representations along with the claim achieves a significantly better $(p<0.05)$ F1 score than modeling the claim only. Similarly, for the claims with context length 3 and 4, $\\text{Context}_{f}(4)$ and $\\text{Context}_{f}(3)$ perform significantly better than BERT with claim only ($p<0.05$ and $p<0.01$ respectively). We see that models with larger context are helpful even for claims which have limited context (e.g. $\\text{C}_{l}=1$). This may suggest that when we train the models with larger context, they learn how to represent the claims and their context better." ], [ "In this paper, we present a dataset of claims with their corresponding impact votes, and investigate the role of argumentative discourse context in argument impact classification. We experiment with various models to represent the claims and their context and find that incorporating the context information gives a significant improvement in predicting argument impact. In our study, we find that the flat representation of the context gives the best improvement in performance, and our analysis indicates that the contextual models perform better even for the claims with limited context." ], [ "This work was supported in part by NSF grants IIS-1815455 and SES-1741441. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government." ] ] }
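To make the attention-over-context model (Context_a(i)) described in the modeling section above more concrete, here is a minimal PyTorch sketch assuming the target claim and its context claims have already been encoded into BERT [CLS] vectors; the hidden size follows BERT-base and the 3-class output follows the task, while everything else is an assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ContextAttention(nn.Module):
    """Sketch of Context_a(i): dot-product attention over the BERT [CLS] vectors of
    the i context claims with a learned query vector, then a classifier on [v_d; V_r]."""
    def __init__(self, hidden_size=768, n_classes=3):
        super().__init__()
        self.query = nn.Parameter(torch.randn(hidden_size))   # learned context vector V_l
        self.classifier = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, claim_vec, context_vecs):
        # claim_vec: (batch, hidden); context_vecs: (batch, i, hidden)
        scores = context_vecs @ self.query                     # (batch, i) dot products
        alpha = torch.softmax(scores, dim=-1)                  # attention weights alpha_c
        v_d = (alpha.unsqueeze(-1) * context_vecs).sum(dim=1)  # weighted context representation
        return self.classifier(torch.cat([v_d, claim_vec], dim=-1))

# Usage sketch: logits = ContextAttention()(claim_cls_vectors, context_cls_vectors)
```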
{ "question": [ "How better are results compared to baseline models?", "What models that rely only on claim-specific linguistic features are used as baselines?", "How is pargmative and discourse context added to the dataset?", "What annotations are available in the dataset?" ], "question_id": [ "ca26cfcc755f9d0641db0e4d88b4109b903dbb26", "6cdd61ebf84aa742155f4554456cc3233b6ae2bf", "8e8097cada29d89ca07166641c725e0f8fed6676", "951098f0b7169447695b47c142384f278f451a1e" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "F1 score of best authors' model is 55.98 compared to BiLSTM and FastText that have F1 score slighlty higher than 46.61.", "evidence": [ "We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\\%$) than distance from the thesis and linguistic features. Claims with higher impact parents are more likely to be have higher impact. Similarity with the parent and thesis is not significantly better than the majority baseline. Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline.", "We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\\%$)." ], "highlighted_evidence": [ "We see that parent quality is a simple yet effective feature and SVM model with this feature can achieve significantly higher ($p<0.001$) F1 score ($46.61\\%$) than distance from the thesis and linguistic features.", "Although the BiLSTM model with attention and FastText baselines performs better than the SVM with distance from the thesis and linguistic features, it has similar performance to the parent quality baseline.", "We find that the flat representation of the context achieves the highest F1 score. It may be more difficult for the models with a larger number of parameters to perform better than the flat representation since the dataset is small. We also observe that modeling 3 claims on the argument path before the target claim achieves the best F1 score ($55.98\\%$)." ] } ], "annotation_id": [ "08357ffcc372ab5b2dcdeef00478d3a45f7d1ddc" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "SVM with RBF kernel" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim." 
], "highlighted_evidence": [ "Similar to BIBREF9, we experiment with SVM with RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim." ] } ], "annotation_id": [ "fc4679a243e345a5d645efff11bc4e4317cde929" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented." ], "highlighted_evidence": [ "Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented." ] } ], "annotation_id": [ "c9c5229625288c47e9f396728a6162bc35fc8ea8" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Claims and impact votes. We collected 47,219 claims from kialo.com for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented." ], "highlighted_evidence": [ " Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of 5 possible impact labels for a particular claim: no impact, low impact, medium impact, high impact and very high impact. While evaluating the impact of a claim, users have access to the full argument context and therefore, they can assess how impactful a claim is in the given context of an argument." 
] } ], "annotation_id": [ "2e9ad78831c6a42fc1da68fde798899e8e64d8a8" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Example partial argument tree with claims and corresponding impact votes for the thesis “PHYSICAL TORTURE OF PRISONERS IS AN ACCEPTABLE INTERROGATION TOOL.”.", "Table 1: Number of claims for the given range of number of votes. There are 19,512 claims in the dataset with 3 or more votes. Out of the claims with 3 or more votes, majority of them have 5 or more votes.", "Table 2: Number of claims, with at least 5 votes, above the given threshold of agreement percentage for 3-class and 5-class cases. When we combine the low impact and high impact classes, there are more claims with high agreement score.", "Table 3: Number of votes for the given impact label. There are 241, 884 total votes and majority of them belongs to the category MEDIUM IMPACT.", "Table 4: Number of claims for the given range of context length, for claims with more than 5 votes and an agreement score greater than 60%.", "Table 5: Results for the baselines and the BERT models with and without the context. Best performing model is BERT with the representation of previous 3 claims in the path along with the claim representation itself. We run the models 5 times and we report the mean and standard deviation.", "Table 6: F1 scores of each model for the claims with various context length values." ], "file": [ "3-Figure1-1.png", "4-Table1-1.png", "5-Table2-1.png", "5-Table3-1.png", "5-Table4-1.png", "7-Table5-1.png", "8-Table6-1.png" ] }
1910.12618
Textual Data for Time Series Forecasting
While ubiquitous, textual sources of information such as company reports, social media posts, etc. are hardly included in prediction algorithms for time series, despite the relevant information they may contain. In this work, openly accessible daily weather reports from France and the United-Kingdom are leveraged to predict time series of national electricity consumption, average temperature and wind-speed with a single pipeline. Two methods of numerical representation of text are considered, namely traditional Term Frequency - Inverse Document Frequency (TF-IDF) as well as our own neural word embedding. Using exclusively text, we are able to predict the aforementioned time series with sufficient accuracy to be used to replace missing data. Furthermore the proposed word embeddings display geometric properties relating to the behavior of the time series and context similarity between words.
{ "section_name": [ "Introduction", "Presentation of the data", "Presentation of the data ::: Time Series", "Presentation of the data ::: Text", "Modeling and forecasting framework", "Modeling and forecasting framework ::: Numerical Encoding of the Text", "Modeling and forecasting framework ::: Machine Learning Algorithms", "Modeling and forecasting framework ::: Hyperparameter Tuning", "Experiments", "Experiments ::: Feature selection", "Experiments ::: Main results", "Experiments ::: Interpretability of the models", "Experiments ::: Interpretability of the models ::: TF-IDF representation", "Experiments ::: Interpretability of the models ::: Vector embedding representation", "Conclusion", "" ], "paragraphs": [ [ "Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point-of-view, up to now it has narrowed its efforts on the exploitation of regular numerical features extracted from sensors, data bases or stock exchanges. Unstructured data such as text on the other hand remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text driven approach could be used to make profit on the market. One of the most conventional ways for text representation is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series again. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict if NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was efficient to improve the prediction of stock volatility. More recently Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). Their results showed that indeed performance was better when using probabilities from their alternative than those of the original LDA. Deep learning with its natural ability to work with text through word embeddings has also been used for time series prediction with text. 
Combined with traditional time series features, the authors of BIBREF8 derived sentiment features from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al. BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently in Rodrigues et al. BIBREF11, short description of events (such as concerts, sports matches, ...) are leveraged through a word embedding and neural networks in addition to more traditional features. Their experiments show that including the text can bring an improvement of up to 2% of root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black-boxes.", "The field of electricity consumption is one where expert knowledge is broad. It is known that the major phenomena driving the load demand are calendar (time of the year, day of the week, ...) and meteorological. For instance generalized additive models (GAM) BIBREF12 representing the consumption as a sum of functions of the time of the year, temperature and wind speed (among others) typically yield less than 1.5% of relative error for French national electricity demand and 8% for local one BIBREF13, BIBREF14. Neural networks and their variants, with their ability to extract patterns from heterogeneous types of data have also obtained state-of-the-art results BIBREF15, BIBREF16, BIBREF17. However to our knowledge no exploratory work using text has been conducted yet. Including such data in electricity demand forecasting models would not only contribute to close the gap with other domains, but also help to understand better which aspects of text are useful, how the encoding of the text influences forecasts and to which extend a prediction algorithm can extract relevant information from unstructured data. Moreover the major drawback of all the aforementioned approaches is that they require meteorological data that may be difficult to find, unavailable in real time or expensive. Textual sources such as weather reports on the other hand are easy to find, usually available on a daily basis and free.", "The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. 
Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series.", "The rest of this paper is organized as follows. The following section introduces the two data sets used to conduct our study. Section 3 presents the different machine learning approaches used and how they were tuned. Section 4 highlights the main results of our study, while section 5 concludes this paper and gives insight on future possible work." ], [ "In order to prove the consistency of our work, experiments have been conducted on two data sets, one for France and the other for the UK. In this section details about the text and time series data are given, as well as the major preprocessing steps." ], [ "Three types of time series are considered in our work: national net electricity consumption (also referred as load or demand), national temperature and wind speed. The load data sets were retrieved on the websites of the respective grid operators, respectively RTE (Réseau et Transport d'Électricité) for France and National Grid for the UK. For France, the available data ranges from January the 1st 2007 to August the 31st 2018. The default temporal resolution is 30 minutes, but it is averaged to a daily one. For the UK, it is available from January the 1st 2006 to December the 31st 2018 with the same temporal resolution and thus averaging. Due to social factors such as energy policies or new usages of electricity (e.g. Electric Vehicles), the net consumption usually has a long-term trend (fig. FIGREF2). While for France it seems marginal (fig. FIGREF2), there is a strong decreasing trend for the United-Kingdom (fig. FIGREF2). Such a strong non-stationarity of the time series would cause problems for the forecasting process, since the learnt demand levels would differ significantly from the upcoming ones. Therefore a linear regression was used to approximate the decreasing trend of the net consumption in the UK. It is then subtracted before the training of the methods, and then re-added a posteriori for prediction.", "As for the weather time series, they were extracted from multiple weather stations around France and the UK. The national average is obtained by combining the data from all stations with a weight proportional to the city population the station is located in. For France the stations' data is provided by the French meteorological office, Météo France, while the British ones are scrapped from stations of the National Oceanic and Atmospheric Administration (NOAA). Available on the same time span as the consumption, they usually have a 3 hours temporal resolution but are averaged to a daily one as well. Finally the time series were scaled to the range $[0,1]$ before the training phase, and re-scaled during prediction time." ], [ "Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. 
In their raw form, those reports take the form of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span the same period as the corresponding time series and, given their daily nature, yield a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2.", "As emphasized in many studies, preprocessing of the text can ease the learning of the methods and improve accuracy BIBREF18. Therefore the following steps are applied: removal of non-alphabetic characters, removal of stop-words and lowercasing. While it was often highlighted that word lemmatization and stemming improve results, initial experiments showed it was not the case for our study. This is probably due to the technical vocabulary used in both corpora pertaining to the field of meteorology. Since the vocabulary is already limited in size, the aforementioned preprocessing operations do not yield a significant vocabulary size reduction and can even lead to a loss of linguistic meaning. Finally, extremely frequent or rare words may not have high explanatory power and may reduce the different models' accuracy. That is why words appearing fewer than 7 times or in more than 40% of the (learning) corpus are removed as well. Figure FIGREF8 represents the distribution of the document lengths after preprocessing, while table TABREF11 gives descriptive statistics on both corpora. Note that the preprocessing steps do not heavily rely on the considered language: therefore our pipeline is easily adaptable for other languages.", "A major target of our work is to show that the reports contain intrinsic information relevant to the time series, and that the predictive results do not heavily depend on the encoding of the text or the machine learning algorithm used. Therefore in this section we present the text encoding approaches, as well as the forecasting methods used with them." ], [ "Machines and algorithms cannot work with raw text directly. Thus one major step when working with text is the choice of its numerical representation. In our work two significantly different encoding approaches are considered. The first one is the TF-IDF approach. It embeds a corpus of $N$ documents and $V$ words into a matrix $X$ of size $N \\times V$. As such, every document is represented by a vector of size $V$. For each word $w$ and document $d$ the associated coefficient $x_{d,w}$ represents the frequency of that word in that document, penalized by its overall frequency in the rest of the corpus. Thus very common words will have a low TF-IDF value, whereas specific ones which appear often in a handful of documents will have a large TF-IDF score. The TF-IDF value of word $w$ in document $d$ is computed as $x_{d,w} = f_{d,w} \\times \\log \\left( \\frac{N}{\\#\\lbrace d: w \\in d \\rbrace } \\right)$,", "where $f_{d,w}$ is the number of appearances of $w$ in $d$ adjusted by the length of $d$ and $\\#\\lbrace d: w \\in d \\rbrace $ is the number of documents in which the word $w$ appears. In our work we considered only individual words, also commonly referred to as 1-grams in the field of natural language processing (NLP); a minimal sketch of this preprocessing and encoding pipeline is given below. 
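The sketch below, using scikit-learn, illustrates the preprocessing and TF-IDF encoding just described; the variable names are hypothetical, the stop-word list shown is the English one (a French list would be substituted for the Météo France corpus), and scikit-learn's min_df counts documents rather than occurrences, so it only approximates the "fewer than 7 times" filter.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS

def clean(report):
    """Preprocessing described above: keep alphabetic tokens, lowercase, drop stop-words."""
    tokens = re.findall(r"[a-z]+", report.lower())
    return " ".join(t for t in tokens if t not in ENGLISH_STOP_WORDS)

def encode_tfidf(reports):
    """`reports` is a hypothetical list with one raw weather report per day."""
    vectorizer = TfidfVectorizer(
        preprocessor=clean,
        min_df=7,    # proxy for "appearing fewer than 7 times": counts documents, not occurrences
        max_df=0.4,  # drop words present in more than 40% of the corpus
    )
    X = vectorizer.fit_transform(reports)   # N x V matrix of TF-IDF coefficients
    return X, vectorizer
```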
The methodology can be easily extended to $n$-grams (groups of $n$ consecutive words), but initial experiments showed that it did not bring any significant improvement over 1-grams.", "The second representation is a neural word embedding. It consists in representing every word in the corpus by a real-valued vector of dimension $q$. Such models are usually obtained by learning a vector representation from word co-occurrences in a very large corpus (typically hundred thousands of documents, such as Wikipedia articles for example). The two most popular embeddings are probably Google's Word2Vec BIBREF19 and Standford's GloVe BIBREF20. In the former, a neural network is trained to predict a word given its context (continuous bag of word model), whereas in the latter a matrix factorization scheme on the log co-occurences of words is applied. In any case, the very nature of the objective function allows the embedding models to learn to translate linguistic similarities into geometric properties in the vector space. For instance the vector $\\overrightarrow{king} - \\overrightarrow{man} + \\overrightarrow{woman}$ is expected to be very close to the vector $\\overrightarrow{queen}$. However in our case we want a vector encoding which is tailored for the technical vocabulary of our weather reports and for the subsequent prediction task. This is why we decided to train our own word embedding from scratch during the learning phase of our recurrent or convolutional neural network. Aside from the much more restricted size of our corpora, the major difference with the aforementioned embeddings is that in our case it is obtained by minimizing a squared loss on the prediction. In that framework there is no explicit reason for our representation to display any geometric structure. However as detailed in section SECREF36, our word vectors nonetheless display geometric properties pertaining to the behavior of the time series." ], [ "Multiple machine learning algorithms were applied on top of the encoded textual documents. For the TF-IDF representation, the following approaches are applied: random forests (RF), LASSO and multilayer perceptron (MLP) neural networks (NN). We chose these algorithms combined to the TF-IDF representation due to the possibility of interpretation they give. Indeed, considering the novelty of this work, the understanding of the impact of the words on the forecast is of paramount importance, and as opposed to embeddings, TF-IDF has a natural interpretation. Furthermore the RF and LASSO methods give the possibility to interpret marginal effects and analyze the importance of features, and thus to find the words which affect the time series the most.", "As for the word embedding, recurrent or convolutional neural networks (respectively RNN and CNN) were used with them. MLPs are not used, for they would require to concatenate all the vector representations of a sentence together beforehand and result in a network with too many parameters to be trained correctly with our number of available documents. Recall that we decided to train our own vector representation of words instead of using an already available one. In order to obtain the embedding, the texts are first converted into a sequence of integers: each word is given a number ranging from 1 to $V$, where $V$ is the vocabulary size (0 is used for padding or unknown words in the test set). One must then calculate the maximum sequence length $S$, and sentences of length shorter than $S$ are then padded by zeros. 
During the training process of the network, for each word a $q$ dimensional real-valued vector representation is calculated simultaneously to the rest of the weights of the network. Ergo a sentence of $S$ words is translated into a sequence of $S$ $q$-sized vectors, which is then fed into a recurrent neural unit. For both languages, $q=20$ seemed to yield the best results. In the case of recurrent units two main possibilities arise, with LSTM (Long Short-Term Memory) BIBREF21 and GRU (Gated Recurrent Unit) BIBREF22. After a few initial trials, no significant performance differences were noticed between the two types of cells. Therefore GRU were systematically used for recurrent networks, since their lower amount of parameters makes them easier to train and reduces overfitting. The output of the recurrent unit is afterwards linked to a fully connected (also referred as dense) layer, leading to the final forecast as output. The rectified linear unit (ReLU) activation in dense layers systematically gave the best results, except on the output layer where we used a sigmoid one considering the time series' normalization. In order to tone down overfitting, dropout layers BIBREF23 with probabilities of 0.25 or 0.33 are set in between the layers. Batch normalization BIBREF24 is also used before the GRU since it stabilized training and improved performance. Figure FIGREF14 represents the architecture of our RNN.", "The word embedding matrix is therefore learnt jointly with the rest of the parameters of the neural network by minimization of the quadratic loss with respect to the true electricity demand. Note that while above we described the case of the RNN, the same procedure is considered for the case of the CNN, with only the recurrent layers replaced by a combination of 1D convolution and pooling ones. As for the optimization algorithms of the neural networks, traditional stochastic gradient descent with momentum or ADAM BIBREF25 together with a quadratic loss are used. All of the previously mentioned methods were coded with Python. The LASSO and RF were implemented using the library Scikit Learn BIBREF26, while Keras BIBREF27 was used for the neural networks." ], [ "While most parameters are trained during the learning optimization process, all methods still involve a certain number of hyperparameters that must be manually set by the user. For instance for random forests it can correspond to the maximum depth of the trees or the fraction of features used at each split step, while for neural networks it can be the number of layers, neurons, the embedding dimension or the activation functions used. This is why the data is split into three sets:", "The training set, using all data available up to the 31st of December 2013 (2,557 days for France and 2,922 for the UK). It is used to learn the parameters of the algorithms through mathematical optimization.", "The years 2014 and 2015 serve as validation set (730 days). It is used to tune the hyperparameters of the different approaches.", "All the data from January the 1st 2016 (974 days for France and 1,096 for the UK) is used as test set, on which the final results are presented.", "Grid search is applied to find the best combination of values: for each hyperparameter, a range of values is defined, and all the possible combinations are successively tested. The one yielding the lowest RMSE (see section SECREF4) on the validation set is used for the final results on the test one. 
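A minimal Keras sketch of the embedding-plus-GRU forecaster described above is given here; the embedding dimension q = 20, the batch normalization before the GRU, the dropout, the ReLU dense layer and the sigmoid output follow the text, while the layer widths are assumptions.

```python
from tensorflow.keras import layers, models

def build_text_forecaster(vocab_size, q=20):
    """Sketch of the text-only RNN forecaster: embedding -> batch norm -> GRU ->
    dropout -> dense(ReLU) -> sigmoid output (the time series are scaled to [0, 1])."""
    model = models.Sequential([
        layers.Embedding(vocab_size + 1, q),    # index 0 reserved for padding / unknown words
        layers.BatchNormalization(),
        layers.GRU(32),                         # hidden size is an assumption
        layers.Dropout(0.25),
        layers.Dense(16, activation="relu"),    # layer width is an assumption
        layers.Dropout(0.25),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mse") # quadratic loss, as in the text
    return model
```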
While relatively straightforward for RFs and the LASSO, the extreme number of possibilities for NNs and their extensive training time compelled us to limit the range of possible architectures. The hyperparameters are tuned per method and per country: ergo the hyperparameters of a given algorithm will be the same for the different time series of a country (e.g. the RNN architecture for temperature and load for France will be the same, but different from the UK one). Finally, before application on the testing set, all the methods are re-trained from scratch using both the training and validation data." ], [ "The goal of our experiments is to quantify how close one can get using textual data only when compared to numerical data. However the inputs of the numerical benchmark should hence be comparable to the information contained in the weather reports. Considering they mainly contain calendar (day of the week and month) as well as temperature and wind information, the benchmark of comparison is a random forest trained on four features only: the time of the year (whose value is 0 on January the 1st and 1 on December the 31st with a linear growth in between), the day of the week, the national average temperature and wind speed. The metrics of evaluation are the Mean Absolute Percentage Error (MAPE), the Root Mean Squared Error (RMSE), the Mean Absolute Error (MAE) and the $R^2$ coefficient, given by $\\text{MAPE} = \\frac{100}{T} \\sum _{t=1}^{T} \\left| \\frac{y_t - \\hat{y}_t}{y_t} \\right|$, $\\text{RMSE} = \\sqrt{\\frac{1}{T} \\sum _{t=1}^{T} (y_t - \\hat{y}_t)^2}$, $\\text{MAE} = \\frac{1}{T} \\sum _{t=1}^{T} |y_t - \\hat{y}_t|$ and $R^2 = 1 - \\frac{\\sum _{t=1}^{T} (\\hat{y}_t - y_t)^2}{\\sum _{t=1}^{T} (y_t - \\overline{y})^2}$,", "where $T$ is the number of test samples, $y_t$ and $\\hat{y}_t$ are respectively the ground truth and the prediction for the document of day $t$, and $\\overline{y}$ is the empirical average of the time series over the test sample. A known problem with MAPE is that it unreasonably increases the error score for values close to 0. While for the load it isn't an issue at all, it can be for the meteorological time series. Therefore for the temperature, the MAPE is calculated only when the ground truth is above the 5% empirical quantile. Although we aim at achieving the highest accuracy possible, we focus on the interpretability of our models as well." ], [ "Many words are obviously irrelevant to the time series in our texts. For instance the day of the week, while playing a significant role for the load demand, is useless for temperature or wind. Such words make the training harder and may decrease the accuracy of the prediction. Therefore a feature selection procedure similar to BIBREF28 is applied to select a subset of useful features for the different algorithms, and for each type of time series. Random forests are naturally able to calculate feature importance through the calculation of error increase in the out-of-bag (OOB) samples. Therefore the following process is applied to select a subset of $V^*$ relevant words to keep:", "An RF is trained on the whole training & validation set. The OOB feature importance can thus be calculated.", "The features are then successively added to the RF in decreasing order of feature importance.", "This process is repeated $B=10$ times to tone down the randomness. The number $V^*$ is then set to the number of features giving the highest median OOB $R^2$ value.", "The results of this procedure for the French data are represented in figure FIGREF24. The best median $R^2$ is achieved for $V^* = 52$, although one could argue that not much gain is obtained after 36 words. The results are very similar for the UK data set, thus for the sake of simplicity the same value $V^* = 52$ is used; a sketch of this selection loop is given below. 
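The following sketch illustrates the forward-selection loop just described; scikit-learn's impurity-based feature_importances_ is used here as a stand-in for the OOB permutation importance of the paper, and the exhaustive loop over vocabulary sizes is kept for clarity rather than speed.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def select_vocabulary(X, y, B=10):
    """Forward selection of V* words: rank words by RF importance, then add them one by
    one and keep the vocabulary size with the best median OOB R^2 over B repetitions.
    X: (n_days, V) TF-IDF matrix (dense or sparse); y: the target time series."""
    ranking = np.argsort(
        RandomForestRegressor(oob_score=True, random_state=0).fit(X, y).feature_importances_
    )[::-1]
    n_words = X.shape[1]
    scores = np.zeros((B, n_words))
    for b in range(B):
        for k in range(1, n_words + 1):
            rf = RandomForestRegressor(oob_score=True, random_state=b)
            rf.fit(X[:, ranking[:k]], y)
            scores[b, k - 1] = rf.oob_score_   # OOB R^2 with the top-k words
    v_star = int(np.argmax(np.median(scores, axis=0))) + 1
    return ranking[:v_star]
```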
Note that the same subset of words is used for all the different forecasting models, which could be improved in further work using other selection criteria (e.g. mutual information, see BIBREF29). An example of normalized feature importance is given in figure. FIGREF32." ], [ "Note that most of the considered algorithms involve randomness during the training phase, with the subsampling in the RFs or the gradient descent in the NNs for instance. In order to tone it down and to increase the consistency of our results, the different models are run $B=10$ times. The results presented hereafter correspond to the average and standard-deviation on those runs. The RF model denoted as \"sel\" is the one with the reduced number of features, whereas the other RF uses the full vocabulary. We also considered an aggregated forecaster (abridged Agg), consisting of the average of the two best individual ones in terms of RMSE. All the neural network methods have a reduced vocabulary size $V^*$. The results for the French and UK data are respectively given by tables TABREF26 and TABREF27.", "Our empirical results show that for the electricity consumption prediction task, the order of magnitude of the relative error is around 5%, independently of the language, encoding and machine learning method, thus proving the intrinsic value of the information contained in the textual documents for this time series. As expected, all text based methods perform poorer than when using explicitly numerical input features. Indeed, despite containing relevant information, the text is always more fuzzy and less precise than an explicit value for the temperature or the time of the year for instance. Again the aim of this work is not to beat traditional methods with text, but quantifying how close one can come to traditional approaches when using text exclusively. As such achieving less than 5% of MAPE was nonetheless deemed impressive by expert electricity forecasters. Feature selection brings significant improvement in the French case, although it does not yield any improvement in the English one. The reason for this is currently unknown. Nevertheless the feature selection procedure also helps the NNs by dramatically reducing the vocabulary size, and without it the training of the networks was bound to fail. While the errors accross methods are roughly comparable and highlight the valuable information contained within the reports, the best method nonetheless fluctuates between languages. Indeed in the French case there is a hegemony of the NNs, with the embedding RNN edging the MLP TF-IDF one. However for the UK data set the RFs yield significantly better results on the test set than the NNs. This inversion of performance of the algorithms is possibly due to a change in the way the reports were written by the Met Office after August 2017, since the results of the MLP and RNN on the validation set (not shown here) were satisfactory and better than both RFs. For the two languages both the CNN and the LASSO yielded poor results. For the former, it is because despite grid search no satisfactory architecture was found, whereas the latter is a linear approach and was used more for interpretation purposes than strong performance. Finally the naive aggregation of the two best experts always yields improvement, especially for the French case where the two different encodings are combined. This emphasises the specificity of the two representations leading to different types of errors. 
An example of comparison between ground truth and forecast for the case of electricity consumption is given for the French language with fig. FIGREF29, while another for temperature may be found in the appendix FIGREF51. The sudden \"spikes\" in the forecast are due to the presence of winter related words in a summer report. This is the case when used in comparisons, such as \"The flood will be as severe as in January\" in a June report and is a limit of our approach. Finally, the usual residual $\\hat{\\varepsilon }_t = y_t - \\hat{y}_t$ analyses procedures were applied: Kolmogorov normality test, QQplots comparaison to gaussian quantiles, residual/fit comparison... While not thoroughly gaussian, the residuals were close to normality nonetheless and displayed satisfactory properties such as being generally independent from the fitted and ground truth values. Excerpts of this analysis for France are given in figure FIGREF52 of the appendix. The results for the temperature and wind series are given in appendix. Considering that they have a more stochastic behavior and are thus more difficult to predict, the order of magnitude of the errors differ (the MAPE being around 15% for temperature for instance) but globally the same observations can be made." ], [ "While accuracy is the most relevant metric to assess forecasts, interpretability of the models is of paramount importance, especially in the field of professional electricity load forecasting and considering the novelty of our work. Therefore in this section we discuss the properties of the RF and LASSO models using the TF-IDF encoding scheme, as well as the RNN word embedding." ], [ "One significant advantage of the TF-IDF encoding when combined with random forests or the LASSO is that it is possible to interpret the behavior of the models. For instance, figure FIGREF32 represents the 20 most important features (in the RF OOB sense) for both data sets when regressing over electricity demand data. As one can see, the random forest naturally extracts calendar information contained in the weather reports, since months or week-end days are among the most important ones. For the former, this is due to the periodic behavior of electricity consumption, which is higher in winter and lower in summer. This is also why characteristic phenomena of summer and winter, such as \"thunderstorms\", \"snow\" or \"freezing\" also have a high feature importance. The fact that August has a much more important role than July also concurs with expert knowledge, especially for France: indeed it is the month when most people go on vacations, and thus when the load drops the most. As for the week-end names, it is due to the significantly different consumer behavior during Saturdays and especially Sundays when most of the businesses are closed and people are usually at home. Therefore the relevant words selected by the random forest are almost all in agreement with expert knowledge.", "We also performed the analysis of the relevant words for the LASSO. In order to do that, we examined the words $w$ with the largest associated coefficients $\\beta _w$ (in absolute value) in the regression. Since the TF-IDF matrix has positive coefficients, it is possible to interpret the sign of the coefficient $\\beta _w$ as its impact on the time series. For instance if $\\beta _w > 0$ then the presence of the word $w$ causes a rise the time series (respectively if $\\beta _w < 0$, it entails a decline). The results are plotted fig. FIGREF35 for the the UK. 
As one can see, the winter related words have positive coefficients, and thus increase the load demand as expected, whereas the summer related ones decrease it. The value of the coefficients also reflects the impact on the load demand. For example January and February have the highest and very similar values, which concurs with the similarity between the months. Sunday has a much more negative coefficient than Saturday, since the demand significantly drops during the last day of the week. The important words also globally match between the LASSO and the RF, which is a proof of the consistency of our results (this is further explored afterwards in figure FIGREF43). Although not presented here, the results are almost identical for the French load, with approximately the same order of relevancy. The important words logically vary depending on the considered time series, but are always coherent. For instance for the wind one, terms such as \"gales\", \"windy\" or \"strong\" have the highest positive coefficients, as seen in the appendix figure FIGREF53. Those results show that a text based approach not only extracts the relevant information by itself, but it may eventually be used to understand which phenomena are relevant to explain the behavior of a time series, and to what extent.", "Word vector embeddings such as Word2Vec and GloVe are known for their vectorial properties translating linguistic ones. However considering the objective function of our problem, there was no obvious reason for such attributes to appear in our own. Nevertheless for both languages we conducted an analysis of the geometric properties of our embedding matrix. We investigated the distances between word vectors, the relevant metric being the cosine distance given by $d_{\\cos }(\\overrightarrow{w_1}, \\overrightarrow{w_2}) = 1 - \\frac{\\overrightarrow{w_1} \\cdot \\overrightarrow{w_2}}{\\Vert \\overrightarrow{w_1} \\Vert \\, \\Vert \\overrightarrow{w_2} \\Vert }$,", "where $\\overrightarrow{w_1}$ and $\\overrightarrow{w_2}$ are given word vectors. Thus a cosine distance lower than 1 means similarity between word vectors, whereas a distance greater than 1 corresponds to opposition.", "The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The last two rows correspond to words we deemed important to check the distance with (an antagonistic one or a relevant one not in the top 9, for instance).", "The results of the experiments are very similar for both languages again. 
Indeed, the words are globally embedded in the vector space by topic: winter related words such as \"January\" (\"janvier\"), \"February\" (\"février\"), \"snow\" (\"neige\"), \"freezing\" (\"glacial\") are close to each other and almost opposite to summer related ones such as \"July\" (\"juillet\"), \"August\" (\"août\"), \"hot\" (\"chaud\"). For both cases the week days Monday (\"lundi\") to Friday (\"vendredi\") are grouped very closely to each other, while significantly separated from the week-end ones \"Saturday\" (\"samedi\") and \"Sunday\" (\"dimanche\"). Despite these observations, a few seemingly unrelated words enter the lists of top 10, especially for the English case (such as \"pressure\" or \"dusk\" for \"February\"). In fact the French language embedding seems of better quality, which is perhaps linked to the longer length of the French reports in average. This issue could probably be addressed with more data. Another observation made is that the importance of a word $w$ seems related to its euclidean norm in the embedding space ${\\overrightarrow{w}}_2$. For both languages the list of the 20 words with the largest norm is given fig. FIGREF40. As one can see, it globally matches the selected ones from the RF or the LASSO (especially for the French language), although the order is quite different. This is further supported by the Venn diagram of common words among the top 50 ones for each word selection method represented in figure FIGREF43 for France. Therefore this observation could also be used as feature selection procedure for the RNN or CNN in further work.", "In order to achieve a global view of the embeddings, the t-SNE algorithm BIBREF30 is applied to project an embedding matrix into a 2 dimensional space, for both languages. The observations for the few aforementioned words are confirmed by this representation, as plotted in figure FIGREF44. Thematic clusters can be observed, roughly corresponding to winter, summer, week-days, week-end days for both languages. Globally summer and winter seem opposed, although one should keep in mind that the t-SNE representation does not preserve the cosine distance. The clusters of the French embedding appear much more compact than the UK one, comforting the observations made when explicitly calculating the cosine distances." ], [ "In this study, a novel pipeline to predict three types of time series using exclusively a textual source was proposed. Making use of publicly available daily weather reports, we were able to predict the electricity consumption with less than 5% of MAPE for both France and the United-Kingdom. Moreover our average national temperature and wind speed predictions displayed sufficient accuracy to be used to replace missing data or as first approximation in traditional models in case of unavailability of meteorological features.", "The texts were encoded numerically using either TF-IDF or our own neural word embedding. A plethora of machine learning algorithms such as random forests or neural networks were applied on top of those representations. Our results were consistent over language, numerical representation of the text and prediction algorithm, proving the intrinsic value of the textual sources for the three considered time series. Contrarily to previous works in the field of textual data for time series forecasting, we went in depth and quantified the impact of words on the variations of the series. 
As such we saw that all the algorithms naturally extract calendar and meteorological information from the texts, and that words impact the time series in the expected way (e.g. winter words increase the consumption and summer ones decrease it). Despite being trained on a regular quadratic loss, our neural word embedding spontaneously builds geometric properties. Not only does the norm of a word vector reflect its significance, but the words are also grouped by topic with for example winter, summer or day of the week clusters.", "Note that this study was a preliminary work on the use of textual information for time series prediction, especially electricity demand one. The long-term goal is to include multiple sources of textual information to improve the accuracy of state-of-the-art methods or to build a text based forecaster which can be used to increase the diversity in a set of experts for electricity consumption BIBREF31. However due to the redundancy of the information of the considered weather reports with meteorological features, it may be necessary to consider alternative textual sources. The use of social media such as Facebook, Twitter or Instagram may give interesting insight and will therefore be investigated in future work." ], [ "Additional results for the prediction tasks on temperature and wind speed can be found in tables TABREF47 to TABREF50. An example of forecast for the French temperature is given in figure FIGREF51.", "While not strictly normally distributed, the residuals for the French electricity demand display an acceptable behavior. This holds also true for the British consumption, and both temperature time series, but is of lesser quality for the wind one.", "The the UK wind LASSO regression, the words with the highest coefficients $\\beta _w$ are indeed related to strong wind phenomena, whereas antagonistic ones such as \"fog\" or \"mist\" have strongly negative ones as expected (fig. FIGREF53).", "For both languages we represented the evolution of the (normalized) losses for the problem of load regression in fig. FIGREF54. The aspect is a typical one, with the validation loss slightly above the training one. The slightly erratic behavior of the former one is possibly due to a lack of data to train the embeddings.", "The cosine distances for three other major words and for both corpora have been calculated as well. The results are given in tables TABREF57 and TABREF58. For both languages, the three summer months are grouped together, and so are the two week-end days. However again the results are less clear for the English language. They are especially mediocre for \"hot\", considering that only \"warm\" seems truly relevant and that \"August\" for instance is quite far away. For the French language instead of \"hot\" the distances to \"thunderstorms\" were calculated. The results are quite satisfactory, with \"orageux\"/\"orageuse\" (\"thundery\") coming in the two first places and related meteorological phenomena (\"cumulus\" and \"grêle\", meaning \"hail\") relatively close as well. For the French case, Saturday and Sunday are very close to summer related words. This observation probably highlights the fact that the RNN groups load increasing and decreasing words in opposite parts of the embedding space." ] ] }
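The cosine-distance neighbourhoods and the word-norm ranking discussed in the interpretability sections above can be reproduced from any learned embedding matrix with a few lines; the function and variable names below are hypothetical, and the sketch assumes the embedding weights and the vocabulary list have been extracted from the trained network.

```python
import numpy as np

def nearest_words(embedding, vocab, word, k=9):
    """Cosine-distance neighbours of `word`.
    embedding: (V, q) matrix of learned word vectors; vocab: list of the V words."""
    W = embedding / np.linalg.norm(embedding, axis=1, keepdims=True)
    d = 1.0 - W @ W[vocab.index(word)]           # cosine distance, in [0, 2]
    order = np.argsort(d)
    return [(vocab[i], float(d[i])) for i in order[1:k + 1]]  # skip the word itself

def words_by_norm(embedding, vocab, k=20):
    """Words with the largest euclidean norm (observed above to track importance)."""
    norms = np.linalg.norm(embedding, axis=1)
    return [vocab[i] for i in np.argsort(norms)[::-1][:k]]
```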
{ "question": [ "How big is dataset used for training/testing?", "Is there any example where geometric property is visible for context similarity between words?", "What geometric properties do embeddings display?", "How accurate is model trained on text exclusively?" ], "question_id": [ "07c59824f5e7c5399d15491da3543905cfa5f751", "77f04cd553df691e8f4ecbe19da89bc32c7ac734", "728a55c0f628f2133306b6bd88af00eb54017b12", "d5498d16e8350c9785782b57b1e5a82212dbdaad" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "4,261 days for France and 4,748 for the UK", "evidence": [ "Our work aims at predicting time series using exclusively text. Therefore for both countries the inputs of all our models consist only of written daily weather reports. Under their raw shape, those reports take the form of PDF documents giving a short summary of the country's overall weather, accompanied by pressure, temperature, wind, etc. maps. Note that those reports are written a posteriori, although they could be written in a predictive fashion as well. The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively. An excerpt for each language may be found in tables TABREF6 and TABREF7. The relevant text was extracted from the PDF documents using the Python library PyPDF2." ], "highlighted_evidence": [ "The reports are published by Météo France and the Met Office, its British counterpart. They are publicly available on the respective websites of the organizations. Both corpora span on the same period as the corresponding time series and given their daily nature, it yields a total of 4,261 and 4,748 documents respectively." ] } ], "annotation_id": [ "e6c530042231f1a95608b2495514fe8b5ad08d28" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. 
The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance)." ], "highlighted_evidence": [ "The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. " ] } ], "annotation_id": [ "5aa11104f6641837a83ea424f900ee683d194b79" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Winter and summer words formed two separate clusters. Week day and week-end day words also formed separate clusters.", "evidence": [ "The initial analyses of the embedding matrices for both the UK and France revealed that in general, words were grouped by context or influence on the electricity consumption. For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days. However considering the vocabulary was reduced to $V^* = 52$ words, those results lacked of consistency. Therefore for both languages we decided to re-train the RNNs using the same architecture, but with a larger vocabulary of the $V=300$ most relevant words (still in the RF sense) and on all the available data (i.e. everything is used as training) to compensate for the increased size of the vocabulary. We then calculated the distance of a few prominent words to the others. The analysis of the average cosine distance over $B=10$ runs for three major words is given by tables TABREF38 and TABREF39, and three other examples are given in the appendix tables TABREF57 and TABREF58. The first row corresponds to the reference word vector $\\overrightarrow{w_1}$ used to calculate the distance from (thus the distance is always zero), while the following ones are the 9 closest to it. The two last rows correspond to words we deemed important to check the distance with (an antagonistic one or relevant one not in the top 9 for instance)." ], "highlighted_evidence": [ "For instance, we observed that winter words were together and far away from summer ones. Week days were grouped as well and far from week-end days." ] } ], "annotation_id": [ "f704fdce4c0a29cd04b3bd36b5062fd44e16c965" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Relative error is less than 5%", "evidence": [ "The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. 
Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series." ], "highlighted_evidence": [ "With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets." ] } ], "annotation_id": [ "08426e8d76bfe140f762a3949db74028e5b14163" ], "worker_id": [ "71f73551e7aabf873649e8fe97aefc54e6dd14f8" ] } ] }
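The headline result quoted above (electricity consumption predicted from text alone with a relative error below 5%) rests on a simple pipeline: TF-IDF encoding of the daily weather reports followed by a regressor such as a random forest, evaluated with MAPE. The sketch below illustrates that pipeline; the toy data frame, its column names and all hyperparameter values are placeholders rather than the paper's configuration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical data: one row per day, with the raw report text and the target load.
df = pd.DataFrame({
    "report": ["cold and cloudy with snow showers", "hot and sunny afternoon"] * 50,
    "load":   np.random.default_rng(0).uniform(40_000, 90_000, size=100),
})
split = int(0.8 * len(df))

# TF-IDF encoding of the daily weather reports.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X_train = vectorizer.fit_transform(df["report"][:split])
X_test = vectorizer.transform(df["report"][split:])

# Random forest regression on top of the text representation.
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, df["load"][:split])
pred = model.predict(X_test)

# Mean Absolute Percentage Error, the headline metric (below 5% in the paper).
y_true = df["load"][split:].to_numpy()
mape = 100.0 * np.mean(np.abs((y_true - pred) / y_true))
print(f"MAPE: {mape:.2f}%")
```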
{ "caption": [ "Figure 1: Net electricity consumption (Load) over time.", "Figure 2: Word counts for the two corpora after preprocessing.", "Table 3: Descriptive analysis of the two corpora (after preprocessing)", "Figure 3: Structure of our RNN. Dropout and batch normalization are not represented.", "Figure 4: Evolution of the OOB R2 during the selection procedure.", "Table 4: Forecast errors on the net load for the French Dataset.", "Table 5: Forecast errors on the net load for the British Dataset.", "Table 6: Best (individual, in terms of RMSE) result for each of the considered time series.", "Figure 5: Overlapping of prediction and real load (France)", "Figure 6: RF feature importance over the B = 10 runs.", "Figure 7: Coefficients βw in the british load LASSO regression.", "Table 7: Closest words (in the cosine sense) to ”february”,”snow” and ”tuesday” for the UK", "Table 8: Closest words (in the cosine sense) to ”february”,”snow” and ”tuesday” for France", "Figure 8: Word vector log-norm over B = 10.", "Figure 9: Venn diagram of common words among the top 50 ones for each selection procedure (France).", "Figure 10: 2D t-SNE projections of the embedding matrix for both languages.", "Table A.9: Forecast errors on the national temperature for France.", "Table A.10: Forecast errors on the national wind for France.", "Table A.11: Forecast errors on the national temperature for Great-Britain.", "Table A.12: Forecast errors on the national wind for Great-Britain.", "Figure A.11: Overlapping of prediction and national Temperature (France)", "Figure A.12: Residual analysis of the French aggregated predictor.", "Figure A.13: Coefficients βw in the British wind LASSO regression.", "Figure A.14: Loss (Mean Squared Error) evolution of the electricity demand RNN for both languages.", "Table A.13: Closest words (in the cosine sense) to ”August”,”Sunday” and ”Hot” for the UK", "Table A.14: Closest words (in the cosine sense) to ”August”,”Sunday and ”thunderstorms” for the France" ], "file": [ "3-Figure1-1.png", "5-Figure2-1.png", "5-Table3-1.png", "7-Figure3-1.png", "9-Figure4-1.png", "9-Table4-1.png", "10-Table5-1.png", "11-Table6-1.png", "11-Figure5-1.png", "12-Figure6-1.png", "13-Figure7-1.png", "14-Table7-1.png", "14-Table8-1.png", "15-Figure8-1.png", "15-Figure9-1.png", "16-Figure10-1.png", "17-TableA.9-1.png", "17-TableA.10-1.png", "17-TableA.11-1.png", "17-TableA.12-1.png", "18-FigureA.11-1.png", "18-FigureA.12-1.png", "19-FigureA.13-1.png", "19-FigureA.14-1.png", "20-TableA.13-1.png", "20-TableA.14-1.png" ] }
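The RNN referenced in Figure 3 above learns its own word embedding while regressing the load under a quadratic loss. Its exact structure (including the dropout and batch normalization mentioned in the caption) is not reproduced in this entry, so the sketch below is only a minimal Keras stand-in: an embedding layer, a single LSTM and a scalar output trained with mean squared error, with all sizes chosen arbitrarily.

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 300   # e.g. the 300 most relevant words kept for the embedding study
EMBED_DIM = 64     # embedding dimension (placeholder value)
MAX_LEN = 200      # maximum report length in tokens (placeholder value)

# Minimal RNN regressor: learned word embedding -> LSTM -> scalar load forecast,
# trained with a quadratic (mean squared error) loss as in the paper.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data: integer-encoded reports and the corresponding daily loads.
x = np.random.randint(1, VOCAB_SIZE, size=(128, MAX_LEN))
y = np.random.uniform(40_000, 90_000, size=(128,))
model.fit(x, y, epochs=2, batch_size=32, verbose=0)

# The learned embedding matrix analysed above can then be read off the first layer.
embedding_matrix = model.layers[0].get_weights()[0]   # shape (VOCAB_SIZE, EMBED_DIM)
```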
1911.12569
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
In this paper, we propose a two-layered multi-task attention-based neural network that performs sentiment analysis through emotion analysis. The proposed approach is based on a Bidirectional Long Short-Term Memory network and uses a Distributional Thesaurus as a source of external knowledge to improve the sentiment and emotion prediction. The proposed system has two levels of attention to hierarchically build a meaningful representation. We evaluate our system on the benchmark dataset of SemEval 2016 Task 6 and also compare it with the state-of-the-art systems on the Stance Sentiment Emotion Corpus. Experimental results show that the proposed system improves the performance of sentiment analysis by 3.2 F-score points on the SemEval 2016 Task 6 dataset. Our network also boosts the performance of emotion analysis by 5 F-score points on the Stance Sentiment Emotion Corpus.
{ "section_name": [ "Introduction", "Related Work", "Proposed Methodology", "Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: BiLSTM based word encoder", "Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Word Attention", "Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Sentence Attention", "Proposed Methodology ::: Two-Layered Multi-Task Attention Model ::: Final Output", "Proposed Methodology ::: Distributional Thesaurus", "Proposed Methodology ::: Word Embeddings", "Datasets, Experiments and Analysis", "Datasets, Experiments and Analysis ::: Datasets", "Datasets, Experiments and Analysis ::: Preprocessing", "Datasets, Experiments and Analysis ::: Implementation Details", "Datasets, Experiments and Analysis ::: Results and Analysis", "Datasets, Experiments and Analysis ::: Error Analysis", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "The emergence of social media sites with limited character constraint has ushered in a new style of communication. Twitter users within 280 characters per tweet share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thought and opinion. This makes them an interesting and important area of study. Tweets are not only important for an individual but also for the companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. The public opinion is particularly interesting for political parties as it gives them an idea of voter's inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1.", "Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiments (positive, negative, neutral). Emotion analysis focuses on the extraction of predefined emotion from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment.", "Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. 
Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis.", "In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7.", "The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which builds a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single task system of sentiment analysis." ], [ "A survey of related literature reveals the use of both classical and deep-learning approaches for sentiment and emotion analysis. The system proposed in BIBREF8 relied on supervised statistical text classification which leveraged a variety of surface form, semantic, and sentiment features for short informal texts. A Support Vector Machine (SVM) based system for sentiment analysis was used in BIBREF9, whereas an ensemble of four different sub-systems for sentiment analysis was proposed in BIBREF10. It comprised of Long Short-Term Memory (LSTM) BIBREF11, Gated Recurrent Unit (GRU) BIBREF12, Convolutional Neural Network (CNN) BIBREF13 and Support Vector Regression (SVR) BIBREF14. BIBREF15 reported the results for emotion analysis using SVR, LSTM, CNN and Bi-directional LSTM (Bi-LSTM) BIBREF16. BIBREF17 proposed a lexicon based feature extraction for emotion text classification. A rule-based approach was adopted by BIBREF18 to extract emotion-specific semantics. BIBREF19 used a high-order Hidden Markov Model (HMM) for emotion detection. BIBREF20 explored deep learning techniques for end-to-end trainable emotion recognition. BIBREF21 proposed a multi-task learning model for fine-grained sentiment analysis. They used ternary sentiment classification (negative, neutral, positive) as an auxiliary task for fine-grained sentiment analysis (very-negative, negative, neutral, positive, very-positive). A CNN based system was proposed by BIBREF22 for three phase joint multi-task training. BIBREF23 presented a multi-task learning based model for joint sentiment analysis and semantic embedding learning tasks. BIBREF24 proposed a multi-task setting for emotion analysis based on a vector-valued Gaussian Process (GP) approach known as coregionalisation BIBREF25. A hierarchical document classification system based on sentence and document representation was proposed by BIBREF26. 
An attention framework for sentiment regression is described in BIBREF27. BIBREF28 proposed a DeepEmoji system based on transfer learning for sentiment, emotion and sarcasm detection through emoji prediction. However, the DeepEmoji system treats these independently, one at a time.", "Our proposed system differs from the above works in the sense that none of these works addresses the problem of sentiment and emotion analysis concurrently. Our empirical analysis shows that performance of sentiment analysis is boosted significantly when this is jointly performed with emotion analysis. This may be because of the fine-grained characteristics of emotion analysis that provides useful evidences for sentiment analysis." ], [ "We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections." ], [ "Recurrent Neural Networks (RNN) are a class of networks which take sequential input and computes a hidden state vector for each time step. The current hidden state vector depends on the current input and the previous hidden state vector. This makes them good for handling sequential data. However, they suffer from a vanishing or exploding gradient problem when presented with long sequences. The gradient for back-propagating error either reduces to a very small number or increases to a very high value which hinders the learning process. Long Short Term Memory (LSTM) BIBREF11, a variant of RNN solves this problem by the gating mechanisms. The input, forget and output gates control the information flow.", "BiLSTM is a special type of LSTM which takes into account the output of two LSTMs - one working in the forward direction and one working in the backward direction. The presence of contextual information for both past and future helps the BiLSTM to make an informed decision. The concatenation of a hidden state vectors $\\overrightarrow{h_t}$ of the forward LSTM and $\\overleftarrow{h_t}$ of the backward LSTM at any time step t provides the complete information. Therefore, the output of the BiLSTM at any time step t is $h_t$ = [$\\overrightarrow{h_t}$, $\\overleftarrow{h_t}$]. The output of the BiLSTM is shared between the main task (Sentiment Analysis) and the auxiliary task (Emotion Analysis)." ], [ "The word level attention (primary attention) mechanism gives the model a flexibility to represent each word for each task differently. 
This improves the word representation as the model chooses the best representation for each word for each task. A Distributional Thesaurus (DT) identifies words that are semantically similar, based on whether they tend to occur in a similar context. It provides a word expansion list for words based on their contextual similarity. We use the top-4 words for each word as their candidate terms. We only use the top-4 words for each word as we observed that the expansion list with more words started to contain the antonyms of the current word which empirically reduced the system performance. Word embeddings of these four candidate terms and the hidden state vector $h_t$ of the input word are fed to the primary attention mechanism. The primary attention mechanism finds the best attention coefficient for each candidate term. At each time step $t$ we get V($x_t$) candidate terms for each input $x_t$ with $v_i$ being the embedding for each term (Distributional Thesaurus and word embeddings are described in the next section). The primary attention mechanism assigns an attention coefficient to each of the candidate terms having the index $i$ $\\in $ V($x_t$):", "where $W_w$ and $b_{w}$ are jointly learned parameters.", "Each embedding of the candidate term is weighted with the attention score $\\alpha _{ti}$ and then summed up. This produces $m_{t}$, the representation for the current input $x_{t}$ obtained from the Distributional Thesaurus using the candidate terms.", "Finally, $m_{t}$ and $h_{t}$ are concatenated to get $\\widehat{h_{t}}$, the final output of the primary attention mechanism." ], [ "The sentence attention (secondary attention) part focuses on each word of the sentence and assigns the attention coefficients. The attention coefficients are assigned on the basis of words' importance and their contextual relevance. This helps the model to build the overall sentence representation by capturing the context while weighing different word representations individually. The final sentence representation is obtained by multiplying each word vector representation with their attention coefficient and summing them over. The attention coefficient $\\alpha _t$ for each word vector representation and the sentence representation $\\widehat{H}$ are calculated as:", "where $W_s$ and $b_{s}$ are parameters to be learned.", "$\\widehat{H}$ denotes the sentence representation for sentiment analysis. Similarly, we calculate $\\bar{H}$ which represents the sentence for emotion classification. The system has the flexibility to compute different representations for sentiment and emotion analysis both." ], [ "The final outputs for both sentiment and emotion analysis are computed by feeding $\\widehat{H}$ and $\\bar{H}$ to two different one-layer feed forward neural networks. For our task, the feed forward network for sentiment analysis has two output units, whereas the feed forward network for emotion analysis has eight output nodes performing multi-label classification." ], [ "Distributional Thesaurus (DT) BIBREF31 ranks words according to their semantic similarity. It is a resource which produces a list of words in decreasing order of their similarity for each word. We use the DT to expand each word of the sentence. The top-4 words serve as the candidate terms for each word. For example, the candidate terms for the word good are: great, nice awesome and superb. DT offers the primary attention mechanism external knowledge in the form of candidate terms. 
It assists the system to perform better when presented with unseen words during testing as the unseen words could have been a part of the DT expansion list. For example, the system may not come across the word superb during training but it can appear in the test set. Since the system has already seen the word superb in the DT expansion list of the word good, it can handle this case efficiently. This fact is established by our evaluation results as the model performs better when the DT expansion and primary attentions are a part of the final multi-task system." ], [ "Word embeddings represent words in a low-dimensional numerical form. They are useful for solving many NLP problems. We use the pre-trained 300 dimensional Google Word2Vec BIBREF32 embeddings. The word embedding for each word in the sentence is fed to the BiLSTM network to get the current hidden state. Moreover, the primary attention mechanism is also applied to the word embeddings of the candidate terms for the current word." ], [ "In this section we present the details of the datasets used for the experiments, results that we obtain and the necessary analysis." ], [ "We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap between the unavailability of a corpus with sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 task 6 and SSEC which are used for sentiment and emotion analysis, respectively." ], [ "The SemEval 2016 task 6 corpus contains tweets from Twitter. Since the tweets are derived from an environment with the constraint on the number of characters, there is an inherent problem of word concatenation, contractions and use of hashtags. Example: #BeautifulDay, we've, etc. Usernames and URLs do not impart any sentiment and emotion information (e.g. @John). We use the Python package ekphrasis BIBREF33 for handling these situations. Ekphrasis helps to split the concatenated words into individual words and expand the contractions. For example, #BeautifulDay to # Beautiful Day and we've to we have. We replace usernames with $<$user$>$, number with $<number>$ and URLs with $<$url$>$ token." ], [ "We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. 
The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis." ], [ "We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.", "The primary attention mechanism plays a key role in the overall system as it improves the score of both sentiment and emotion analysis in both single task as well as multi-task systems. The use of primary attention improves the performance of single task systems for sentiment and emotion analysis by 2.21 and 1.72 points, respectively.Similarly, when sentiment and emotion analysis are jointly performed the primary attention mechanism improves the score by 0.93 and 2.42 points for sentiment and emotion task, respectively. To further measure the usefulness of the primary attention mechanism and the Distributional Thesaurus, we remove it from the systems S2, E2, and M2 to get the systems S1, E1, and M1. In all the cases, with the removal of primary attention mechanism, the performance drops. This is clearly illustrated in Figure FIGREF21. These observations indicate that the primary attention mechanism is an important component of the two-layered multi-task attention based network for sentiment analysis. We also perform t-test BIBREF40 for computing statistical significance of the obtained results from the final two-layered multi-task system M2 for sentiment analysis by calculating the p-values and observe that the performance gain over M1 is significant with p-value = 0.001495. Similarly, we perform the statistical significance test for each emotion class. The p-values for anger, anticipation, fear, disgust, joy, sadness, surprise and trust are 0.000002, 0.000143, 0.00403, 0.000015, 0.004607, 0.069, 0.000001 and 0.000001, respectively. These results provide a good indication of statistical significance.", "Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis.", "We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. 
Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise.", "Experimental results indicate that the multi-task system which uses fine-grained information of emotion analysis helps to boost the performance of sentiment analysis. The system M1 comprises of the system S1 performing the main task (sentiment analysis) with E1 undertaking the auxiliary task (emotion analysis). Similarly, the system M2 is made up of S2 and E2 where S2 performs the main task (sentiment analysis) and E2 commits to the auxiliary task (emotion analysis). We observe that in both the situations, the auxiliary task, i.e. emotional information increases the performance of the main task, i.e. sentiment analysis when these two are jointly performed. Experimental results help us to establish the fact that emotion analysis benefits sentiment analysis. The implicit sentiment attached to the emotion words assists the multi-task system. Emotion such as joy and trust are inherently associated with a positive sentiment whereas, anger, disgust, fear and sadness bear a negative sentiment. Figure FIGREF21 illustrates the performance of various models for sentiment analysis.", "As a concrete example which justifies the utility of emotion analysis in sentiment analysis is shown below.", "@realMessi he is a real sportsman and deserves to be the skipper.", "The gold labels for the example are anticipation, joy and trust emotion with a positive sentiment. Our system S2 (single task system for sentiment analysis with primary and secondary attention) had incorrectly labeled this example with a negative sentiment and the E2 system (single task system with both primary and secondary attention for emotion analysis) had tagged it with anticipation and joy only. However, M2 i.e. the multi-task system for joint sentiment and emotion analysis had correctly classified the sentiment as positive and assigned all the correct emotion tags. It predicted the trust emotion tag, in addition to anticipation and joy (which were predicted earlier by E2). This helped M2 to correctly identify the positive sentiment of the example. The presence of emotional information helped the system to alter its sentiment decision (negative by S2) as it had better understanding of the text.", "A sentiment directly does not invoke a particular emotion always and a sentiment can be associated with more than one emotion. However, emotions like joy and trust are associated with positive sentiment mostly whereas, anger, disgust and sadness are associated with negative sentiment particularly. This might be the reason of the extra sentiment information not helping the multi-task system for emotion analysis and hence, a decreased performance for emotion analysis in the multi-task setting." ], [ "We perform quantitative error analysis for both sentiment and emotion for the M2 model. Table TABREF23 shows the confusion matrix for sentiment analysis. anger,anticipation,fear,disgust,joy,sadness,surprise,trust consist of the confusion matrices for anger, anticipation, fear, disgust, joy, sadness, surprise and trust. We observe from Table TABREF23 that the system fails to label many instances with the emotion surprise. This may be due to the reason that this particular class is the most underrepresented in the training set. 
A similar trend can also be observed for the emotion fear and trust in Table TABREF23 and Table TABREF23, respectively. These three emotions have the least share of training instances, making the system less confident towards these emotions.", "Moreover, we closely analyze the outputs to understand the kind of errors that our proposed model faces. We observe that the system faces difficulties at times and wrongly predicts the sentiment class in the following scenarios:", "$\\bullet $ Often real-world phrases/sentences have emotions of conflicting nature. These conflicting nature of emotions are directly not evident from the surface form and are left unsaid as these are implicitly understood by humans. The system gets confused when presented with such instances.", "Text: When you become a father you realize that you are not the most important person in the room anymore... Your child is!", "Actual Sentiment: positive", "Actual Emotion: anticipation, joy, surprise, trust", "Predicted Sentiment: negative", "Predicted Emotion: anger, anticipation, sadness", "The realization of not being the most important person in a room invokes anger, anticipation and sadness emotions, and a negative sentiment. However, it is a natural feeling of overwhelmingly positive sentiment when you understand that your own child is the most significant part of your life.", "$\\bullet $ Occasionally, the system focuses on the less significant part of the sentences. Due to this the system might miss crucial information which can influence and even change the final sentiment or emotion. This sometimes lead to the incorrect prediction of the overall sentiment and emotion.", "Text: I've been called many things, quitter is not one of them...", "Actual Sentiment: positive", "Actual Emotion: anticipation, joy, trust", "Predicted Sentiment: negative", "Predicted Emotion: anticipation, sadness", "Here, the system focuses on the first part of the sentence where the speaker was called many things which denotes a negative sentiment. Hence, the system predicts a negative sentiment and, anticipation and sadness emotions. However, the speaker in the second part uplifts the overall tone by justifying that s/he has never been called a quitter. This changes the negative sentiment to a positive sentiment and the overall emotion." ], [ "In this paper, we have presented a novel two-layered multi-task attention based neural network which performs sentiment analysis through emotion analysis. The primary attention mechanism of the two-layered multi-task system relies on Distributional Thesaurus which acts as a source of external knowledge. The system hierarchically builds the final representation from the word level to the sentence level. This provides a working insight to the system and its ability to handle the unseen words. Evaluation on the benchmark dataset suggests an improvement of 3.2 F-score point for sentiment analysis and an overall performance boost of 5 F-score points for emotion analysis over the existing state-of-the-art systems. The system empirically establishes the fact that emotion analysis is both useful and relevant to sentiment analysis. The proposed system does not rely on any language dependent features or lexicons. This makes it extensible to other languages as well. In future, we would like to extend the two-layered multi-task attention based neural network to other languages." 
], [ "Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia)." ] ] }
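The two attention levels of the model described above are given only at the equation level in this entry: the exact scoring functions behind $W_w$, $b_w$, $W_s$ and $b_s$ are not reproduced. The sketch below therefore assumes a common additive (tanh plus dot product) form for both the primary attention over the top-4 Distributional Thesaurus candidate terms and the secondary attention over the per-word representations; all shapes and parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Placeholder sizes: T tokens, K=4 DT candidate terms per token,
# d_h = BiLSTM state size (forward + backward concatenated), d_e = word-embedding size.
T, K, d_h, d_e = 5, 4, 8, 6

H = rng.normal(size=(T, d_h))      # BiLSTM outputs h_t (shared across both tasks)
V = rng.normal(size=(T, K, d_e))   # embeddings of the top-4 DT candidate terms per token

# Hypothetical attention parameters; the paper only states that W_w, b_w, W_s, b_s are
# jointly learned, so a simple additive scoring form is assumed here.
W_w = rng.normal(size=(d_h + d_e,)); b_w = 0.0
W_s = rng.normal(size=(d_h + d_e,)); b_s = 0.0

# Primary (word-level) attention over the DT candidate terms of each token.
H_hat = np.zeros((T, d_h + d_e))
for t in range(T):
    scores = np.array([np.tanh(np.concatenate([H[t], V[t, i]])) @ W_w + b_w
                       for i in range(K)])
    alpha = softmax(scores)                 # attention over the K candidate terms
    m_t = alpha @ V[t]                      # DT-based representation m_t of token t
    H_hat[t] = np.concatenate([m_t, H[t]])  # final per-word representation

# Secondary (sentence-level) attention over the per-word representations.
scores = np.tanh(H_hat) @ W_s + b_s
alpha = softmax(scores)
sentence_repr = alpha @ H_hat               # fed to the task-specific feed-forward head
```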
{ "question": [ "What was their result on Stance Sentiment Emotion Corpus?", "What performance did they obtain on the SemEval dataset?", "What are the state-of-the-art systems?", "How is multi-tasking performed?", "What are the datasets used for training?", "How many parameters does the model have?", "What is the previous state-of-the-art model?", "What is the previous state-of-the-art performance?" ], "question_id": [ "3e839783d8a4f2fe50ece4a9b476546f0842b193", "2869d19e54fb554fcf1d6888e526135803bb7d75", "894c086a2cbfe64aa094c1edabbb1932a3d7c38a", "722e9b6f55971b4c48a60f7a9fe37372f5bf3742", "9c2f306044b3d1b3b7fdd05d1c046e887796dd7a", "3d99bc8ab2f36d4742e408f211bec154bc6696f7", "9219eef636ddb020b9d394868959325562410f83", "ff83eea2df9976c1a01482818340871b17ad4f8c" ], "nlp_background": [ "", "", "", "five", "five", "five", "five", "five" ], "topic_background": [ "", "", "", "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "", "", "", "no", "no", "no", "no", "no" ], "search_query": [ "sentiment", "sentiment", "sentiment", "Sentiment Analysis", "Sentiment Analysis", "Sentiment Analysis", "Sentiment Analysis", "Sentiment Analysis" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "F1 score of 66.66%", "evidence": [ "FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET.", "We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.", "We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis." 
], "highlighted_evidence": [ "FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET.", "We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.", "F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. " ] } ], "annotation_id": [ "d06db6cb47479b16310c2b411473e15f7bf6a92d" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "F1 score of 82.10%", "evidence": [ "We implement our model in Python using Tensorflow on a single GPU. We experiment with six different BiLSTM based architectures. The three architectures correspond to BiLSTM based systems without primary attention i.e. only with secondary attention for sentiment analysis (S1), emotion analysis (E1) and the multi-task system (M1) for joint sentiment and emotion analysis. The remaining three architectures correspond to the systems for sentiment analysis (S2), emotion analysis (E2) and multi-task system (M2), with both primary and secondary attention. The weight matrices were initialized randomly using numbers form a truncated normal distribution. The batch size was 64 and the dropout BIBREF34 was 0.6 with the Adam optimizer BIBREF35. The hidden state vectors of both the forward and backward LSTM were 300-dimensional, whereas the context vector was 150-dimensional. Relu BIBREF36 was used as the activation for the hidden layers, whereas in the output layer we used sigmoid as the activation function. Sigmoid cross-entropy was used as the loss function. F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. Therefore, we report the F1-score for sentiment and precision, recall and F1-score for emotion analysis.", "We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.", "FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET." ], "highlighted_evidence": [ "F1-score was reported for the sentiment analysis BIBREF7 and precision, recall and F1-score were used as the evaluation metric for emotion analysis BIBREF15. ", "We compare the performance of our proposed system with the state-of-the-art systems of SemEval 2016 Task 6 and the systems of BIBREF15. Experimental results show that the proposed system improves the existing state-of-the-art systems for sentiment and emotion analysis. We summarize the results of evaluation in Table TABREF18.", "FLOAT SELECTED: TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET." 
] } ], "annotation_id": [ "7f3ef3b4b9425404afc5b0f0614299cc2fda258f" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "For sentiment analysis UWB, INF-UFRGS-OPINION-MINING, LitisMind, pkudblab and SVM + n-grams + sentiment and for emotion analysis MaxEnt, SVM, LSTM, BiLSTM and CNN", "evidence": [ "FLOAT SELECTED: TABLE III COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS OF SEMEVAL 2016 TASK 6 ON SENTIMENT DATASET.", "FLOAT SELECTED: TABLE IV COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS PROPOSED BY [16] ON EMOTION DATASET. THE METRICS P, R AND F STAND FOR PRECISION, RECALL AND F1-SCORE.", "Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis.", "We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise." ], "highlighted_evidence": [ "FLOAT SELECTED: TABLE III COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS OF SEMEVAL 2016 TASK 6 ON SENTIMENT DATASET.", "FLOAT SELECTED: TABLE IV COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS PROPOSED BY [16] ON EMOTION DATASET. THE METRICS P, R AND F STAND FOR PRECISION, RECALL AND F1-SCORE.", "Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset.", "We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22" ] } ], "annotation_id": [ "08a5920d677c3b68fa489891947176aabc8aea5b" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks.", "Each of the shared representations is then fed to the primary attention mechanism" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We propose a novel two-layered multi-task attention based neural network for sentiment analysis where emotion analysis is utilized to improve its efficiency. Figure FIGREF1 illustrates the overall architecture of the proposed multi-task system. 
The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections." ], "highlighted_evidence": [ "The proposed system consists of a Bi-directional Long Short-Term Memory (BiLSTM) BIBREF16, a two-level attention mechanism BIBREF29, BIBREF30 and a shared representation for emotion and sentiment analysis tasks. The BiLSTM encodes the word representation of each word. This representation is shared between the subsystems of sentiment and emotion analysis. Each of the shared representations is then fed to the primary attention mechanism of both the subsystems. The primary attention mechanism finds the best representation for each word for each task. The secondary attention mechanism acts on top of the primary attention to extract the best sentence representation by focusing on the suitable context for each task. Finally, the representations of both the tasks are fed to two different feed-forward neural networks to produce two outputs - one for sentiment analysis and one for emotion analysis. Each component is explained in the subsequent subsections." ] } ], "annotation_id": [ "da6e104da9b4c6afc83c4800d11568afe8d568d7" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "SemEval 2016 Task 6 BIBREF7", "Stance Sentiment Emotion Corpus (SSEC) BIBREF15" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15. The SSEC corpus is an annotation of the SemEval 2016 Task 6 corpus with emotion labels. The re-annotation of the SemEval 2016 Task 6 corpus helps to bridge the gap between the unavailability of a corpus with sentiment and emotion labels. The SemEval 2016 corpus contains tweets which are classified into positive, negative or other. It contains 2,914 training and 1,956 test instances. The SSEC corpus is annotated with anger, anticipation, disgust, fear, joy, sadness, surprise and trust labels. Each tweet could belong to one or more emotion classes and one sentiment class. Table TABREF15 shows the data statistics of SemEval 2016 task 6 and SSEC which are used for sentiment and emotion analysis, respectively." ], "highlighted_evidence": [ "We evaluate our proposed approach for joint sentiment and emotion analysis on the benchmark dataset of SemEval 2016 Task 6 BIBREF7 and Stance Sentiment Emotion Corpus (SSEC) BIBREF15." 
] } ], "annotation_id": [ "463da0e392644787be01b0c603d433f5d3e32098" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "1e78ce2b71204f6727220e406bbcd71811faca2a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BIBREF7", "BIBREF39", "BIBREF37", "LitisMind", "Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features. Our system comfortably surpasses the existing best system at SemEval. Our system manages to improve the existing best system of SemEval 2016 task 6 by 3.2 F-score points for sentiment analysis.", "We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15. Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. The findings of BIBREF15 also support this behavior (i.e. worst result for the surprise class). This could be attributed to the data scarcity and a very low agreement between the annotators for the emotion surprise." ], "highlighted_evidence": [ "Table TABREF19 shows the comparison of our proposed system with the existing state-of-the-art system of SemEval 2016 Task 6 for the sentiment dataset. BIBREF7 used feature-based SVM, BIBREF39 used keyword rules, LitisMind relied on hashtag rules on external data, BIBREF38 utilized a combination of sentiment classifiers and rules, whereas BIBREF37 used a maximum entropy classifier with domain-specific features.", "We also compare our system with the state-of-the-art systems proposed by BIBREF15 on the emotion dataset. The comparison is demonstrated in Table TABREF22. Maximum entropy, SVM, LSTM, Bi-LSTM, and CNN were the five individual systems used by BIBREF15." ] } ], "annotation_id": [ "403bf3135ace52b79ffbabe0d50d4cd367b61838" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "e21f12751aa4c12d358cec2f742eec769c765999" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
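The answers above quote how multi-tasking is performed: a shared BiLSTM, task-specific attention, and two feed-forward heads (two sigmoid units for sentiment, eight for multi-label emotion) trained with sigmoid cross-entropy. A simplified Keras sketch of that layout is given below. It keeps the hyperparameters stated in the entry (300-dimensional LSTM states, 300-dimensional embeddings, dropout 0.6, Adam) but omits the DT-based primary attention, substitutes a generic additive attention for pooling, and treats the vocabulary size and sequence length as placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20_000, 300, 50   # vocab size and length are placeholders

def attention_pool(states, name):
    """Simple additive attention pooling (stand-in for the paper's secondary attention)."""
    scores = layers.Dense(1, activation="tanh")(states)          # (batch, T, 1)
    weights = layers.Softmax(axis=1, name=f"{name}_attn")(scores)  # attention over time steps
    pooled = layers.Dot(axes=1)([weights, states])                # (batch, 1, 2*units)
    return layers.Flatten()(pooled)                               # (batch, 2*units)

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens)   # pre-trained Word2Vec in the paper
shared = layers.Bidirectional(layers.LSTM(300, return_sequences=True))(x)
shared = layers.Dropout(0.6)(shared)

# Task-specific attention pooling on top of the shared BiLSTM representation.
sent_repr = attention_pool(shared, "sentiment")
emo_repr = attention_pool(shared, "emotion")

# Two one-layer feed-forward heads: 2 sigmoid units for sentiment, 8 for multi-label emotion.
sentiment = layers.Dense(2, activation="sigmoid", name="sentiment")(sent_repr)
emotion = layers.Dense(8, activation="sigmoid", name="emotion")(emo_repr)

model = tf.keras.Model(tokens, [sentiment, emotion])
model.compile(optimizer="adam",
              loss={"sentiment": "binary_crossentropy", "emotion": "binary_crossentropy"})
model.summary()
```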
{ "caption": [ "Fig. 1. Two-layered multi-task attention based network", "TABLE I DATASET STATISTICS OF SEMEVAL 2016 TASK 6 AND SSEC USED FOR SENTIMENT AND EMOTION ANALYSIS, RESPECTIVELY.", "TABLE II F-SCORE OF VARIOUS MODELS ON SENTIMENT AND EMOTION TEST DATASET.", "TABLE III COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS OF SEMEVAL 2016 TASK 6 ON SENTIMENT DATASET.", "Fig. 2. Comparison of various models (S1, S2, M1, M2) w.r.t different hidden state vector sizes of BiLSTM for sentiment analysis. Y-axis denotes the Fscores.", "TABLE IV COMPARISON WITH THE STATE-OF-THE-ART SYSTEMS PROPOSED BY [16] ON EMOTION DATASET. THE METRICS P, R AND F STAND FOR PRECISION, RECALL AND F1-SCORE.", "TABLE XI CONFUSION MATRIX FOR sadness" ], "file": [ "3-Figure1-1.png", "5-TableI-1.png", "5-TableII-1.png", "5-TableIII-1.png", "5-Figure2-1.png", "6-TableIV-1.png", "7-TableXI-1.png" ] }
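The preprocessing described in the entry above normalizes Twitter-specific tokens before feeding the model: usernames become <user>, numbers become <number> and URLs become <url>, while hashtag segmentation and contraction expansion are handled by the ekphrasis package. The snippet below is a simplified regex stand-in for the token replacement step only; it does not reimplement ekphrasis.

```python
import re

def preprocess_tweet(text: str) -> str:
    """Simplified stand-in for the described preprocessing: URLs -> <url>,
    usernames -> <user>, numbers -> <number>. Hashtag segmentation and
    contraction expansion (done with ekphrasis in the paper) are not covered."""
    text = re.sub(r"https?://\S+|www\.\S+", "<url>", text)
    text = re.sub(r"@\w+", "<user>", text)
    text = re.sub(r"\b\d+(?:[.,]\d+)?\b", "<number>", text)
    return text.strip()

print(preprocess_tweet("@John what a #BeautifulDay, 25 degrees! https://t.co/xyz"))
# -> "<user> what a #BeautifulDay, <number> degrees! <url>"
```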
1910.01363
Mapping (Dis-)Information Flow about the MH17 Plane Crash
Digital media enables not only fast sharing of information, but also disinformation. One prominent case of an event leading to circulation of disinformation on social media is the MH17 plane crash. Studies analysing the spread of information about this event on Twitter have focused on small, manually annotated datasets, or used proxies for data annotation. In this work, we examine to what extent text classifiers can be used to label data for subsequent content analysis; in particular, we focus on predicting pro-Russian and pro-Ukrainian Twitter content related to the MH17 plane crash. Even though we find that a neural classifier improves over a hashtag-based baseline, labeling pro-Russian and pro-Ukrainian content with high precision remains a challenging problem. We provide an error analysis underlining the difficulty of the task and identify factors that might help improve classification in future work. Finally, we show how the classifier can facilitate the annotation task for human annotators.
{ "section_name": [ "Introduction", "Introduction ::: MH17 Related (Dis-)Information Flow on Twitter", "Introduction ::: Contributions", "Competing Narratives about the MH17 Crash", "Dataset", "Classification Models", "Classification Models ::: Hashtag-Based Baseline", "Classification Models ::: Logistic Regression Classifier", "Classification Models ::: Convolutional Neural Network Classifier", "Experimental Setup", "Experimental Setup ::: Tweet Preprocessing", "Experimental Setup ::: Evaluation Metrics", "Results", "Results ::: Comparison Between Models", "Results ::: Per-Class Performance", "Data Augmentation Experiments using Cross-Lingual Transfer", "Error Analysis", "Error Analysis ::: Category I Errors", "Error Analysis ::: Category II Errors", "Error Analysis ::: Category III Errors", "Integrating Automatic Predictions into the Retweet Network", "Integrating Automatic Predictions into the Retweet Network ::: Predicting Polarized Edges", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Digital media enables fast sharing of information, including various forms of false or deceptive information. Hence, besides bringing the obvious advantage of broadening information access for everyone, digital media can also be misused for campaigns that spread disinformation about specific events, or campaigns that are targeted at specific individuals or governments. Disinformation, in this case, refers to intentionally misleading content BIBREF0. A prominent case of a disinformation campaign are the efforts of the Russian government to control information during the Russia-Ukraine crisis BIBREF1. One of the most important events during the crisis was the crash of Malaysian Airlines (MH17) flight on July 17, 2014. The plane crashed on its way from Amsterdam to Kuala Lumpur over Ukrainian territory, causing the death of 298 civilians. The event immediately led to the circulation of competing narratives about who was responsible for the crash (see Section SECREF2), with the two most prominent narratives being that the plane was either shot down by the Ukrainian military, or by Russian separatists in Ukraine supported by the Russian government BIBREF2. The latter theory was confirmed by findings of an international investigation team. In this work, information that opposes these findings by promoting other theories about the crash is considered disinformation. When studying disinformation, however, it is important to acknowledge that our fact checkers (in this case the international investigation team) may be wrong, which is why we focus on both of the narratives in our study.", "MH17 is a highly important case in the context of international relations, because the tragedy has not only increased Western, political pressure against Russia, but may also continue putting the government's global image at stake. In 2020, at least four individuals connected to the Russian separatist movement will face murder charges for their involvement in the MH17 crash BIBREF3, which is why one can expect the waves of disinformation about MH17 to continue spreading. The purpose of this work is to develop an approach that may help both practitioners and scholars of political science, international relations and political communication to detect and measure the scope of MH17-related disinformation.", "Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. 
These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prohibits analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from uncredible sources as misleading (e.g. misinformation, disinformation or fake news). This method enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. However, the approach fails to address an important issue: Not all content from uncredible sources is necessarily misleading or false and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11.", "In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter." ], [ "We focus our classification efforts on a Twitter dataset introduced in BIBREF4, which was collected to investigate the flow of MH17-related information on Twitter, focusing on the question of who is distributing (dis-)information. In their analysis, the authors found that citizens are active distributors, which contradicts the widely adopted view that the information campaign is only driven by the state and that citizens do not have an active role.", "To arrive at this conclusion, the authors manually labeled a subset of the tweets in the dataset with pro-Russian/pro-Ukrainian frames and built a retweet network, which has Twitter users as nodes and edges between two nodes if a retweet occurred between the two associated users. An edge was considered as polarized (either pro-Russian or pro-Ukrainian), if at least one retweet between the two users connected by the edge was pro-Russian/pro-Ukrainian. Then, the amount of polarized edges between users with different profiles (e.g. citizen, journalist, state organ) was computed.", "Labeling more data via automatic classification (or computer-assisted annotation) of tweets could serve an analysis such as the one presented in BIBREF4 in two ways. First, more edges could be labeled. Second, edges could be labeled with higher precision, i.e. by taking more tweets comprised by the edge into account.
For example, one could decide to only label an edge as polarized if at least half of the retweets between the users were pro-Ukrainian/pro-Russian." ], [ "We evaluate different classifiers that predict frames for unlabeled tweets in BIBREF4's dataset, in order to increase the number of polarized edges in the retweet network derived from the data. This is challenging due to a skewed data distribution and the small amount of training data for the pro-Russian class. We try to combat the data sparsity using a data augmentation approach, but have to report a negative result as we find that data augmentation in this particular case does not improve classification results. While our best neural classifier clearly outperforms a hashtag-based baseline, generating high quality predictions for the pro-Russian class is difficult: In order to make predictions at a precision level of 80%, recall has to be decreased to 23%. Finally, we examine the applicability of the classifier for finding new polarized edges in a retweet network and show how, with manual filtering, the number of pro-Russian edges can be increased by 29%. We make our code, trained models and predictions publicly available." ], [ "We briefly summarize the timeline around the crash of MH17 and some of the dominant narratives present in the dataset. On July 17, 2014, the MH17 flight crashed over Donetsk Oblast in Ukraine. The region was at that time part of an armed conflict between pro-Russian separatists and the Ukrainian military, one of the unrests following the Ukrainian revolution and the annexation of Crimea by the Russian government. The territory in which the plane fell down was controlled by pro-Russian separatists.", "Right after the crash, two main narratives were propagated: Western media claimed that the plane was shot down by pro-Russian separatists, whereas the Russian government claimed that the Ukrainian military was responsible. Two organisations were tasked with investigating the causes of the crash, the Dutch Safety Board (DSB) and the Dutch-led joint investigation team (JIT). Their final reports were released in October 2015 and September 2016, respectively, and conclude that the plane had been shot down by a missile launched by a BUK surface-to-air system. The BUK was stationed in an area controlled by pro-Russian separatists when the missile was launched, and had been transported there from Russia and returned to Russia after the incident. These findings are denied by the Russian government until now. There are several other crash-related reports that are frequently mentioned throughout the dataset. One is a report by Almaz-Antey, the Russian company that manufactured the BUK, which rejects the DSB findings based on mismatch of technical evidence. Several reports backing up the Dutch findings were released by the investigative journalism website Bellingcat.", "The crash also sparked the circulation of several alternative theories, many of them promoted in Russian media BIBREF2, e.g. that the plane was downed by Ukrainian SU25 military jets, that the plane attack was meant to hit Putin’s plane that was allegedly traveling the same route earlier that day, and that the bodies found in the plane had already been dead before the crash." ], [ "For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. 
It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016.", "BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia or any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility for the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total amount of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider as duplicate a tweet with text that is identical to an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates." ], [ "For our classification experiments, we compare three classifiers: a hashtag-based baseline, a logistic regression classifier and a convolutional neural network (CNN)." ], [ "Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as $pmi(hs, c) = \log \frac{P(hs, c)}{P(hs)\,P(c)}$.", "We then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet. Tweets without hashtag (5% of the tweets in the development set) or with multiple hashtags leading to conflicting predictions (5% of the tweets in the development set) are labeled randomly. We refer to this baseline as hs_pmi." ], [ "As non-neural baseline we use a logistic regression model. We compute input representations for tweets as the average over pre-trained word embedding vectors for all words in the tweet. We use fasttext embeddings BIBREF28 that were pre-trained on Wikipedia." ], [ "As neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer and a relu activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4." ], [ "We evaluate the classification models using 10-fold cross validation, i.e.
we produce 10 different data splits by randomly sampling 60% of the data for training, 20% for development and 20% for testing. For each fold, we train each of the models described in Section SECREF4 on the training set and measure performance on the test set. For the CNN and LogReg models, we upsample the training examples such that each class has as many instances as the largest class (Neutral). The final reported scores are averages over the 10 splits." ], [ "Before embedding the tweets, we replace URLs, retweet syntax (RT @user_name: ) and @mentions (@user_name) by placeholders. We lowercase all text and tokenize sentences using the StanfordNLP pipeline BIBREF31. If a tweet contains multiple sentences, these are concatenated. Finally, we remove all tokens that contain non-alphanumeric symbols (except for dashes and hashtags) and strip the hashtags from each token, in order to increase the number of words that are represented by a pre-trained word embedding." ], [ "We report performance as F1-scores, where the F1-score is the harmonic mean of precision and recall. As the class distribution is highly skewed and we are mainly interested in accurately classifying the classes with low support (pro-Russian and pro-Ukrainian), we report macro-averages over the classes. In addition to F1-scores, we report the area under the precision-recall curve (AUC). We compute an AUC score for each class by converting the classification task into a one-vs-all classification task." ], [ "The results of our classification experiments are presented in Table TABREF21. Figure FIGREF22 shows the per-class precision-recall curves for the LogReg and CNN models as well as the confusion matrices between classes." ], [ "We observe that the hashtag baseline performs poorly and does not improve over the random baseline. The CNN classifier outperforms the baselines as well as the LogReg model. It shows the highest improvement over the LogReg for the pro-Russian class. Looking at the confusion matrices, we observe that for the LogReg model, the fraction of True Positives is equal between the pro-Russian and the pro-Ukrainian class. The CNN model produces a higher number of correct predictions for the pro-Ukrainian than for the pro-Russian class. The absolute number of pro-Russian True Positives is lower for the CNN, but so, in return, is the number of misclassifications between the pro-Russian and pro-Ukrainian class." ], [ "With respect to the per-class performance, we observe a similar trend across models, which is that the models perform best for the neutral class, whereas performance is lower for the pro-Ukrainian and pro-Russian classes. All models perform worst on the pro-Russian class, which might be due to the fact that it is the class with the fewest instances in the dataset.", "Considering these results, we conclude that the CNN is the best performing model and also the classifier that best serves our goals, as we want to produce accurate predictions for the pro-Russian and pro-Ukrainian class without confusing them. Even though the CNN can improve over the other models, the classification performance for the pro-Russian and pro-Ukrainian class is rather low. One obvious reason for this might be the small amount of training data, in particular for the pro-Russian class.", "In the following, we briefly report a negative result on an attempt to combat the data sparseness with cross-lingual transfer. We then perform an error analysis on the CNN classifications to shed light on the difficulties of the task."
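To make the classification and evaluation setup above concrete, here is a minimal sketch, assuming tf.keras and scikit-learn as tooling; the function names, the use of global max pooling, and the training details are illustrative assumptions rather than the authors' released code, while the hyperparameters (100 filters of size 4, relu activation, one max pooling layer, macro-averaged F1 and one-vs-all precision-recall AUC) follow the text.

```python
# Illustrative sketch only (assumed tooling and names, not the paper's released code).
from tensorflow.keras import layers, models, initializers
from sklearn.metrics import f1_score, precision_recall_curve, auc

def build_cnn(vocab_size, embedding_matrix, num_classes=3):
    """CNN tweet classifier: pre-trained embeddings -> Conv1D (100 filters, width 4, relu)
    -> max pooling -> softmax over {pro-Russian, pro-Ukrainian, neutral}."""
    model = models.Sequential([
        layers.Embedding(vocab_size, embedding_matrix.shape[1],
                         embeddings_initializer=initializers.Constant(embedding_matrix)),
        layers.Conv1D(filters=100, kernel_size=4, activation="relu"),
        layers.GlobalMaxPooling1D(),  # "one max pooling layer"; global pooling is an assumption
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

def evaluate_fold(y_true, y_prob):
    """y_true: NumPy array of integer labels; y_prob: (n_tweets, n_classes) probabilities.
    Returns macro-averaged F1 and, per class, the one-vs-all area under the PR curve."""
    macro_f1 = f1_score(y_true, y_prob.argmax(axis=1), average="macro")
    pr_aucs = []
    for c in range(y_prob.shape[1]):
        precision, recall, _ = precision_recall_curve((y_true == c).astype(int), y_prob[:, c])
        pr_aucs.append(auc(recall, precision))
    return macro_f1, pr_aucs
```

Within each of the 10 folds, the training portion would additionally be upsampled so that every class reaches the size of the largest (Neutral) class before fitting, as described above.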
], [ "The annotations in the MH17 dataset are highly imbalanced, with as few as 512 annotated examples for the pro-Russian class. As the annotated examples were sampled from the dataset at random, we assume that there are only few tweets with pro-Russian stance in the dataset. This observation is in line with studies that showed that the amount of disinformation on Twitter is in fact small BIBREF6, BIBREF8. In order to find more pro-Russian training examples, we turn to a resource that we expect to contain large amounts of pro-Russian (dis)information. The Elections integrity dataset was released by Twitter in 2018 and contains the tweets and account information for 3,841 accounts that are believed to be Russian trolls financed by the Russian government. While most tweets posted after late 2014 are in English language and focus on topics around the US elections, the earlier tweets in the dataset are primarily in Russian language and focus on the Ukraine crisis BIBREF33. One feature of the dataset observed by BIBREF33 is that several hashtags show high peakedness BIBREF34, i.e. they are posted with high frequency but only during short intervals, while others are persistent during time.", "We find two hashtags in the Elections integrity dataset with high peakedness that were exclusively posted within 2 days after the MH17 crash and that seem to be pro-Russian in the context of responsibility for the MH17 crash: russian #КиевСкажиПравду (Kiew tell the truth) and russian #Киевсбилбоинг (Kiew made the plane go down). We collect all tweets with these two hashtags, resulting in 9,809 Russian tweets that we try to use as additional training data for the pro-Russian class in the MH17 dataset. We experiment with cross-lingual transfer by embedding tweets via aligned English and Russian word embeddings. However, so far results for the cross-lingual models do not improve over the CNN model trained on only English data. This might be due to the fact that the additional Russian tweets rather contain a general pro-Russian frame than specifically talking about the crash, but needs further investigation." ], [ "In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). 
We can identify three main cases for which the model produces an error:", "the correct class can be directly inferred from the text content easily, even without background knowledge", "the correct class can be inferred from the text content, given that event-specific knowledge is provided", "the correct class can be inferred from the text content if the text is interpreted correctly", "For the pro-Russian False Positives, we find that 42% of the errors are category I and II errors, respectively, and 15% of category III. For the pro-Ukrainian False Positives, we find 48% category I errors, 33% category II errors and 13% category III errors. Table TABREF28 presents examples for each of the error categories in both sets, which we will discuss in the following." ], [ "Category I errors could easily be classified by humans following the annotation guidelines (see Section SECREF3). One difficulty can be seen in example f). Even though no background knowledge is needed to interpret the content, interpretation is difficult because of the convoluted syntax of the tweet. For the other examples, it is unclear why the model would have difficulties with classifying them." ], [ "Category II errors can only be classified with event-specific background knowledge. Examples g), i) and k) relate to the theory that a Ukrainian SU25 fighter jet shot down the plane in the air. Correct interpretation of these tweets depends on knowledge about the SU25 fighter jet. In order to correctly interpret example j) as pro-Russian, it has to be known that the Bellingcat report is pro-Ukrainian. Example l) relates to the theory that the shoot-down was a false flag operation run by Western countries and the bodies in the plane were already dead before the crash. In order to correctly interpret example m), the identity of Kolomoisky has to be known. He is an anti-separatist Ukrainian billionaire, hence his involvement points to the Ukrainian government being responsible for the crash." ], [ "Category III errors occur for examples that can only be classified by correctly interpreting the tweet authors' intention. Interpretation is difficult due to phenomena such as irony, as in examples n) and o). While the irony is indicated in example n) through the use of the hashtag #LOL, there is no explicit indication in example o).", "Interpretation of example q) is conditioned on world knowledge as well as the understanding of the speaker's beliefs. Example r) is pro-Russian as it questions the validity of the assumption AC360 is making, but we only know that because we know that the assumption is absurd. Example s) requires one to evaluate that the speaker thinks people on site are trusted more than people at home.", "From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37." ], [ "Finally, we apply the CNN classifier to label new edges in BIBREF4's retweet network, which is shown in Figure FIGREF35. The retweet network is a graph that contains users as nodes and an edge between two users if the users are retweeting each other.
In order to track the flow of polarized information, BIBREF4 label an edge as polarized if at least one tweet contained in the edge was manually annotated as pro-Russian or pro-Ukrainian. While the network shows a clear polarization, only a small subset of the edges present in the network are labeled (see Table TABREF38).", "Automatic polarity prediction of tweets can help the analysis in two ways. Either, we can label a previously unlabeled edge, or we can verify/confirm the manual labeling of an edge, by labeling additional tweets that are comprised in the edge." ], [ "In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class)." ], [ "In this work, we investigated the usefulness of text classifiers to detect pro-Russian and pro-Ukrainian framing in tweets related to the MH17 crash, and to which extent classifier predictions can be relied on for producing high quality annotations. From our classification experiments, we conclude that the real-world applicability of text classifiers for labeling polarized tweets in a retweet network is restricted to pre-filtering tweets for manual annotation. However, if used as a filter, the classifier can significantly speed up the annotation process, making large-scale content analysis more feasible." ], [ "We thank the anonymous reviewers for their helpful comments. The research was carried out as part of the ‘Digital Disinformation’ project, which was directed by Rebecca Adler-Nissen and funded by the Carlsberg Foundation (project number CF16-0012)." ] ] }
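The threshold-selection step described above — choosing, per class, a probability threshold so that the classifier reaches a target precision such as 80% on held-out data — can be sketched as follows; the scikit-learn tooling and names such as dev_labels, dev_probs and PRO_RUSSIAN are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (assumed names; not the authors' released code).
from sklearn.metrics import precision_recall_curve

def threshold_for_target_precision(y_true_binary, class_probs, target_precision=0.80):
    """Smallest probability threshold whose one-vs-all precision reaches the target,
    together with the recall obtained there; (None, None) if the target is unreachable."""
    precision, recall, thresholds = precision_recall_curve(y_true_binary, class_probs)
    # precision/recall have one more entry than thresholds; drop the final (1.0, 0.0) point.
    for p, r, t in zip(precision[:-1], recall[:-1], thresholds):
        if p >= target_precision:
            return t, r
    return None, None

# Hypothetical usage: keep only pro-Russian predictions above the chosen threshold.
# thr, rec = threshold_for_target_precision(dev_labels == PRO_RUSSIAN,
#                                           dev_probs[:, PRO_RUSSIAN])
# confident_pro_russian = test_probs[:, PRO_RUSSIAN] >= thr
```

Tweets whose class probability exceeds the chosen threshold then become candidate polarized edges that are manually verified, as in the procedure described above.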
{ "question": [ "How can the classifier facilitate the annotation task for human annotators?", "What recommendations are made to improve the performance in future?", "What type of errors do the classifiers use?", "What neural classifiers are used?", "What is the hashtags does the hashtag-based baseline use?", "What languages are included in the dataset?", "What dataset is used for this study?", "What proxies for data annotation were used in previous datasets?" ], "question_id": [ "0ee20a3a343e1e251b74a804e9aa1393d17b46d6", "f0e8f045e2e33a2129e67fb32f356242db1dc280", "b6c235d5986914b380c084d9535a7b01310c0278", "e9b1e8e575809f7b80b1125305cfa76ae4f5bdfb", "1e4450e23ec81fdd59821055f998fd9db0398b16", "02ce4c288df14a90a210cb39973c6ac0fb4cec59", "60726d9792d301d5ff8e37fbb31d5104a520dea3", "e39d90b8d959697d9780eddce3a343e60543be65" ], "nlp_background": [ "five", "five", "five", "five", "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In order to get high precision predictions for unlabeled tweets, we choose the probability thresholds for predicting a pro-Russian or pro-Ukrainian tweet such that the classifier would achieve 80% precision on the test splits (recall at this precision level is 23%). Table TABREF38 shows the amount of polarized edges we can predict at this precision level. Upon manual inspection, we however find that the quality of predictions is lower than estimated. Hence, we manually re-annotate the pro-Russian and pro-Ukrainian predictions according to the official annotation guidelines used by BIBREF4. This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class)." ], "highlighted_evidence": [ "This way, we can label 77 new pro-Russian edges by looking at 415 tweets, which means that 19% of the candidates are hits. For the pro-Ukrainian class, we can label 110 new edges by looking at 611 tweets (18% hits). 
Hence even though the quality of the classifier predictions is too low to be integrated into the network analysis right away, the classifier drastically facilitates the annotation process for human annotators compared to annotating unfiltered tweets (from the original labels we infer that for unfiltered tweets, only 6% are hits for the pro-Russian class, and 11% for the pro-Ukrainian class)." ] } ], "annotation_id": [ "08c3233d207f3113b47c3fc38688f7387a759a32" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "applying reasoning BIBREF36 or irony detection methods BIBREF37" ], "yes_no": null, "free_form_answer": "", "evidence": [ "From the error analysis, we conclude that category I errors need further investigation, as here the model makes mistakes on seemingly easy instances. This might be due to the model not being able to correctly represent Twitter specific language or unknown words, such as Eukraine in example e). Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37." ], "highlighted_evidence": [ "Category II and III errors are harder to avoid and could be improved by applying reasoning BIBREF36 or irony detection methods BIBREF37." ] } ], "annotation_id": [ "0bc3d60f3442499c1502fa4f95a954031cd35c7d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "correct class can be directly inferred from the text content easily, even without background knowledge", "correct class can be inferred from the text content, given that event-specific knowledge is provided", "orrect class can be inferred from the text content if the text is interpreted correctly" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In order to integrate automatically labeled examples into a network analysis that studies the flow of polarized information in the network, we need to produce high precision predictions for the pro-Russian and the pro-Ukrainian class. Polarized tweets that are incorrectly classified as neutral will hurt an analysis much less than neutral tweets that are erroneously classified as pro-Russian or pro-Ukrainian. However, the worst type of confusion is between the pro-Russian and pro-Ukrainian class. In order to gain insights into why these confusions happen, we manually inspect incorrectly predicted examples that are confused between the pro-Russian and pro-Ukrainian class. We analyse the misclassifications in the development set of all 10 runs, which results in 73 False Positives of pro-Ukrainian tweets being classified as pro-Russian (referred to as pro-Russian False Positives), and 88 False Positives of pro-Russian tweets being classified as pro-Ukrainian (referred to as pro-Ukrainian False Positives). 
We can identify three main cases for which the model produces an error:", "the correct class can be directly inferred from the text content easily, even without background knowledge", "the correct class can be inferred from the text content, given that event-specific knowledge is provided", "the correct class can be inferred from the text content if the text is interpreted correctly" ], "highlighted_evidence": [ "We can identify three main cases for which the model produces an error:\n\nthe correct class can be directly inferred from the text content easily, even without background knowledge\n\nthe correct class can be inferred from the text content, given that event-specific knowledge is provided\n\nthe correct class can be inferred from the text content if the text is interpreted correctly" ] } ], "annotation_id": [ "849b7e8572efdbf22250c9cb7f8cd1d1fe1f6c38" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " convolutional neural network (CNN) BIBREF29" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27. The model performs 1d convolutions over a sequence of word embeddings. We use the same pre-trained fasttext embeddings as for the logistic regression model. We use a model with one convolutional layer and a relu activation function, and one max pooling layer. The number of filters is 100 and the filter size is set to 4." ], "highlighted_evidence": [ "As neural classification model, we use a convolutional neural network (CNN) BIBREF29, which has previously shown good results for tweet classification BIBREF30, BIBREF27." ] } ], "annotation_id": [ "2410966eb81c34a1bd9bc31704da8330b20a4a33" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [ "Hashtags are often used as a means to assess the content of a tweet BIBREF25, BIBREF26, BIBREF27. We identify hashtags indicative of a class in the annotated dataset using the pointwise mutual information (pmi) between a hashtag $hs$ and a class $c$, which is defined as\n\nWe then predict the class for unseen tweets as the class that has the highest pmi score for the hashtags contained in the tweet." ] } ], "annotation_id": [ "d62f5e78d994a65dd8efac9bd061f3611dee9a1a" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016.", "BIBREF4 provide annotations for a subset of the English tweets contained in the dataset. A tweet is annotated with one of three classes that indicate the framing of the tweet with respect to responsibility for the plane crash. 
A tweet can either be pro-Russian (Ukrainian authorities, NATO or EU countries are explicitly or implicitly held responsible, or the tweet states that Russia is not responsible), pro-Ukrainian (the Russian Federation or Russian separatists in Ukraine are explicitly or implicitly held responsible, or the tweet states that Ukraine is not responsible) or neutral (neither Ukraine nor Russia or any others are blamed). Example tweets for each category can be found in Table TABREF9. These examples illustrate that the framing annotations do not reflect general polarity, but polarity with respect to responsibility to the crash. For example, even though the last example in the table is in general pro-Ukrainian, as it displays the separatists in a bad light, the tweet does not focus on responsibility for the crash. Hence the it is labeled as neutral. Table TABREF8 shows the label distribution of the annotated portion of the data as well as the total amount of original tweets, and original tweets plus their retweets/duplicates in the network. A retweet is a repost of another user's original tweet, indicated by a specific syntax (RT @username: ). We consider as duplicate a tweet with text that is identical to an original tweet after preprocessing (see Section SECREF18). For our classification experiments, we exclusively consider original tweets, but model predictions can then be propagated to retweets and duplicates." ], "highlighted_evidence": [ "For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016.\n\nBIBREF4 provide annotations for a subset of the English tweets contained in the dataset." ] } ], "annotation_id": [ "6bf5d56664cc96b4943e14956ea74e08cbb704ad" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "MH17 Twitter dataset" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter. It contains tweets collected based on keyword search that were posted between July 17, 2014 (the day of the plane crash) and December 9, 2016." ], "highlighted_evidence": [ "For our classification experiments, we use the MH17 Twitter dataset introduced by BIBREF4, a dataset collected in order to study the flow of (dis)information about the MH17 plane crash on Twitter." ] } ], "annotation_id": [ "56b6e16294abec904432b761c879ee1d5e501287" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet", "Natural Language Processing (NLP) models can be used to automatically label text content" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Several studies analyse the framing of the crash and the spread of (dis)information about the event in terms of pro-Russian or pro-Ukrainian framing. 
These studies analyse information based on manually labeled content, such as television transcripts BIBREF2 or tweets BIBREF4, BIBREF5. Restricting the analysis to manually labeled content ensures a high quality of annotations, but prohibits analysis from being extended to the full amount of available data. Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8. Often, this approach treats content from uncredible sources as misleading (e.g. misinformation, disinformation or fake news). This methods enables researchers to scale up the number of observations without having to evaluate the fact value of each piece of content from low-quality sources. However, the approach fails to address an important issue: Not all content from uncredible sources is necessarily misleading or false and not all content from credible sources is true. As often emphasized in the propaganda literature, established media outlets too are vulnerable to state-driven disinformation campaigns, even if they are regarded as credible sources BIBREF9, BIBREF10, BIBREF11.", "In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content. For example, several works developed classifiers for annotating text content with frame labels that can subsequently be used for large-scale content analysis BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. Similarly, automatically labeling attitudes expressed in text BIBREF20, BIBREF21, BIBREF22, BIBREF23 can aid the analysis of disinformation and misinformation spread BIBREF24. In this work, we examine to which extent such classifiers can be used to detect pro-Russian framing related to the MH17 crash, and to which extent classifier predictions can be relied on for analysing information flow on Twitter." ], "highlighted_evidence": [ "Another widely used method for classifying misleading content is to use distant annotations, for example to classify a tweet based on the domain of a URL that is shared by the tweet, or a hashtag that is contained in the tweet BIBREF6, BIBREF7, BIBREF8.", "In order to scale annotations that go beyond metadata to larger datasets, Natural Language Processing (NLP) models can be used to automatically label text content." ] } ], "annotation_id": [ "a79efbf296eccd7655be081b40be4c833baac233" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Label distribution and dataset sizes. Tweets are considered original if their preprocessed text is unique. All tweets comprise original tweets, retweets and duplicates.", "Table 2: Example tweets for each of the three classes.", "Table 3: Classification results on the English MH17 dataset measured as F1 and area under the precision-recall curve (AUC).", "Figure 1: Confusion matrices for the CNN (left) and the logistic regression model (right). The y-axis shows the true label while the x-axis shows the model prediction.", "Table 4: Examples for the different error categories. Error category I are cases where the correct class can easily be inferred from the text. For error category II, the correct class can be inferred from the text with event-specific knowledge. For error category III, it is necessary to resolve humour/satire in order to infer the intended meaning that the speaker wants to communicate.", "Figure 2: The left plot shows the original k10 retweet network as computed by Golovchenko et al. (2018) together with the new edges that were added after manually re-annotating the classifier predictions. The right plot only visualizes the new edges that we could add by filtering the classifier predictions. Pro-Russian edges are colored in red, pro-Ukrainian edges are colored in dark blue and neutral edges are colored in grey. Both plots were made using The Force Atlas 2 layout in gephi (Bastian et al., 2009).", "Table 5: Number of labeled edges in the k10 network before and after augmentation with predicted labels. Candidates are previously unlabeled edges for which the model makes a confident prediction. The total number of edges in the network is 24,602." ], "file": [ "4-Table1-1.png", "5-Table2-1.png", "6-Table3-1.png", "7-Figure1-1.png", "8-Table4-1.png", "9-Figure2-1.png", "9-Table5-1.png" ] }
1901.04899
Conversational Intent Understanding for Passengers in Autonomous Vehicles
Understanding passenger intents and extracting relevant slots are important building blocks towards developing a contextual dialogue system responsible for handling certain vehicle-passenger interactions in autonomous vehicles (AV). When the passengers give instructions to AMIE (Automated-vehicle Multimodal In-cabin Experience), the agent should parse such commands properly and trigger the appropriate functionality of the AV system. In our AMIE scenarios, we describe usages and support various natural commands for interacting with the vehicle. We collected a multimodal in-cabin data-set with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme. We explored various recent Recurrent Neural Networks (RNN) based techniques and built our own hierarchical models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results achieved F1-score of 0.91 on utterance-level intent recognition and 0.96 on slot extraction models.
{ "section_name": [ "Introduction", "Methodology", "Experimental Results", "Conclusion" ], "paragraphs": [ [ "Understanding passenger intents and extracting relevant slots are important building blocks towards developing a contextual dialogue system responsible for handling certain vehicle-passenger interactions in autonomous vehicles (AV). When the passengers give instructions to AMIE (Automated-vehicle Multimodal In-cabin Experience), the agent should parse such commands properly and trigger the appropriate functionality of the AV system. In our AMIE scenarios, we describe usages and support various natural commands for interacting with the vehicle. We collected a multimodal in-cabin data-set with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme. We explored various recent Recurrent Neural Networks (RNN) based techniques and built our own hierarchical models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results achieved F1-score of 0.91 on utterance-level intent recognition and 0.96 on slot extraction models." ], [ "Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators.", "For slot filling and intent keywords extraction tasks, we experimented with seq2seq LSTMs and GRUs, and also Bidirectional LSTM/GRUs. The passenger utterance is fed into a Bi-LSTM network via an embedding layer as a sequence of words, which are transformed into word vectors. We also experimented with GloVe, word2vec, and fastText as pre-trained word embeddings. To prevent overfitting, a dropout layer is used for regularization. Best performing results are obtained with Bi-LSTMs and GloVe embeddings (6B tokens, 400K vocab size, dim 100).", "For utterance-level intent detection, we experimented with mainly 5 models: (1) Hybrid: RNN + Rule-based, (2) Separate: Seq2one Bi-LSTM + Attention, (3) Joint: Seq2seq Bi-LSTM for slots/intent keywords & utterance-level intents, (4) Hierarchical + Separate, (5) Hierarchical + Joint. For (1), we extract intent keywords/slots (Bi-LSTM) and map them into utterance-level intent types (rule-based via term frequencies for each intent). For (2), we feed the whole utterance as input sequence and intent-type as single target. For (3), we experiment with the joint learning models BIBREF0 , BIBREF1 , BIBREF2 where we jointly train word-level intent keywords/slots and utterance-level intents (adding <BOU>/<EOU> terms to the start/end of utterances with intent types). 
For (4) and (5), we experiment with the hierarchical models BIBREF3 , BIBREF4 , BIBREF5 where we extract intent keywords/slots first, and then only feed the predicted keywords/slots as a sequence into (2) and (3), respectively." ], [ "The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer." ], [ "After exploring various recent Recurrent Neural Networks (RNN) based techniques, we built our own hierarchical joint models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results outperformed certain competitive baselines and achieved overall F1-scores of 0.91 for utterance-level intent recognition and 0.96 for slot extraction tasks." ] ] }
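A minimal sketch of such a word-level Bi-LSTM tagger is given below, assuming tf.keras as the framework; the LSTM size, dropout rate and all names are illustrative assumptions, while the GloVe-initialised embedding layer, the dropout regularisation and the per-token slot/intent-keyword outputs follow the description above.

```python
# Illustrative sketch only (assumed framework, sizes and names).
from tensorflow.keras import layers, models, initializers

def build_bilstm_tagger(vocab_size, glove_matrix, num_slot_labels,
                        lstm_units=128, dropout_rate=0.5):
    """Seq2seq Bi-LSTM tagger: GloVe-initialised embeddings -> dropout -> Bi-LSTM
    -> one slot / intent-keyword label per input token."""
    model = models.Sequential([
        layers.Embedding(vocab_size, glove_matrix.shape[1], mask_zero=True,
                         embeddings_initializer=initializers.Constant(glove_matrix)),
        layers.Dropout(dropout_rate),  # regularisation against overfitting, as in the text
        layers.Bidirectional(layers.LSTM(lstm_units, return_sequences=True)),
        layers.TimeDistributed(layers.Dense(num_slot_labels, activation="softmax")),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model
```

In the hierarchical variants (4) and (5), the label sequence predicted by such a tagger, rather than the raw word sequence, would then be fed into the utterance-level model (the seq2one attention model or the joint seq2seq model, respectively).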
{ "question": [ "What are the supported natural commands?", "What is the size of their collected dataset?", "Did they compare against other systems?", "What intents does the paper explore?" ], "question_id": [ "c6e63e3b807474e29bfe32542321d015009e7148", "4ef2fd79d598accc54c084f0cca8ad7c1b3f892a", "40e3639b79e2051bf6bce300d06548e7793daee0", "8383e52b2adbbfb533fbe8179bc8dae11b3ed6da" ], "nlp_background": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "search_query": [ "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Set/Change Destination", "Set/Change Route", "Go Faster", "Go Slower", "Stop", "Park", "Pull Over", "Drop Off", "Open Door", "Other " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators." ], "highlighted_evidence": [ "Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators." ] } ], "annotation_id": [ "ca8e0b7c0f1b3216656508fc0b7b097f3d0235b9" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "3347 unique utterances ", "evidence": [ "Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 
10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators." ], "highlighted_evidence": [ "We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators." ] } ], "annotation_id": [ "562d57fd3a19570effda503acce6ef14104b0bb5" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer.", "FLOAT SELECTED: Table 3: Utterance-level Intent Recognition Results (10-fold CV)" ], "highlighted_evidence": [ "The slot extraction and intent keywords extraction results are given in Table TABREF1 and Table TABREF2 , respectively. Table TABREF3 summarizes the results of various approaches we investigated for utterance-level intent understanding. Table TABREF4 shows the intent-wise detection results for our AMIE scenarios with the best performing utterance-level intent recognizer.", "FLOAT SELECTED: Table 3: Utterance-level Intent Recognition Results (10-fold CV)" ] } ], "annotation_id": [ "08cc6df0130add74d12eaccb3f1199ec873259eb" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Set/Change Destination", "Set/Change Route", "Go Faster", "Go Slower", "Stop", "Park", "Pull Over", "Drop Off", "Open Door", "Other " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. 
We obtained 1260 unique utterances having commands to AMIE from our in-cabin data-set. We expanded this data-set via Amazon Mechanical Turk and ended up with 3347 utterances having intents. The annotations for intents and slots are obtained on the transcribed utterances by majority voting of 3 annotators." ], "highlighted_evidence": [ "Our AV in-cabin data-set includes 30 hours of multimodal data collected from 30 passengers (15 female, 15 male) in 20 rides/sessions. 10 types of passenger intents are identified and annotated as: Set/Change Destination, Set/Change Route (including turn-by-turn instructions), Go Faster, Go Slower, Stop, Park, Pull Over, Drop Off, Open Door, and Other (turn music/radio on/off, open/close window/trunk, change AC/temp, show map, etc.). Relevant slots are identified and annotated as: Location, Position/Direction, Object, Time-Guidance, Person, Gesture/Gaze (this, that, over there, etc.), and None. In addition to utterance-level intent types and their slots, word-level intent keywords are annotated as Intent as well. " ] } ], "annotation_id": [ "63a0529a4906af245494bc3e0c499cd869c4e775" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Table 1: Slot Extraction Results (10-fold CV)", "Table 3: Utterance-level Intent Recognition Results (10-fold CV)", "Table 2: Intent Keyword Extraction Results (10-fold CV)", "Table 4: Intent-wise Performance Results of Utterance-level Intent Recognition Models: Hierarchical & Joint (10-fold CV)" ], "file": [ "2-Table1-1.png", "2-Table3-1.png", "2-Table2-1.png", "3-Table4-1.png" ] }
1606.05320
Increasing the Interpretability of Recurrent Neural Networks Using Hidden Markov Models
As deep neural networks continue to revolutionize various application domains, there is increasing interest in making these powerful models more understandable and interpretable, and narrowing down the causes of good and bad predictions. We focus on recurrent neural networks (RNNs), state of the art models in speech recognition and translation. Our approach to increasing interpretability is by combining an RNN with a hidden Markov model (HMM), a simpler and more transparent model. We explore various combinations of RNNs and HMMs: an HMM trained on LSTM states; a hybrid model where an HMM is trained first, then a small LSTM is given HMM state distributions and trained to fill in gaps in the HMM's performance; and a jointly trained hybrid model. We find that the LSTM and HMM learn complementary information about the features in the text.
{ "section_name": [ "Introduction", "Methods", "LSTM models", "Hidden Markov models", "Hybrid models", "Experiments", "Conclusion and future work" ], "paragraphs": [ [ "Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models could be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque expert system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power.", "There are several aspects of what makes a model or algorithm understandable to humans. One aspect is model complexity or parsimony. Another aspect is the ability to trace back from a prediction or model component to particularly influential features in the data BIBREF0 BIBREF1 . This could be useful for understanding mistakes made by neural networks, which have human-level performance most of the time, but can perform very poorly on seemingly easy cases. For instance, convolutional networks can misclassify adversarial examples with very high confidence BIBREF2 , and made headlines in 2015 when the image tagging algorithm in Google Photos mislabeled African Americans as gorillas. It's reasonable to expect recurrent networks to fail in similar ways as well. It would thus be useful to have more visibility into where these sorts of errors come from, i.e. which groups of features contribute to such flawed predictions.", "Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).", "We explore several methods for building interpretable models by combining LSTMs and HMMs. The existing body of literature mostly focuses on methods that specifically train the RNN to predict HMM states BIBREF5 or posteriors BIBREF6 , referred to as hybrid or tandem methods respectively. We first investigate an approach that does not require the RNN to be modified in order to make it understandable, as the interpretation happens after the fact. Here, we model the big picture of the state changes in the LSTM, by extracting the hidden states and approximating them with a continuous emission hidden Markov model (HMM). We then take the reverse approach where the HMM state probabilities are added to the output layer of the LSTM (see Figure 1 ). 
The LSTM model can then make use of the information from the HMM, and fill in the gaps when the HMM is not performing well, resulting in an LSTM with a smaller number of hidden state dimensions that could be interpreted individually (Figures 3 , 3 )." ], [ "We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data)." ], [ "We use a character-level LSTM with 1 layer and no dropout, based on the Element-Research library. We train the LSTM for 10 epochs, starting with a learning rate of 1, where the learning rate is halved whenever $\\exp (-l_t) > \\exp (-l_{t-1}) + 1$ , where $l_t$ is the log likelihood score at epoch $t$ . The $L_2$ -norm of the parameter gradient vector is clipped at a threshold of 5." ], [ "The HMM training procedure is as follows:", "Initialization of HMM hidden states:", "(Discrete HMM) Random multinomial draw for each time step (i.i.d. across time steps).", "(Continuous HMM) K-means clusters fit on LSTM states, to speed up convergence relative to random initialization.", "At each iteration:", "Sample states using Forward Filtering Backwards Sampling algorithm (FFBS, BIBREF7 ).", "Sample transition parameters from a Multinomial-Dirichlet posterior. Let $n_{ij}$ be the number of transitions from state $i$ to state $j$ . Then the posterior distribution of the $i$ -th row of transition matrix $T$ (corresponding to transitions from state $i$ ) is: $T_i \\sim \\text{Mult}(n_{ij} | T_i) \\text{Dir}(T_i | \\alpha )$ ", "where $\\alpha $ is the Dirichlet hyperparameter.", "(Continuous HMM) Sample multivariate normal emission parameters from Normal-Inverse-Wishart posterior for state $i$ : $ \\mu _i, \\Sigma _i \\sim N(y|\\mu _i, \\Sigma _i) N(\\mu _i |0, \\Sigma _i) \\text{IW}(\\Sigma _i) $ ", "(Discrete HMM) Sample the emission parameters from a Multinomial-Dirichlet posterior.", "Evaluation:", "We evaluate the methods on how well they predict the next observation in the validation set. For the HMM models, we do a forward pass on the validation set (no backward pass unlike the full FFBS), and compute the HMM state distribution vector $p_t$ for each time step $t$ . Then we compute the predictive likelihood for the next observation as follows: $ P(y_{t+1} | p_t) =\\sum _{x_t=1}^n \\sum _{x_{t+1}=1}^n p_{tx_t} \\cdot T_{x_t, x_{t+1}} \\cdot P(y_{t+1} | x_{t+1})$ ", "where $n$ is the number of hidden states in the HMM." ], [ "Our main hybrid model is put together sequentially, as shown in Figure 1 . We first run the discrete HMM on the data, outputting the hidden state distributions obtained by the HMM's forward pass, and then add this information to the architecture in parallel with a 1-layer LSTM. The linear layer between the LSTM and the prediction layer is augmented with an extra column for each HMM state. The LSTM component of this architecture can be smaller than a standalone LSTM, since it only needs to fill in the gaps in the HMM's predictions. The HMM is written in Python, and the rest of the architecture is in Torch.", "We also build a joint hybrid model, where the LSTM and HMM are simultaneously trained in Torch. We implemented an HMM Torch module, optimized using stochastic gradient descent rather than FFBS. Similarly to the sequential hybrid model, we concatenate the LSTM outputs with the HMM state probabilities." 
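As a concrete illustration of the sequential hybrid described above, the sketch below augments a small character-level LSTM's output layer with the HMM's forward-pass state distributions by concatenating them before the final linear layer. This is a minimal PyTorch sketch under stated assumptions (the paper's own implementation is Torch plus a Python HMM; the embedding size and vocabulary here are placeholders), not the authors' code.

```python
import torch
import torch.nn as nn

class HybridHMMLSTM(nn.Module):
    """Sketch of the sequential hybrid: a small character-level LSTM whose
    final linear layer also sees the HMM state probabilities."""
    def __init__(self, vocab_size, embed_dim=16, lstm_dim=10, n_hmm_states=10):
        super().__init__()
        # embed_dim and vocab_size are illustrative placeholders; 10 LSTM
        # dimensions and 10 HMM states follow the interpreted model in the text.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, lstm_dim, batch_first=True)
        # Extra columns for the HMM state distribution, as described above.
        self.out = nn.Linear(lstm_dim + n_hmm_states, vocab_size)

    def forward(self, chars, hmm_probs):
        # chars: (batch, time) character ids
        # hmm_probs: (batch, time, n_hmm_states) forward-pass state distributions
        h, _ = self.lstm(self.embed(chars))
        joint = torch.cat([h, hmm_probs], dim=-1)
        return self.out(joint)  # logits over the next character

# Toy usage with random inputs (shapes only).
model = HybridHMMLSTM(vocab_size=50)
chars = torch.randint(0, 50, (4, 20))
hmm_probs = torch.softmax(torch.randn(4, 20, 10), dim=-1)
print(model(chars, hmm_probs).shape)  # torch.Size([4, 20, 50])
```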
], [ "We test the models on several text data sets on the character level: the Penn Tree Bank (5M characters), and two data sets used by BIBREF4 , Tiny Shakespeare (1M characters) and Linux Kernel (5M characters). We chose $k=20$ for the continuous HMM based on a PCA analysis of the LSTM states, as the first 20 components captured almost all the variance.", "Table 1 shows the predictive log likelihood of the next text character for each method. On all text data sets, the hybrid algorithm performs a bit better than the standalone LSTM with the same LSTM state dimension. This effect gets smaller as we increase the LSTM size and the HMM makes less difference to the prediction (though it can still make a difference in terms of interpretability). The hybrid algorithm with 20 HMM states does better than the one with 10 HMM states. The joint hybrid algorithm outperforms the sequential hybrid on Shakespeare data, but does worse on PTB and Linux data, which suggests that the joint hybrid is more helpful for smaller data sets. The joint hybrid is an order of magnitude slower than the sequential hybrid, as the SGD-based HMM is slower to train than the FFBS-based HMM.", "We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data." ], [ "Hybrid HMM-RNN approaches combine the interpretability of HMMs with the predictive power of RNNs. Sometimes, a small hybrid model can perform better than a standalone LSTM of the same size. We use visualizations to show how the LSTM and HMM components of the hybrid algorithm complement each other in terms of features learned in the data." ] ] }
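As a rough illustration of the clustering-based interpretation described above, the sketch below groups per-character LSTM state vectors with k-means and pairs each character with its cluster id. scikit-learn and the random stand-in hidden states are my own assumptions, and the color-coding from the figures is replaced here by simply printing the cluster labels.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_code_states(hidden_states, text, n_clusters=10, seed=0):
    """Cluster per-character LSTM state vectors with k-means and pair each
    character with its cluster id, mirroring the color-coding visualization
    described in the text (printed instead of colored)."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    labels = km.fit_predict(hidden_states)   # one cluster id per character
    return list(zip(text, labels))

# Toy example: random vectors standing in for the extracted LSTM states,
# one 20-dimensional vector per character of a short snippet.
text = "int main() { return 0; }"
hidden_states = np.random.default_rng(0).normal(size=(len(text), 20))
for ch, lab in color_code_states(hidden_states, text)[:10]:
    print(repr(ch), lab)
```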
{ "question": [ "What kind of features are used by the HMM models, and how interpretable are those?", "What kind of information do the HMMs learn that the LSTMs don't?", "Which methods do the authors use to reach the conclusion that LSTMs and HMMs learn complementary information?", "How large is the gap in performance between the HMMs and the LSTMs?" ], "question_id": [ "5f7850254b723adf891930c6faced1058b99bd57", "4d05a264b2353cff310edb480a917d686353b007", "7cdce4222cea6955b656c1a3df1129bb8119e2d0", "6ea63327ffbab2fc734dd5c2414e59d3acc56ea5" ], "nlp_background": [ "five", "five", "five", "five" ], "topic_background": [ "research", "research", "research", "research" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "interpretability", "interpretability", "interpretability", "interpretability" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "A continuous emission HMM uses the hidden states of a 2-layer LSTM as features and a discrete emission HMM uses data as features. \nThe interpretability of the model is shown in Figure 2. ", "evidence": [ "We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data).", "We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data.", "FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments." ], "highlighted_evidence": [ "We compare a hybrid HMM-LSTM approach with a continuous emission HMM (trained on the hidden states of a 2-layer LSTM), and a discrete emission HMM (trained directly on data).", "We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components.", "FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments." 
] } ], "annotation_id": [ "3be4a77ab3aaee94fae674de02f30c26a8ac92cc" ], "worker_id": [ "7803ba8358058c0f83a7d1e93e15ad3f404db5a5" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "The HMM can identify punctuation or pick up on vowels.", "evidence": [ "We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data.", "FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments." ], "highlighted_evidence": [ "We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data.", "FLOAT SELECTED: Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments." ] } ], "annotation_id": [ "74af4b76c56784369d825b16869ad676ce461b5a" ], "worker_id": [ "7803ba8358058c0f83a7d1e93e15ad3f404db5a5" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "decision trees to predict individual hidden state dimensions", "apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).", "We interpret the HMM and LSTM states in the hybrid algorithm with 10 LSTM state dimensions and 10 HMM states in Figures 3 and 3 , showing which features are identified by the HMM and LSTM components. In Figures 3 and 3 , we color-code the training data with the 10 HMM states. 
In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters. The HMM and LSTM states pick up on spaces, indentation, and special characters in the data (such as comment symbols in Linux data). We see some examples where the HMM and LSTM complement each other, such as learning different things about spaces and comments on Linux data, or punctuation on the Shakespeare data. In Figure 2 , we see that some individual LSTM hidden state dimensions identify similar features, such as comment symbols in the Linux data." ], "highlighted_evidence": [ "Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ).", "In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters." ] } ], "annotation_id": [ "ec6cd705d22766e1274c29c47bbc0130b8ebe6e4" ], "worker_id": [ "b06f6ec0482033adb20e36a1fa5db6e23787c281" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "With similar number of parameters, the log likelihood is about 0.1 lower for LSTMs across datasets. When the number of parameters in LSTMs is increased, their log likelihood is up to 0.7 lower.", "evidence": [ "FLOAT SELECTED: Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance." ] } ], "annotation_id": [ "08dd9aab02deed98405f4acd28f2cd1bb2f50927" ], "worker_id": [ "b06f6ec0482033adb20e36a1fa5db6e23787c281" ] } ] }
{ "caption": [ "Figure 1: Hybrid HMM-LSTM algorithms (the dashed blocks indicate the components trained using SGD in Torch).", "Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance.", "Figure 2: Visualizing HMM and LSTM states on Linux data for the hybrid with 10 LSTM state dimensions and 10 HMM states. The HMM and LSTM components learn some complementary features in the text related to spaces and comments.", "Figure 3: Decision tree predicting an individual hidden state dimension of the hybrid algorithm based on the preceding characters on the Linux data. Nodes with uninformative splits are represented with . . . ." ], "file": [ "2-Figure1-1.png", "3-Table1-1.png", "4-Figure2-1.png", "4-Figure3-1.png" ] }
1809.10644
Predictive Embeddings for Hate Speech Detection on Twitter
We present a neural-network based approach to classifying online hate speech in general, as well as racist and sexist speech in particular. Using pre-trained word embeddings and max/mean pooling from simple, fully-connected transformations of these embeddings, we are able to predict the occurrence of hate speech on three commonly used publicly available datasets. Our models match or outperform state of the art F1 performance on all three datasets using significantly fewer parameters and minimal feature preprocessing compared to previous methods.
{ "section_name": [ "Introduction", "Related Work", "Data", "Transformed Word Embedding Model (TWEM)", "Word Embeddings", "Pooling", "Output", "Experimental Setup", "Results and Discussion", "Error Analysis", "Conclusion", "Supplemental Material", "Preprocessing", "Embedding Analysis" ], "paragraphs": [ [ "The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of services on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems.", "Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.”", "In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features:", "In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study." ], [ "Many efforts have been made to classify hate speech using data scraped from online message forums and popular social media sites such as Twitter and Facebook. BIBREF3 applied a logistic regression model that used one- to four-character n-grams for classification of tweets labeled as racist, sexist or neither. BIBREF4 experimented in classification of hateful as well as offensive but not hateful tweets. They applied a logistic regression classifier with L2 regularization using word level n-grams and various part-of-speech, sentiment, and tweet-level metadata features.", "Additional projects have built upon the data sets created by Waseem and/or Davidson. For example, BIBREF6 used a neural network approach with two binary classifiers: one to predict the presence abusive speech more generally, and another to discern the form of abusive speech.", " BIBREF7 , meanwhile, used pre-trained word2vec embeddings, which were then fed into a convolutional neural network (CNN) with max pooling to produce input vectors for a Gated Recurrent Unit (GRU) neural network. Other researchers have experimented with using metadata features from tweets. BIBREF8 built a classifier composed of two separate neural networks, one for the text and the other for metadata of the Twitter user, that were trained jointly in interleaved fashion. 
Both networks used in combination - and especially when trained using transfer learning - achieved higher F1 scores than either neural network classifier alone.", "In contrast to the methods described above, our approach relies on a simple word embedding (SWEM)-based architecture BIBREF5 , reducing the number of required parameters and length of training required, while still yielding improved performance and resilience across related classification tasks. Moreover, our network is able to learn flexible vector representations that demonstrate associations among words typically used in hateful communication. Finally, while metadata-based augmentation is intriguing, here we sought to develop an approach that would function well even in cases where such additional data was missing due to the deletion, suspension, or deactivation of accounts." ], [ "In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets.", "Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ." ], [ "Our training set consists of INLINEFORM0 examples INLINEFORM1 where the input INLINEFORM2 is a sequence of tokens INLINEFORM3 , and the output INLINEFORM4 is the numerical class for the hate speech class. Each input instance represents a Twitter post and thus, is not limited to a single sentence.", "We modify the SWEM-concat BIBREF5 architecture to allow better handling of infrequent and unknown words and to capture non-linear word combinations." ], [ "Each token in the input is mapped to an embedding. We used the 300 dimensional embeddings for all our experiments, so each word INLINEFORM0 is mapped to INLINEFORM1 . We denote the full embedded sequence as INLINEFORM2 . We then transform each word embedding by applying 300 dimensional 1-layer Multi Layer Perceptron (MLP) INLINEFORM3 with a Rectified Liner Unit (ReLU) activation to form an updated embedding space INLINEFORM4 . We find this better handles unseen or rare tokens in our training data by projecting the pretrained embedding into a space that the encoder can understand." ], [ "We make use of two pooling methods on the updated embedding space INLINEFORM0 . We employ a max pooling operation on INLINEFORM1 to capture salient word features from our input; this representation is denoted as INLINEFORM2 . This forces words that are highly indicative of hate speech to higher positive values within the updated embedding space. We also average the embeddings INLINEFORM3 to capture the overall meaning of the sentence, denoted as INLINEFORM4 , which provides a strong conditional factor in conjunction with the max pooling output. This also helps regularize gradient updates from the max pooling operation." 
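A minimal sketch of the transformed-embedding encoder as described in this and the preceding subsection: a per-token one-layer MLP with ReLU over pretrained 300-dimensional embeddings, followed by element-wise max pooling and mean pooling, concatenated into a single document vector (the classifier head described in the next subsection is omitted). PyTorch and the padding-mask handling are my own assumptions; this is a sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwemEncoder(nn.Module):
    """Sketch of the transformed word embedding encoder: a per-token 1-layer
    MLP with ReLU over pretrained embeddings, then max and mean pooling."""
    def __init__(self, pretrained, dim=300):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=False)  # fine-tuned
        self.transform = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, tokens, mask):
        # tokens: (batch, seq_len) ids; mask: (batch, seq_len), 1 for real tokens.
        # Masking of padded positions is an added detail not specified in the text.
        u = self.transform(self.embed(tokens))                   # (batch, seq_len, dim)
        neg_inf = torch.finfo(u.dtype).min
        z_max = u.masked_fill(mask.unsqueeze(-1) == 0, neg_inf).max(dim=1).values
        z_avg = (u * mask.unsqueeze(-1)).sum(dim=1) / mask.sum(dim=1, keepdim=True)
        return torch.cat([z_max, z_avg], dim=-1)                 # (batch, 2*dim)

# Toy usage with a random "pretrained" embedding table.
pretrained = torch.randn(1000, 300)
enc = TwemEncoder(pretrained)
tokens = torch.randint(0, 1000, (2, 50))
mask = torch.ones(2, 50)
print(enc(tokens, mask).shape)  # torch.Size([2, 600])
```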
], [ "We concatenate INLINEFORM0 and INLINEFORM1 to form a document representation INLINEFORM2 and feed the representation into a 50 node 2 layer MLP followed by ReLU Activation to allow for increased nonlinear representation learning. This representation forms the preterminal layer and is passed to a fully connected softmax layer whose output is the probability distribution over labels." ], [ "We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search.", "All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets.", "SR: Sexist/Racist BIBREF3 , HATE: Hate BIBREF4 HAR: Harassment BIBREF9 " ], [ "The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1. ", "Using the Approximate Randomization (AR) Test BIBREF14 , we perform significance testing using a 75/25 train and test split", "to compare against BIBREF3 and BIBREF4 , whose models we re-implemented. We found 0.001 significance compared to both methods. We also include in-depth precision and recall results for all three datasets in the supplement.", "Our results indicate better performance than several more complex approaches, including BIBREF4 's best model (which used word and part-of-speech ngrams, sentiment, readability, text, and Twitter specific features), BIBREF6 (which used two fold classification and a hybrid of word and character CNNs, using approximately twice the parameters we use excluding the word embeddings) and even recent work by BIBREF8 , (whose best model relies on GRUs, metadata including popularity, network reciprocity, and subscribed lists).", "On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters." ], [ "False negatives", "Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. 
While this may be a limitation of considering only the content of the tweet, it could also be a mislabel.", "Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two.", "Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech:", "@LoveAndLonging ...how is that example \"sexism\"?", "@amberhasalamb ...in what way?", "Another case our classifier misses is problematic speech within a hashtag:", ":D @nkrause11 Dudes who go to culinary school: #why #findawife #notsexist :)", "This limitation could be potentially improved through the use of character convolutions or subword tokenization.", "False Positives", "In certain cases, our model seems to be learning user names instead of semantic content:", "RT @GrantLeeStone: @MT8_9 I don't even know what that is, or where it's from. Was that supposed to be funny? It wasn't.", "Since the bulk of our model's weights are in the embedding and embedding-transformation matrices, we cluster the SR vocabulary using these transformed embeddings to clarify our intuitions about the model ( TABREF14 ). We elaborate on our clustering approach in the supplement. We see that the model learned general semantic groupings of words associated with hate speech as well as specific idiosyncrasies related to the dataset itself (e.g. katieandnikki)" ], [ "Despite minimal tuning of hyper-parameters, fewer weight parameters, minimal text preprocessing, and no additional metadata, the model performs remarkably well on standard hate speech datasets. Our clustering analysis adds interpretability enabling inspection of results.", "Our results indicate that the majority of recent deep learning models in hate speech may rely on word embeddings for the bulk of predictive power and the addition of sequence-based parameters provide minimal utility. Sequence based approaches are typically important when phenomena such as negation, co-reference, and context-dependent phrases are salient in the text and thus, we suspect these cases are in the minority for publicly available datasets. We think it would be valuable to study the occurrence of such linguistic phenomena in existing datasets and construct new datasets that have a better representation of subtle forms of hate speech. In the future, we plan to investigate character based representations, using character CNNs and highway layers BIBREF15 along with word embeddings to allow robust representations for sparse words such as hashtags." ], [ "We experimented with several different preprocessing variants and were surprised to find that reducing preprocessing improved the performance on the task for all of our tasks. We go through each preprocessing variant with an example and then describe our analysis to compare and evaluate each of them." ], [ "Original text", "RT @AGuyNamed_Nick Now, I'm not sexist in any way shape or form but I think women are better at gift wrapping. It's the XX chromosome thing", "Tokenize (Basic Tokenize: Keeps case and words intact with limited sanitizing)", "RT @AGuyNamed_Nick Now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the XX chromosome thing", "Tokenize Lowercase: Lowercase the basic tokenize scheme", "rt @aguynamed_nick now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . 
it 's the xx chromosome thing", "Token Replace: Replaces entities and user names with placeholder)", "ENT USER now , I 'm not sexist in any way shape or form but I think women are better at gift wrapping . It 's the xx chromosome thing", "Token Replace Lowercase: Lowercase the Token Replace Scheme", "ENT USER now , i 'm not sexist in any way shape or form but i think women are better at gift wrapping . it 's the xx chromosome thing", "We did analysis on a validation set across multiple datasets to find that the \"Tokenize\" scheme was by far the best. We believe that keeping the case in tact provides useful information about the user. For example, saying something in all CAPS is a useful signal that the model can take advantage of." ], [ "Since our method was a simple word embedding based model, we explored the learned embedding space to analyze results. For this analysis, we only use the max pooling part of our architecture to help analyze the learned embedding space because it encourages salient words to increase their values to be selected. We projected the original pre-trained embeddings to the learned space using the time distributed MLP. We summed the embedding dimensions for each word and sorted by the sum in descending order to find the 1000 most salient word embeddings from our vocabulary. We then ran PCA BIBREF16 to reduce the dimensionality of the projected embeddings from 300 dimensions to 75 dimensions. This captured about 60% of the variance. Finally, we ran K means clustering for INLINEFORM0 clusters to organize the most salient embeddings in the projected space.", "The learned clusters from the SR vocabulary were very illuminating (see Table TABREF14 ); they gave insights to how hate speech surfaced in the datasets. One clear grouping we found is the misogynistic and pornographic group, which contained words like breasts, blonds, and skank. Two other clusters had references to geopolitical and religious issues in the Middle East and disparaging and resentful epithets that could be seen as having an intellectual tone. This hints towards the subtle pedagogic forms of hate speech that surface. We ran silhouette analysis BIBREF17 on the learned clusters to find that the clusters from the learned representations had a 35% higher silhouette coefficient using the projected embeddings compared to the clusters created from the original pre-trained embeddings. This reinforces the claim that our training process pushed hate-speech related words together, and words from other clusters further away, thus, structuring the embedding space effectively for detecting hate speech." ] ] }
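For readers who want to reproduce the flavor of this embedding analysis, here is a hedged scikit-learn sketch of the pipeline described above: sum the dimensions of the MLP-projected embeddings to rank saliency, keep the top 1000, reduce them to 75 PCA components, and cluster with k-means, scoring the result with a silhouette coefficient. The cluster count k is a placeholder because the exact value is elided in this extraction, and the random stand-in data and library choice are my own assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_salient_embeddings(projected, vocab, n_top=1000, n_components=75, k=20, seed=0):
    """Pick the most salient projected embeddings (largest dimension sums),
    reduce with PCA, cluster with k-means, and report a silhouette score.
    `projected` stands in for the vocabulary passed through the learned MLP
    transform; `k` is a placeholder cluster count."""
    saliency = projected.sum(axis=1)
    top = np.argsort(-saliency)[:n_top]
    reduced = PCA(n_components=n_components, random_state=seed).fit_transform(projected[top])
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(reduced)
    score = silhouette_score(reduced, labels)
    clusters = {c: [vocab[i] for i in top[labels == c]] for c in range(k)}
    return clusters, score

# Toy usage with random data standing in for the learned embedding space.
rng = np.random.default_rng(0)
projected = rng.normal(size=(5000, 300))
vocab = [f"word{i}" for i in range(5000)]
clusters, score = cluster_salient_embeddings(projected, vocab)
print(len(clusters), round(score, 3))
```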
{ "question": [ "Do they report results only on English data?", "Which publicly available datasets are used?", "What embedding algorithm and dimension size are used?", "What data are the embeddings trained on?", "how much was the parameter difference between their model and previous methods?", "how many parameters did their model use?", "which datasets were used?", "what was their system's f1 performance?", "what was the baseline?" ], "question_id": [ "50690b72dc61748e0159739a9a0243814d37f360", "8266642303fbc6a1138b4e23ee1d859a6f584fbb", "3685bf2409b23c47bfd681989fb4a763bcab6be2", "19225e460fff2ac3aebc7fe31fcb4648eda813fb", "f37026f518ab56c859f6b80b646d7f19a7b684fa", "1231934db6adda87c1b15e571468b8e9d225d6fe", "81303f605da57ddd836b7c121490b0ebb47c60e7", "a3f108f60143d13fe38d911b1cc3b17bdffde3bd", "118ff1d7000ea0d12289d46430154cc15601fd8e" ], "nlp_background": [ "five", "five", "five", "five", "", "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "", "", "", "", "" ], "paper_read": [ "no", "no", "no", "no", "", "", "", "", "" ], "search_query": [ "twitter", "twitter", "twitter", "twitter", "", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets.", "Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 .", "Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. Such examples may be innocuous in isolation but could potentially be sexist or racist in context. While this may be a limitation of considering only the content of the tweet, it could also be a mislabel.", "Debra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two.", "Along these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech:", "@LoveAndLonging ...how is that example \"sexism\"?", "@amberhasalamb ...in what way?" 
], "highlighted_evidence": [ "In this paper, we use three data sets from the literature to train and evaluate our own classifier.", "Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 .", "Many of the false negatives we see are specific references to characters in the TV show “My Kitchen Rules”, rather than something about women in general. ", "While this may be a limitation of considering only the content of the tweet, it could also be a mislabel.\n\nDebra are now my most hated team on #mkr after least night's ep. Snakes in the grass those two.\n\nAlong these lines, we also see correct predictions of innocuous speech, but find data mislabeled as hate speech:\n\n@LoveAndLonging ...how is that example \"sexism\"?\n\n@amberhasalamb ...in what way?" ] } ], "annotation_id": [ "7acdce6a3960c4cb8094d6e4544c30573fbd7f65" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BIBREF3", "BIBREF4", "BIBREF9" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this paper, we use three data sets from the literature to train and evaluate our own classifier. Although all address the category of hateful speech, they used different strategies of labeling the collected data. Table TABREF5 shows the characteristics of the datasets.", "Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ." ], "highlighted_evidence": [ "In this paper, we use three data sets from the literature to train and evaluate our own classifier.", "Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ." 
] } ], "annotation_id": [ "80c406b3f6db9d8fc52494f64623dece1a1fb5a9" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "300 Dimensional Glove" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search." ], "highlighted_evidence": [ "We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task" ] } ], "annotation_id": [ "a304633262bac6ad36eebafd497fad08ae92472f" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Common Crawl " ], "yes_no": null, "free_form_answer": "", "evidence": [ "We tokenize the data using Spacy BIBREF10 . We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task. We experimented extensively with pre-processing variants and our results showed better performance without lemmatization and lower-casing (see supplement for details). We pad each input to 50 words. We train using RMSprop with a learning rate of .001 and a batch size of 512. We add dropout with a drop rate of 0.1 in the final layer to reduce overfitting BIBREF12 , batch size, and input length empirically through random hyperparameter search." ], "highlighted_evidence": [ "We use 300 Dimensional Glove Common Crawl Embeddings (840B Token) BIBREF11 and fine tune them for the task." ] } ], "annotation_id": [ "ef801e4f9403ce2032a60c72ab309d59ae99815b" ], "worker_id": [ "5053f146237e8fc8859ed3984b5d3f02f39266b7" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "our model requires 100k parameters , while BIBREF8 requires 250k parameters" ], "yes_no": null, "free_form_answer": "", "evidence": [ "On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters." ], "highlighted_evidence": [ "Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters." ] } ], "annotation_id": [ "629050e165fd7bce52139caf1d57c8bb2af6f6b1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Excluding the embedding weights, our model requires 100k parameters" ], "yes_no": null, "free_form_answer": "", "evidence": [ "On the SR dataset, we outperform BIBREF8 's text based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. 
While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters." ], "highlighted_evidence": [ "Excluding the embedding weights, our model requires 100k parameters , while BIBREF8 requires 250k parameters." ] } ], "annotation_id": [ "090362d69eea1dc52f6e26ca692dc5a45aab9ea2" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Sexist/Racist (SR) data set", "HATE dataset", "HAR" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets. Tweets were labeled as “Harrassing” or “Non-Harrassing”; hate speech was not explicitly labeled, but treated as an unlabeled subset of the broader “Harrassing” category BIBREF9 ." ], "highlighted_evidence": [ "Data collected by BIBREF3 , which we term the Sexist/Racist (SR) data set, was collected using an initial Twitter search followed by analysis and filtering by the authors and their team who identified 17 common phrases, hashtags, and users that were indicative of abusive speech. BIBREF4 collected the HATE dataset by searching for tweets using a lexicon provided by Hatebase.org. The final data set we used, which we call HAR, was collected by BIBREF9 ; we removed all retweets reducing the dataset to 20,000 tweets." ] } ], "annotation_id": [ "a7994610e5a9941b8fc4c4bff59ba0efbd157426" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Proposed model achieves 0.86, 0.924, 0.71 F1 score on SR, HATE, HAR datasets respectively.", "evidence": [ "FLOAT SELECTED: Table 2: F1 Results3", "The approach we have developed establishes a new state of the art for classifying hate speech, outperforming previous results by as much as 12 F1 points. Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: F1 Results3", "Table TABREF10 illustrates the robustness of our method, which often outperform previous results, measured by weighted F1." ] } ], "annotation_id": [ "d6c36ac05ab606c6508299255adf1a37eb474542" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "logistic regression" ], "yes_no": null, "free_form_answer": "", "evidence": [ "All of our results are produced from 10-fold cross validation to allow comparison with previous results. We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. 
For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets." ], "highlighted_evidence": [ "We trained a logistic regression baseline model (line 1 in Table TABREF10 ) using character ngrams and word unigrams using TF*IDF weighting BIBREF13 , to provide a baseline since HAR has no reported results. For the SR and HATE datasets, the authors reported their trained best logistic regression model's results on their respective datasets." ] } ], "annotation_id": [ "ef284b3f6c2607cb62a2fbfa6b7d0bcfb580696d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Dataset Characteristics", "Table 2: F1 Results3", "Table 3: Projected Embedding Cluster Analysis from SR Dataset", "Table 5: SR Results", "Table 7: HAR Results", "Table 6: HATE Results", "Table 8: Projected Embedding Cluster Analysis from SR Dataset" ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Table3-1.png", "6-Table5-1.png", "6-Table7-1.png", "6-Table6-1.png", "7-Table8-1.png" ] }
1606.02006
Incorporating Discrete Translation Lexicons into Neural Machine Translation
Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.
{ "section_name": [ "Introduction", "Neural Machine Translation", "Integrating Lexicons into NMT", "Converting Lexicon Probabilities into Conditioned Predictive Proabilities", "Combining Predictive Probabilities", "Constructing Lexicon Probabilities", "Automatically Learned Lexicons", "Manual Lexicons", "Hybrid Lexicons", "Experiment & Result", "Settings", "Effect of Integrating Lexicons", "Comparison of Integration Methods", "Additional Experiments", "Related Work", "Conclusion & Future Work", "Acknowledgment" ], "paragraphs": [ [ "Neural machine translation (NMT, § SECREF2 ; kalchbrenner13emnlp, sutskever14nips) is a variant of statistical machine translation (SMT; brown93cl), using neural networks. NMT has recently gained popularity due to its ability to model the translation process end-to-end using a single probabilistic model, and for its state-of-the-art performance on several language pairs BIBREF0 , BIBREF1 .", "One feature of NMT systems is that they treat each word in the vocabulary as a vector of continuous-valued numbers. This is in contrast to more traditional SMT methods such as phrase-based machine translation (PBMT; koehn03phrasebased), which represent translations as discrete pairs of word strings in the source and target languages. The use of continuous representations is a major advantage, allowing NMT to share statistical power between similar words (e.g. “dog” and “cat”) or contexts (e.g. “this is” and “that is”). However, this property also has a drawback in that NMT systems often mistranslate into words that seem natural in the context, but do not reflect the content of the source sentence. For example, Figure FIGREF2 is a sentence from our data where the NMT system mistakenly translated “Tunisia” into the word for “Norway.” This variety of error is particularly serious because the content words that are often mistranslated by NMT are also the words that play a key role in determining the whole meaning of the sentence.", "In contrast, PBMT and other traditional SMT methods tend to rarely make this kind of mistake. This is because they base their translations on discrete phrase mappings, which ensure that source words will be translated into a target word that has been observed as a translation at least once in the training data. In addition, because the discrete mappings are memorized explicitly, they can be learned efficiently from as little as a single instance (barring errors in word alignments). Thus we hypothesize that if we can incorporate a similar variety of information into NMT, this has the potential to alleviate problems with the previously mentioned fatal errors on low-frequency words.", "In this paper, we propose a simple, yet effective method to incorporate discrete, probabilistic lexicons as an additional information source in NMT (§ SECREF3 ). First we demonstrate how to transform lexical translation probabilities (§ SECREF7 ) into a predictive probability for the next word by utilizing attention vectors from attentional NMT models BIBREF2 . We then describe methods to incorporate this probability into NMT, either through linear interpolation with the NMT probabilities (§ UID10 ) or as the bias to the NMT predictive distribution (§ UID9 ). 
We construct these lexicon probabilities by using traditional word alignment methods on the training data (§ SECREF11 ), other external parallel data resources such as a handmade dictionary (§ SECREF13 ), or using a hybrid between the two (§ SECREF14 ).", "We perform experiments (§ SECREF5 ) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training." ], [ "The goal of machine translation is to translate a sequence of source words INLINEFORM0 into a sequence of target words INLINEFORM1 . These words belong to the source vocabulary INLINEFORM2 , and the target vocabulary INLINEFORM3 respectively. NMT performs this translation by calculating the conditional probability INLINEFORM4 of the INLINEFORM5 th target word INLINEFORM6 based on the source INLINEFORM7 and the preceding target words INLINEFORM8 . This is done by encoding the context INLINEFORM9 a fixed-width vector INLINEFORM10 , and calculating the probability as follows: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are respectively weight matrix and bias vector parameters.", "The exact variety of the NMT model depends on how we calculate INLINEFORM0 used as input. While there are many methods to perform this modeling, we opt to use attentional models BIBREF2 , which focus on particular words in the source sentence when calculating the probability of INLINEFORM1 . These models represent the current state of the art in NMT, and are also convenient for use in our proposed method. Specifically, we use the method of luong15emnlp, which we describe briefly here and refer readers to the original paper for details.", "First, an encoder converts the source sentence INLINEFORM0 into a matrix INLINEFORM1 where each column represents a single word in the input sentence as a continuous vector. This representation is generated using a bidirectional encoder INLINEFORM2 ", "Here the INLINEFORM0 function maps the words into a representation BIBREF3 , and INLINEFORM1 is a stacking long short term memory (LSTM) neural network BIBREF4 , BIBREF5 , BIBREF6 . Finally we concatenate the two vectors INLINEFORM2 and INLINEFORM3 into a bidirectional representation INLINEFORM4 . These vectors are further concatenated into the matrix INLINEFORM5 where the INLINEFORM6 th column corresponds to INLINEFORM7 .", "Next, we generate the output one word at a time while referencing this encoded input sentence and tracking progress with a decoder LSTM. The decoder's hidden state INLINEFORM0 is a fixed-length continuous vector representing the previous target words INLINEFORM1 , initialized as INLINEFORM2 . Based on this INLINEFORM3 , we calculate a similarity vector INLINEFORM4 , with each element equal to DISPLAYFORM0 ", " INLINEFORM0 can be an arbitrary similarity function, which we set to the dot product, following luong15emnlp. We then normalize this into an attention vector, which weights the amount of focus that we put on each word in the source sentence DISPLAYFORM0 ", "This attention vector is then used to weight the encoded representation INLINEFORM0 to create a context vector INLINEFORM1 for the current time step INLINEFORM2 ", "Finally, we create INLINEFORM0 by concatenating the previous hidden state INLINEFORM1 with the context vector, and performing an affine transform INLINEFORM2 ", "Once we have this representation of the current state, we can calculate INLINEFORM0 according to Equation ( EQREF3 ). 
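To make the computation just described concrete, the following is a minimal NumPy sketch of a single attentional decoding step: dot-product scores, the softmax attention vector, the context vector, and the output softmax of Equation (EQREF3). All function and variable names, shapes, and the tanh non-linearity in the affine step are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attentional_step(R, eta, W_s, b_s, W_p, b_p):
    """One decoding step of a dot-product attentional model (illustrative shapes).

    R        : (d, F)  encoded source sentence, one column per source word
    eta      : (d,)    current decoder hidden state
    W_s, b_s : parameters of the affine transform producing the attentional state
    W_p, b_p : output-layer parameters of Equation (EQREF3)
    """
    scores = R.T @ eta                     # (F,) dot-product similarity per source word
    a = softmax(scores)                    # attention vector over source positions
    c = R @ a                              # (d,) context vector: attention-weighted columns of R
    eta_tilde = np.tanh(W_s @ np.concatenate([eta, c]) + b_s)  # attentional hidden state (tanh assumed)
    p_nmt = softmax(W_p @ eta_tilde + b_p)                     # distribution over the target vocabulary
    return p_nmt, a
```

The attention vector returned here is the quantity that the proposed method later reuses to decide which source word's lexical probabilities to focus on.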
The next word INLINEFORM1 is chosen according to this probability, and we update the hidden state by inputting the chosen word into the decoder LSTM DISPLAYFORM0 ", "If we define all the parameters in this model as INLINEFORM0 , we can then train the model by minimizing the negative log-likelihood of the training data INLINEFORM1 " ], [ "In § SECREF2 we described how traditional NMT models calculate the probability of the next target word INLINEFORM0 . Our goal in this paper is to improve the accuracy of this probability estimate by incorporating information from discrete probabilistic lexicons. We assume that we have a lexicon that, given a source word INLINEFORM1 , assigns a probability INLINEFORM2 to target word INLINEFORM3 . For a source word INLINEFORM4 , this probability will generally be non-zero for a small number of translation candidates, and zero for the majority of words in INLINEFORM5 . In this section, we first describe how we incorporate these probabilities into NMT, and explain how we actually obtain the INLINEFORM6 probabilities in § SECREF4 ." ], [ "First, we need to convert lexical probabilities INLINEFORM0 for the individual words in the source sentence INLINEFORM1 to a form that can be used together with INLINEFORM2 . Given input sentence INLINEFORM3 , we can construct a matrix in which each column corresponds to a word in the input sentence, each row corresponds to a word in the INLINEFORM4 , and the entry corresponds to the appropriate lexical probability: INLINEFORM5 ", "This matrix can be precomputed during the encoding stage because it only requires information about the source sentence INLINEFORM0 .", "Next we convert this matrix into a predictive probability over the next word: INLINEFORM0 . To do so we use the alignment probability INLINEFORM1 from Equation ( EQREF5 ) to weight each column of the INLINEFORM2 matrix: INLINEFORM3 ", "This calculation is similar to the way how attentional models calculate the context vector INLINEFORM0 , but over a vector representing the probabilities of the target vocabulary, instead of the distributed representations of the source words. The process of involving INLINEFORM1 is important because at every time step INLINEFORM2 , the lexical probability INLINEFORM3 will be influenced by different source words." ], [ "After calculating the lexicon predictive probability INLINEFORM0 , next we need to integrate this probability with the NMT model probability INLINEFORM1 . To do so, we examine two methods: (1) adding it as a bias, and (2) linear interpolation.", "In our first bias method, we use INLINEFORM0 to bias the probability distribution calculated by the vanilla NMT model. Specifically, we add a small constant INLINEFORM1 to INLINEFORM2 , take the logarithm, and add this adjusted log probability to the input of the softmax as follows: INLINEFORM3 ", "We take the logarithm of INLINEFORM0 so that the values will still be in the probability domain after the softmax is calculated, and add the hyper-parameter INLINEFORM1 to prevent zero probabilities from becoming INLINEFORM2 after taking the log. When INLINEFORM3 is small, the model will be more heavily biased towards using the lexicon, and when INLINEFORM4 is larger the lexicon probabilities will be given less weight. We use INLINEFORM5 for this paper.", "We also attempt to incorporate the two probabilities through linear interpolation between the standard NMT probability model probability INLINEFORM0 and the lexicon probability INLINEFORM1 . 
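Before the linear method is defined below, a rough sketch of the two pieces described so far may help: weighting the precomputed lexicon matrix by the attention vector to obtain the lexicon predictive probability, and the bias method that adds its log (plus a small constant) to the input of the softmax. Names, shapes, and the default value of the constant are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lexicon_predictive_prob(L, a):
    """Attention-weighted lexicon probability for the next target word.

    L : (V, F) matrix precomputed during encoding; column j holds the lexical
        distribution p_lex(e | f_j) over the target vocabulary for source word f_j
    a : (F,)   attention vector at the current time step
    """
    return L @ a                            # (V,) mixes the per-source-word distributions

def bias_combination(nmt_scores, p_lex, eps=1e-3):
    """Bias method: add log(p_lex + eps) to the input of the NMT softmax.

    nmt_scores : (V,) pre-softmax scores from the NMT model
    eps        : small constant keeping zero lexicon entries finite; an illustrative
                 default, not necessarily the value used in the paper. A smaller eps
                 biases the model more strongly toward the lexicon.
    """
    return softmax(nmt_scores + np.log(p_lex + eps))

# The linear method defined next instead interpolates the probabilities directly:
# p = g * p_lex + (1 - g) * p_nmt, with g the sigmoid of a learnable scalar.
```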
We will call this the linear method, and define it as follows: INLINEFORM2 ", "where INLINEFORM0 is an interpolation coefficient that is the result of the sigmoid function INLINEFORM1 . INLINEFORM2 is a learnable parameter, and the sigmoid function ensures that the final interpolation level falls between 0 and 1. We choose INLINEFORM3 ( INLINEFORM4 ) at the beginning of training.", "This notation is partly inspired by allamanis16icml and gu16acl who use linear interpolation to merge a standard attentional model with a “copy” operator that copies a source word as-is into the target sentence. The main difference is that they use this to copy words into the output while our method uses it to influence the probabilities of all target words." ], [ "In the previous section, we have defined some ways to use predictive probabilities INLINEFORM0 based on word-to-word lexical probabilities INLINEFORM1 . Next, we define three ways to construct these lexical probabilities using automatically learned lexicons, handmade lexicons, or a combination of both." ], [ "In traditional SMT systems, lexical translation probabilities are generally learned directly from parallel data in an unsupervised fashion using a model such as the IBM models BIBREF7 , BIBREF8 . These models can be used to estimate the alignments and lexical translation probabilities INLINEFORM0 between the tokens of the two languages using the expectation maximization (EM) algorithm.", "First in the expectation step, the algorithm estimates the expected count INLINEFORM0 . In the maximization step, lexical probabilities are calculated by dividing the expected count by all possible counts: INLINEFORM1 ", "The IBM models vary in level of refinement, with Model 1 relying solely on these lexical probabilities, and latter IBM models (Models 2, 3, 4, 5) introducing more sophisticated models of fertility and relative alignment. Even though IBM models also occasionally have problems when dealing with the rare words (e.g. “garbage collecting” effects BIBREF9 ), traditional SMT systems generally achieve better translation accuracies of low-frequency words than NMT systems BIBREF6 , indicating that these problems are less prominent than they are in NMT.", "Note that in many cases, NMT limits the target vocabulary BIBREF10 for training speed or memory constraints, resulting in rare words not being covered by the NMT vocabulary INLINEFORM0 . Accordingly, we allocate the remaining probability assigned by the lexicon to the unknown word symbol INLINEFORM1 : DISPLAYFORM0 " ], [ "In addition, for many language pairs, broad-coverage handmade dictionaries exist, and it is desirable that we be able to use the information included in them as well. Unlike automatically learned lexicons, however, handmade dictionaries generally do not contain translation probabilities. To construct the probability INLINEFORM0 , we define the set of translations INLINEFORM1 existing in the dictionary for particular source word INLINEFORM2 , and assume a uniform distribution over these words: INLINEFORM3 ", "Following Equation ( EQREF12 ), unknown source words will assign their probability mass to the INLINEFORM0 tag." ], [ "Handmade lexicons have broad coverage of words but their probabilities might not be as accurate as the learned ones, particularly if the automatic lexicon is constructed on in-domain data. Thus, we also test a hybrid method where we use the handmade lexicons to complement the automatically learned lexicon. 
Specifically, inspired by phrase table fill-up used in PBMT systems BIBREF11 , we use the probability of the automatically learned lexicons INLINEFORM1 by default, and fall back to the handmade lexicons INLINEFORM2 only for uncovered words: DISPLAYFORM0 " ], [ "In this section, we describe experiments we use to evaluate our proposed methods." ], [ "Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT BIBREF12 and BTEC BIBREF13 . KFTT is a collection of Wikipedia article about city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are depicted in Table TABREF19 .", "We tokenize English according to the Penn Treebank standard BIBREF14 and lowercase, and tokenize Japanese using KyTea BIBREF15 . We limit training sentence length up to 50 in both experiments and keep the test data at the original length. We replace words of frequency less than a threshold INLINEFORM0 in both languages with the INLINEFORM1 symbol and exclude them from our vocabulary. We choose INLINEFORM2 for BTEC and INLINEFORM3 for KFTT, resulting in INLINEFORM4 k, INLINEFORM5 k for BTEC and INLINEFORM6 k, INLINEFORM7 k for KFTT.", "NMT Systems: We build the described models using the Chainer toolkit. The depth of the stacking LSTM is INLINEFORM0 and hidden node size INLINEFORM1 . We concatenate the forward and backward encodings (resulting in a 1600 dimension vector) and then perform a linear transformation to 800 dimensions.", "We train the system using the Adam BIBREF16 optimization method with the default settings: INLINEFORM0 . Additionally, we add dropout BIBREF17 with drop rate INLINEFORM1 at the last layer of each stacking LSTM unit to prevent overfitting. We use a batch size of INLINEFORM2 and we run a total of INLINEFORM3 iterations for all data sets. All of the experiments are conducted on a single GeForce GTX TITAN X GPU with a 12 GB memory cache.", "At test time, we use beam search with beam size INLINEFORM0 . We follow luong15acl in replacing every unknown token at position INLINEFORM1 with the target token that maximizes the probability INLINEFORM2 . We choose source word INLINEFORM3 according to the highest alignment score in Equation ( EQREF5 ). This unknown word replacement is applied to both baseline and proposed systems. Finally, because NMT models tend to give higher probabilities to shorter sentences BIBREF18 , we discount the probability of INLINEFORM4 token by INLINEFORM5 to correct for this bias.", "Traditional SMT Systems: We also prepare two traditional SMT systems for comparison: a PBMT system BIBREF19 using Moses BIBREF20 , and a hierarchical phrase-based MT system BIBREF21 using Travatar BIBREF22 , Systems are built using the default settings, with models trained on the training data, and weights tuned on the development data.", "Lexicons: We use a total of 3 lexicons for the proposed method, and apply bias and linear method for all of them, totaling 6 experiments. The first lexicon (auto) is built on the training data using the automatically learned lexicon method of § SECREF11 separately for both the BTEC and KFTT experiments. Automatic alignment is performed using GIZA++ BIBREF8 . The second lexicon (man) is built using the popular English-Japanese dictionary Eijiro with the manual lexicon method of § SECREF13 . 
Eijiro contains 104K distinct word-to-word translation entries. The third lexicon (hyb) is built by combining the first and second lexicon with the hybrid method of § SECREF14 .", "Evaluation: We use standard single reference BLEU-4 BIBREF23 to evaluate the translation performance. Additionally, we also use NIST BIBREF24 , which is a measure that puts a particular focus on low-frequency word strings, and thus is sensitive to the low-frequency words we are focusing on in this paper. We measure statistically significant differences between systems using paired bootstrap resampling BIBREF25 with 10,000 iterations and measure statistical significance at the INLINEFORM0 and INLINEFORM1 levels.", "Additionally, we also calculate the recall of rare words from the references. We define “rare words” as words that appear less than eight times in the target training corpus or references, and measure the percentage of the time they are recovered by each translation system." ], [ "In this section, we first present a detailed examination of the utility of the proposed bias method when used with the auto or hyb lexicons, which empirically gave the best results, and perform a comparison among the other lexicon integration methods in the following section. Table TABREF20 shows the results of these methods, along with the corresponding baselines.", "First, compared to the baseline attn, our bias method achieved consistently higher scores on both test sets. In particular, the gains on the more difficult KFTT set are large, up to 2.3 BLEU, 0.44 NIST, and 30% Recall, demonstrating the utility of the proposed method in the face of more diverse content and fewer high-frequency words.", "Compared to the traditional pbmt and hiero systems, particularly on KFTT we can see that the proposed method allows the NMT system to exceed the traditional SMT methods in BLEU. This is despite the fact that we are not performing ensembling, which has proven to be essential to exceed traditional systems in several previous works BIBREF6 , BIBREF0 , BIBREF1 . Interestingly, despite gains in BLEU, the NMT methods still fall behind in NIST score on the KFTT data set, demonstrating that traditional SMT systems still tend to have a small advantage in translating lower-frequency words, despite the gains made by the proposed method.", "In Table TABREF27 , we show some illustrative examples where the proposed method (auto-bias) was able to obtain a correct translation while the normal attentional model was not. The first example is a mistake in translating “extramarital affairs” into the Japanese equivalent of “soccer,” entirely changing the main topic of the sentence. This is typical of the errors that we have observed NMT systems make (the mistake from Figure FIGREF2 is also from attn, and was fixed by our proposed method). The second example demonstrates how these mistakes can then affect the process of choosing the remaining words, propagating the error through the whole sentence.", "Next, we examine the effect of the proposed method on the training time for each neural MT method, drawing training curves for the KFTT data in Figure FIGREF26 . Here we can see that the proposed bias training methods achieve reasonable BLEU scores in the upper 10s even after the first iteration. In contrast, the baseline attn method has a BLEU score of around 5 after the first iteration, and takes significantly longer to approach values close to its maximal accuracy.
This shows that by incorporating lexical probabilities, we can effectively bootstrap the learning of the NMT system, allowing it to approach an appropriate answer in a more timely fashion.", "It is also interesting to examine the alignment vectors produced by the baseline and proposed methods, a visualization of which we show in Figure FIGREF29 . For this sentence, the outputs of both methods were both identical and correct, but we can see that the proposed method (right) placed sharper attention on the actual source word corresponding to content words in the target sentence. This trend of peakier attention distributions in the proposed method held throughout the corpus, with the per-word entropy of the attention vectors being 3.23 bits for auto-bias, compared with 3.81 bits for attn, indicating that the auto-bias method places more certainty in its attention decisions." ], [ "Finally, we perform a full comparison between the various methods for integrating lexicons into the translation process, with results shown in Table TABREF31 . In general the bias method improves accuracy for the auto and hyb lexicon, but is less effective for the man lexicon. This is likely due to the fact that the manual lexicon, despite having broad coverage, did not sufficiently cover target-domain words (coverage of unique words in the source vocabulary was 35.3% and 9.7% for BTEC and KFTT respectively).", "Interestingly, the trend is reversed for the linear method, with it improving man systems, but causing decreases when using the auto and hyb lexicons. This indicates that the linear method is more suited for cases where the lexicon does not closely match the target domain, and plays a more complementary role. Compared to the log-linear modeling of bias, which strictly enforces constraints imposed by the lexicon distribution BIBREF27 , linear interpolation is intuitively more appropriate for integrating this type of complimentary information.", "On the other hand, the performance of linear interpolation was generally lower than that of the bias method. One potential reason for this is the fact that we use a constant interpolation coefficient that was set fixed in every context. gu16acl have recently developed methods to use the context information from the decoder to calculate the different interpolation coefficients for every decoding step, and it is possible that introducing these methods would improve our results." ], [ "To test whether the proposed method is useful on larger data sets, we also performed follow-up experiments on the larger Japanese-English ASPEC dataset BIBREF28 that consist of 2 million training examples, 63 million tokens, and 81,000 vocabulary size. We gained an improvement in BLEU score from 20.82 using the attn baseline to 22.66 using the auto-bias proposed method. This experiment shows that our method scales to larger datasets." ], [ "From the beginning of work on NMT, unknown words that do not exist in the system vocabulary have been focused on as a weakness of these systems. Early methods to handle these unknown words replaced them with appropriate words in the target vocabulary BIBREF10 , BIBREF29 according to a lexicon similar to the one used in this work. In contrast to our work, these only handle unknown words and do not incorporate information from the lexicon in the learning procedure.", "There have also been other approaches that incorporate models that learn when to copy words as-is into the target language BIBREF30 , BIBREF31 , BIBREF32 . 
These models are similar to the linear approach of § UID10 , but are only applicable to words that can be copied as-is into the target language. In fact, these models can be thought of as a subclass of the proposed approach that uses a lexicon that assigns all its probability to target words that are the same as the source. On the other hand, while we are simply using a static interpolation coefficient INLINEFORM0 , these works generally have a more sophisticated method for choosing the interpolation between the standard and “copy” models. Incorporating these into our linear method is a promising avenue for future work.", "In addition, mi16acl have also recently proposed a similar approach that limits the vocabulary predicted for each batch or sentence. This vocabulary is constructed by considering the original HMM alignments gathered from the training corpus. Basically, this method is a specific version of our bias method that gives some of the vocabulary a bias of negative infinity and all other vocabulary a uniform distribution. Our method improves over this by considering actual translation probabilities, and also considering the attention vector when deciding how to combine these probabilities.", "Finally, there have been a number of recent works that improve the accuracy of low-frequency words using character-based translation models BIBREF33 , BIBREF34 , BIBREF35 . However, luong16acl have found that even when using character-based models, incorporating information about words allows for gains in translation accuracy, and it is likely that our lexicon-based method could result in improvements in these hybrid systems as well." ], [ "In this paper, we have proposed a method to incorporate discrete probabilistic lexicons into NMT systems to solve the difficulties that NMT systems have demonstrated with low-frequency words. As a result, we achieved substantial increases in BLEU (2.0-2.3) and NIST (0.13-0.44) scores, and observed qualitative improvements in the translations of content words.", "For future work, we are interested in conducting experiments on larger-scale translation tasks. We also plan to conduct a subjective evaluation, as we expect that improvements in content word translation are critical to subjective impressions of translation results. Finally, we are also interested in improvements to the linear method where INLINEFORM0 is calculated based on the context, instead of using a fixed value." ], [ "We thank Makoto Morishita and Yusuke Oda for their help in this project. We also thank the faculty members of the AHC lab for their support and suggestions.", "This work was supported by grants from the Ministry of Education, Culture, Sport, Science, and Technology of Japan and in part by JSPS KAKENHI Grant Number 16H05873." ] ] }
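As a supplement to the Manual Lexicons and Hybrid Lexicons sections above, the sketch below spells out the two construction choices in plain Python: a uniform distribution over the translations a handmade dictionary lists for each source word, and a fill-up that keeps automatically learned entries and falls back to the dictionary only for uncovered source words. The toy dictionaries are stand-ins, not the GIZA++-derived or Eijiro lexicons used in the experiments.

```python
def manual_lexicon_probs(dictionary):
    """Uniform p_lex(e | f) over the translations a handmade dictionary lists for f.

    dictionary : {source_word: set of target_words}; such a resource provides no
                 translation probabilities, so a uniform distribution is assumed.
    """
    return {
        f: {e: 1.0 / len(targets) for e in targets}
        for f, targets in dictionary.items() if targets
    }

def hybrid_lexicon_probs(auto, man):
    """Fill-up in the spirit of the hybrid method: keep the automatically learned
    entry for every covered source word, and fall back to the handmade lexicon
    only for source words the automatic lexicon does not cover.

    auto, man : {source_word: {target_word: probability}}
    """
    hyb = dict(auto)
    for f, dist in man.items():
        hyb.setdefault(f, dist)
    return hyb

# Toy usage (words chosen only for illustration):
auto = {"tunisia": {"チュニジア": 0.9, "<unk>": 0.1}}
man = manual_lexicon_probs({"tunisia": {"チュニジア"}, "norway": {"ノルウェー"}})
hyb = hybrid_lexicon_probs(auto, man)   # keeps the learned "tunisia" entry,
                                        # adds the dictionary entry for "norway"
```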
{ "question": [ "What datasets were used?", "What language pairs did they experiment with?" ], "question_id": [ "102a0439739428aac80ac11795e73ce751b93ea1", "d9c26c1bfb3830c9f3dbcccf4c8ecbcd3cb54404" ], "nlp_background": [ "", "" ], "topic_background": [ "", "" ], "paper_read": [ "", "" ], "search_query": [ "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "KFTT BIBREF12 and BTEC BIBREF13" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT BIBREF12 and BTEC BIBREF13 . KFTT is a collection of Wikipedia article about city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. The details of each corpus are depicted in Table TABREF19 ." ], "highlighted_evidence": [ "Dataset: We perform experiments on two widely-used tasks for the English-to-Japanese language pair: KFTT BIBREF12 and BTEC BIBREF13 . KFTT is a collection of Wikipedia article about city of Kyoto and BTEC is a travel conversation corpus. BTEC is an easier translation task than KFTT, because KFTT covers a broader domain, has a larger vocabulary of rare words, and has relatively long sentences. " ] } ], "annotation_id": [ "70743a499fe0d2bd9b9796e4f08db0514a1de8e2" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "English-Japanese" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We perform experiments (§ SECREF5 ) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training." ], "highlighted_evidence": [ "We perform experiments (§ SECREF5 ) on two English-Japanese translation corpora to evaluate the method's utility in improving translation accuracy and reducing the time required for training." ] } ], "annotation_id": [ "0906f74698760f34379f9f5bfb6487422140dc5c" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: An example of a mistake made by NMT on low-frequency content words.", "Table 1: Corpus details.", "Table 2: Accuracies for the baseline attentional NMT (attn) and the proposed bias-based method using the automatic (auto-bias) or hybrid (hyb-bias) dictionaries. Bold indicates a gain over the attn baseline, † indicates a significant increase at p < 0.05, and ∗ indicates p < 0.10. Traditional phrase-based (pbmt) and hierarchical phrase based (hiero) systems are shown for reference.", "Figure 2: Training curves for the baseline attn and the proposed bias method.", "Table 3: Examples where the proposed auto-bias improved over the baseline system attn. Underlines indicate words were mistaken in the baseline output but correct in the proposed model’s output.", "Figure 3: Attention matrices for baseline attn and proposed bias methods. Lighter colors indicate stronger attention between the words, and boxes surrounding words indicate the correct alignments.", "Table 4: A comparison of the bias and linear lexicon integration methods on the automatic, manual, and hybrid lexicons. The first line without lexicon is the traditional attentional NMT." ], "file": [ "1-Figure1-1.png", "5-Table1-1.png", "6-Table2-1.png", "6-Figure2-1.png", "7-Table3-1.png", "7-Figure3-1.png", "8-Table4-1.png" ] }
1911.03243
Crowdsourcing a High-Quality Gold Standard for QA-SRL
Question-answer driven Semantic Role Labeling (QA-SRL) has been proposed as an attractive open and natural form of SRL, easily crowdsourceable for new corpora. Recently, a large-scale QA-SRL corpus and a trained parser were released, accompanied by a densely annotated dataset for evaluation. Trying to replicate the QA-SRL annotation and evaluation scheme for new texts, we observed that the resulting annotations were lacking in quality and coverage, particularly insufficient for creating gold standards for evaluation. In this paper, we present an improved QA-SRL annotation protocol, involving crowd-worker selection and training, followed by data consolidation. Applying this process, we release a new gold evaluation dataset for QA-SRL, yielding more consistent annotations and greater coverage. We believe that our new annotation protocol and gold standard will facilitate future replicable research of natural semantic annotations.
{ "section_name": [ "Introduction", "Background — QA-SRL ::: Specifications", "Background — QA-SRL ::: Corpora", "Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Screening and Training", "Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Annotation", "Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Guidelines Refinements", "Annotation and Evaluation Methods ::: Crowdsourcing Methodology ::: Data & Cost", "Annotation and Evaluation Methods ::: Evaluation Metrics", "Annotation and Evaluation Methods ::: Evaluation Metrics ::: Evaluating Redundant Annotations", "Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA)", "Dataset Quality Analysis ::: Dataset Assessment and Comparison", "Dataset Quality Analysis ::: Agreement with PropBank Data", "Baseline Parser Evaluation", "Baseline Parser Evaluation ::: Error Analysis", "Conclusion", "Supplemental Material ::: The Question Template", "Supplemental Material ::: Annotation Pipeline", "Supplemental Material ::: Redundant Parser Output" ], "paragraphs": [ [ "Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role.", "Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s.", "In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline." ], [ "In QA-SRL, a role question adheres to a 7-slot template, with slots corresponding to a WH-word, the verb, auxiliaries, argument placeholders (SUBJ, OBJ), and prepositions, where some slots are optional BIBREF4 (see appendix for examples). Such question captures the corresponding semantic role with a natural easily understood expression. The set of all non-overlapping answers for the question is then considered as the set of arguments associated with that role. 
This broad question-based definition of roles captures traditional cases of syntactically-linked arguments, but also additional semantic arguments clearly implied by the sentence meaning (see example (2) in Table TABREF4)." ], [ "The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. Each verb in the corpus was annotated by a single QA-generating worker and validated by two others.", "In a reserved part of the corpus (Dense), targeted for parser evaluation, verbs were densely validated with 5 workers, approving questions judged as valid by at least 4/5 validators. Notably, adding validators to the Dense annotation pipeline accounts mostly for precision errors, while role coverage solely relies upon the single generator's set of questions. As both 2015 and 2018 datasets use a single question generator, both struggle with maintaining coverage. Also noteworthy, is that while traditional SRL annotations contain a single authoritative and non-redundant annotation, the 2018 dataset provides the raw annotations of all annotators. These include many overlapping or noisy answers, without settling on consolidation procedures to provide a single gold reference.", "We found that these characteristics of the dataset impede its utility for future development of parsers." ], [ "Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations." ], [ "We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix." ], [ "We refine the previous guidelines by emphasizing several semantic features: correctly using modal verbs and negations in the question, and choosing answers that coincide with a single entity (example 1 in Table TABREF4)." ], [ "We annotated a sample taken from the Dense set on Wikinews and Wikipedia domains, each with 1000 sentences, equally divided between development and test. 
QA generating annotators are paid the same as in fitz2018qasrl, while the consolidator is rewarded 5¢ per verb and 3¢ per question. Per predicate, on average, our cost is 54.2¢, yielding 2.9 roles, compared to reported 2.3 valid roles with an approximated cost of 51¢ per predicate for Dense." ], [ "Evaluation in QA-SRL involves aligning predicted and ground truth argument spans and evaluating role label equivalence. Since detecting question paraphrases is still an open challenge, we propose both unlabeled and labeled evaluation metrics.", "Unlabeled Argument Detection (UA) Inspired by the method presented in BIBREF5, arguments are matched using a span matching criterion of intersection over union $\\ge 0.5$ . To credit each argument only once, we employ maximal bipartite matching between the two sets of arguments, drawing an edge for each pair that passes the above mentioned criterion. The resulting maximal matching determines the true-positive set, while remaining non-aligned arguments become false-positives or false-negatives.", "Labeled Argument Detection (LA) All aligned arguments from the previous step are inspected for label equivalence, similar to the joint evaluation reported in BIBREF5. There may be many correct questions for a role. For example, What was given to someone? and What has been given by someone? both refer to the same semantic role but diverge in grammatical tense, voice, and presence of a syntactical object or subject. Aiming to avoid judging non-equivalent roles as equivalent, we propose Strict-Match to be an equivalence on the following template slots: WH, SUBJ, OBJ, as well as on negation, voice, and modality extracted from the question. Final reported numbers on labelled argument detection rates are based on bipartite aligned arguments passing Strict-Match. We later manually estimate the rate of correct equivalences missed by this conservative method.", "As we will see, our evaluation heuristics, adapted from those in BIBREF5, significantly underestimate agreement between annotations, hence reflecting performance lower bounds. Devising more tight evaluation measures remains a challenge for future research." ], [ "We extend our metric for evaluating manual or automatic redundant annotations, like the Dense dataset or the parser in BIBREF5, which predicts argument spans independently of each other. To that end, we ignore predicted arguments that match ground-truth but are not selected by the bipartite matching due to redundancy. After connecting unmatched predicted arguments that overlap, we count one false positive for every connected component to avoid penalizing precision too harshly when predictions are redundant." ], [ "To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency." ], [ "We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. 
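As a rough illustration of the unlabeled argument detection (UA) metric described above, the following sketch matches spans whose intersection-over-union is at least 0.5, credits each argument once through a maximum-cardinality bipartite matching (Kuhn's augmenting-path algorithm), and reports unlabeled precision, recall, and F1. Spans are assumed to be inclusive (start, end) token offsets, and the redundancy adjustment described earlier is omitted; this is not the authors' evaluation code.

```python
def iou(span_a, span_b):
    """Intersection over union of two token spans given as inclusive (start, end)."""
    inter = max(0, min(span_a[1], span_b[1]) - max(span_a[0], span_b[0]) + 1)
    union = (span_a[1] - span_a[0] + 1) + (span_b[1] - span_b[0] + 1) - inter
    return inter / union

def max_bipartite_match(pred_spans, gold_spans, threshold=0.5):
    """Maximum-cardinality matching between predicted and gold spans, drawing an
    edge whenever IOU >= threshold (Kuhn's augmenting-path algorithm)."""
    edges = [[j for j, g in enumerate(gold_spans) if iou(p, g) >= threshold]
             for p in pred_spans]
    match_gold = {}                        # gold index -> predicted index

    def try_assign(i, visited):
        for j in edges[i]:
            if j in visited:
                continue
            visited.add(j)
            if j not in match_gold or try_assign(match_gold[j], visited):
                match_gold[j] = i
                return True
        return False

    for i in range(len(pred_spans)):
        try_assign(i, set())
    return match_gold                      # aligned pairs count as true positives

def unlabeled_prf(pred_spans, gold_spans):
    tp = len(max_bipartite_match(pred_spans, gold_spans))
    p = tp / len(pred_spans) if pred_spans else 0.0
    r = tp / len(gold_spans) if gold_spans else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```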
To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield.", "Examining disagreements between our gold and Dense, we observe that our workers successfully produced more roles, both implied and explicit. To a lesser extent, they split more arguments into independent answers, as emphasized by our guidelines, an issue which was left under-specified in the previous annotation guidelines." ], [ "It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol.", "The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset." ], [ "To illustrate the effectiveness of our new gold-standard, we use its Wikinews development set to evaluate the currently available parser from BIBREF5. For each predicate, the parser classifies every span for being an argument, independently of the other spans. Unlike many other SRL systems, this policy often produces outputs with redundant arguments (see appendix for examples). Results for 1200 predicates are reported in Table TABREF23, demonstrating reasonable performance along with substantial room for improvement, especially with respect to coverage. As expected, the parser's recall against our gold is substantially lower than the 84.2 recall reported in BIBREF5 against Dense, due to the limited recall of Dense relative to our gold set." ], [ "We sample and evaluate 50 predicates to detect correct argument and paraphrase pairs that are skipped by the IOU and Strict-Match criteria. Based on this inspection, the parser completely misses 23% of the 154 roles present in the gold-data, out of which, 17% are implied. While the parser correctly predicts 82% of non-implied roles, it skips half of the implied ones." ], [ "We introduced a refined crowdsourcing pipeline and a corresponding evaluation methodology for QA-SRL. 
It enabled us to release a new gold standard for evaluations, notably of much higher coverage of core and implied roles than the previous Dense evaluation dataset. We believe that our annotation methodology and dataset would facilitate future research on natural semantic annotations and QA-SRL parsing." ], [ "For completeness, we include several examples with some questions restructured into their 7 template slots in Table TABREF26." ], [ "As described in Section 3, the consolidator receives two sets of QA annotations and merges them according to the guidelines to produce an exhaustive and consistent QA set. See Table TABREF28 for examples." ], [ "As mentioned in the paper body, the Fitzgerald et al. parser generates redundant role questions and answers. The first two rows in Table TABREF30 illustrate different, partly redundant, argument spans for the same question. The next two rows illustrate two paraphrased questions for the same role. Generating such redundant output might complicate downstream use of the parser output as well as evaluation methodology." ] ] }
{ "question": [ "How much more coverage is in the new dataset?", "How was coverage measured?", "How was quality measured?", "How was the corpus obtained?", "How are workers trained?", "What is different in the improved annotation protocol?", "How was the previous dataset annotated?", "How big is the dataset?" ], "question_id": [ "04f72eddb1fc73dd11135a80ca1cf31e9db75578", "f74eaee72cbd727a6dffa1600cdf1208672d713e", "068dbcc117c93fa84c002d3424bafb071575f431", "96526a14820b7debfd6f7c5beeade0a854b93d1a", "32ba4d2d15194e889cbc9aa1d21ff1aa6fa27679", "78c010db6413202b4063dc3fb6e3cc59ec16e7e3", "a69af5937cab861977989efd72ad1677484b5c8c", "8847f2c676193189a0f9c0fe3b86b05b5657b76a" ], "nlp_background": [ "two", "two", "two", "two", "two", "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "278 more annotations", "evidence": [ "The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset." ], "highlighted_evidence": [ "Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. " ] } ], "annotation_id": [ "12360275d5fa216c2ae92edd18d2b5a7e81fa3a9" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "QA pairs per predicate" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The original 2015 QA-SRL dataset BIBREF4 was annotated by non-expert workers after completing a brief training procedure. They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb. In subsequent work, BIBREF5 constructed a large-scale corpus and used it to train a parser. They crowdsourced 133K verbs with 2.0 QA pairs per verb on average. 
Since crowd-workers had no prior training, quality was established using an additional validation step, where workers had to ascertain the validity of the question, but not of its answers. Instead, the validator provided additional answers, independent of the other annotators. Each verb in the corpus was annotated by a single QA-generating worker and validated by two others." ], "highlighted_evidence": [ "They annotated 7.8K verbs, reporting an average of 2.4 QA pairs per predicate. Even though multiple annotators were shown to produce greater coverage, their released dataset was produced using only a single annotator per verb." ] } ], "annotation_id": [ "b8a2a6a6b76fdcdd7530bd3a87e4450e92da67ef" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Inter-annotator agreement, comparison against expert annotation, agreement with PropBank Data annotations.", "evidence": [ "Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA)", "To estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate. Individual worker-vs-worker agreement yields 79.8 F1 over 10 experiments with 150 predicates, indicating high consistency across our annotators, inline with results by other structured semantic annotations (e.g. BIBREF6). Overall consistency of the dataset is assessed by measuring agreement between different consolidated annotations, obtained by disjoint triplets of workers, which achieves F1 of 84.1 over 4 experiments, each with 35 distinct predicates. Notably, consolidation boosts agreement, suggesting it is a necessity for semantic annotation consistency.", "Dataset Quality Analysis ::: Dataset Assessment and Comparison", "We assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. To construct the expert set, we blindly merged the Dense set with our worker annotations and manually corrected them. We further corrected the evaluation decisions, accounting for some automatic evaluation mistakes introduced by the span-matching and question paraphrasing criteria. As seen in Table TABREF19, our gold set yields comparable precision with significantly higher recall, which is in line with our 25% higher yield.", "Dataset Quality Analysis ::: Agreement with PropBank Data", "It is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. In Table TABREF22, we replicate the experiments in BIBREF4 for both our gold set and theirs, over a sample of 200 sentences from Wall Street Journal (agreement evaluation is automatic and the metric is somewhat similar to our UA). We report macro-averaged (over predicates) precision and recall for all roles, including core and adjuncts, while considering the PropBank data as the reference set. Our recall of the PropBank roles is notably high, reconfirming the coverage obtained by our annotation protocol." ], "highlighted_evidence": [ "Dataset Quality Analysis ::: Inter-Annotator Agreement (IAA)\nTo estimate dataset consistency across different annotations, we measure F1 using our UA metric with 5 generators per predicate.", "Dataset Quality Analysis ::: Dataset Assessment and Comparison\nWe assess both our gold standard set and the recent Dense set against an integrated expert annotated sample of 100 predicates. 
", "Dataset Quality Analysis ::: Agreement with PropBank Data\nIt is illuminating to observe the agreement between QA-SRL and PropBank (CoNLL-2009) annotations BIBREF7. " ] } ], "annotation_id": [ "a1ba8313ddccd343aaf9ee6ac69b3c8d7c00cbfa" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " trained annotators BIBREF4", "crowdsourcing BIBREF5 " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s." ], "highlighted_evidence": [ "Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability." ] } ], "annotation_id": [ "090ec541ca7e88cc908f7c23f2dc68b3eee4024b" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "extensive personal feedback" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback. 1 out of 3 participants were selected after exhibiting good performance, tested against expert annotations." ], "highlighted_evidence": [ "Our pool of annotators is selected after several short training rounds, with up to 15 predicates per round, in which they received extensive personal feedback." ] } ], "annotation_id": [ "b1a374fe6485a9c92479db7bca8c839850edbfe0" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "a trained worker consolidates existing annotations ", "evidence": [ "We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix." ], "highlighted_evidence": [ "We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. 
" ] } ], "annotation_id": [ "f2413e07629ffe74ac179dd6085da5781debcb51" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "the annotation machinery of BIBREF5" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. In this consolidation task, the worker validates questions, merges, splits or modifies answers for the same role according to guidelines, and removes redundant roles by picking the more naturally phrased questions. For example, in Table TABREF4 ex. 1, one worker could have chosen “47 people”, while another chose “the councillor”; in this case the consolidator would include both of those answers. In Section SECREF4, we show that this process yields better coverage. For example annotations, please refer to the appendix." ], "highlighted_evidence": [ "We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments. " ] } ], "annotation_id": [ "d7fde438a66548287215deabf15d328d3afbb7b3" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "1593 annotations" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The measured precision with respect to PropBank is low for adjuncts due to the fact that our annotators were capturing many correct arguments not covered in PropBank. To examine this, we analyzed 100 false positive arguments. Only 32 of those were due to wrong or incomplete QA annotations in our gold, while most others were outside of PropBank's scope, capturing either implied arguments or roles not covered in PropBank. Extrapolating from this manual analysis estimates our true precision (on all roles) to be about 91%, which is consistent with the 88% precision figure in Table TABREF19. Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. Overall, the comparison to PropBank reinforces the quality of our gold dataset and shows its better coverage relative to the 2015 dataset." ], "highlighted_evidence": [ "Compared with 2015, our QA-SRL gold yielded 1593 annotations, with 989 core and 604 adjuncts, while theirs yielded 1315 annotations, 979 core and 336 adjuncts. " ] } ], "annotation_id": [ "d6014ab0bc1d512e6e22ae906021cc4c94643c57" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
{ "caption": [ "Table 1: Running examples of QA-SRL annotations; this set is a sample of the possible questions that can be asked. The bar (|) separates multiple selected answers.", "Table 2: Automatic and manually-corrected evaluation of our gold standard and Dense (Fitzgerald et al., 2018) against the expert annotated sample.", "Table 3: Performance analysis against PropBank. Precision, recall and F1 for all roles, core roles, and adjuncts.", "Table 4: Automatic and manual parser evaluation against 500 Wikinews sentences from the gold dataset. Manual is evaluated on 50 sampled predicates.", "Table 6: The consolidation task – A1, A2 refer to the original annotator QAs, C refers to the consolidator selected question and corrected answers.", "Table 7: The parser generates redundant arguments with different paraphrased questions." ], "file": [ "2-Table1-1.png", "3-Table2-1.png", "4-Table3-1.png", "4-Table4-1.png", "5-Table6-1.png", "5-Table7-1.png" ] }
1809.04686
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine Translation
Transferring representations from large supervised tasks to downstream tasks has shown promising results in AI fields such as Computer Vision and Natural Language Processing (NLP). In parallel, the recent progress in Machine Translation (MT) has enabled one to train multilingual Neural MT (NMT) systems that can translate between multiple languages and are also capable of performing zero-shot translation. However, little attention has been paid to leveraging representations learned by a multilingual NMT system to enable zero-shot multilinguality in other NLP tasks. In this paper, we demonstrate a simple framework, a multilingual Encoder-Classifier, for cross-lingual transfer learning by reusing the encoder from a multilingual NMT system and stitching it with a task-specific classifier component. Our proposed model achieves significant improvements in the English setup on three benchmark tasks - Amazon Reviews, SST and SNLI. Further, our system can perform classification in a new language for which no classification data was seen during training, showing that zero-shot classification is possible and remarkably competitive. In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT, classifier complexity, encoder representation power, and model generalization on zero-shot performance. Our results provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks.
{ "section_name": [ "Introduction", "Proposed Method", "Multilingual Representations Using NMT", "Multilingual Encoder-Classifier", "Corpora", "Model and Training Details", "Transfer Learning Results", "Zero-Shot Classification Results", "Analyses", "Conclusion" ], "paragraphs": [ [ "Transfer learning has been shown to work well in Computer Vision where pre-trained components from a model trained on ImageNet BIBREF0 are used to initialize models for other tasks BIBREF1 . In most cases, the other tasks are related to and share architectural components with the ImageNet task, enabling the use of such pre-trained models for feature extraction. With this transfer capability, improvements have been obtained on other image classification datasets, and on other tasks such as object detection, action recognition, image segmentation, etc BIBREF2 . Analogously, we propose a method to transfer a pre-trained component - the multilingual encoder from an NMT system - to other NLP tasks.", "In NLP, initializing word embeddings with pre-trained word representations obtained from Word2Vec BIBREF3 or GloVe BIBREF4 has become a common way of transferring information from large unlabeled data to downstream tasks. Recent work has further shown that we can improve over this approach significantly by considering representations in context, i.e. modeled depending on the sentences that contain them, either by taking the outputs of an encoder in MT BIBREF5 or by obtaining representations from the internal states of a bi-directional Language Model (LM) BIBREF6 . There has also been successful recent work in transferring sentence representations from resource-rich tasks to improve resource-poor tasks BIBREF7 , however, most of the above transfer learning examples have focused on transferring knowledge across tasks for a single language, in English.", "Cross-lingual or multilingual NLP, the task of transferring knowledge from one language to another, serves as a good test bed for evaluating various transfer learning approaches. For cross-lingual NLP, the most widely studied approach is to use multilingual embeddings as features in neural network models. However, research has shown that representations learned in context are more effective BIBREF5 , BIBREF6 ; therefore, we aim at doing better than just using multilingual embeddings in the cross-lingual tasks. Recent progress in multilingual NMT provides a compelling opportunity for obtaining contextualized multilingual representations, as multilingual NMT systems are capable of generalizing to an unseen language direction, i.e. zero-shot translation. There is also evidence that the encoder of a multilingual NMT system learns language agnostic, universal interlingua representations, which can be further exploited BIBREF8 .", "In this paper, we focus on using the representations obtained from a multilingual NMT system to enable cross-lingual transfer learning on downstream NLP tasks. Our contributions are three-fold:" ], [ "We propose an Encoder-Classifier model, where the Encoder, leveraging the representations learned by a multilingual NMT model, converts an input sequence ${\\mathbf {x}}$ into a set of vectors C, and the Classifier predicts a class label $y$ given the encoding of the input sequence, C." 
], [ "Although there has been a large body of work in building multilingual NMT models which can translate between multiple languages at the same time BIBREF29 , BIBREF30 , BIBREF31 , BIBREF8 , zero-shot capabilities of such multilingual representations have only been tested for MT BIBREF8 . We propose a simple yet effective solution - reuse the encoder of a multilingual NMT model to initialize the encoder for other NLP tasks. To be able to achieve promising zero-shot classification performance, we consider two factors: (1) The ability to encode multiple source languages with the same encoder and (2) The ability to learn language agnostic representations of the source sequence. Based on the literature, both requirements can be satisfied by training a multilingual NMT model having a shared encoder BIBREF32 , BIBREF8 , and a separate decoder and attention mechanism for each target language BIBREF30 . After training such a multilingual NMT model, the decoder and the corresponding attention mechanisms (which are target-language specific) are discarded, while the multilingual encoder is used to initialize the encoder of our proposed Encoder-Classifier model." ], [ "In order to leverage pre-trained multilingual representations introduced in Section \"Analyses\" , our encoder strictly follows the structure of a regular Recurrent Neural Network (RNN) based NMT encoder BIBREF33 with a stacked layout BIBREF34 . Given an input sequence ${\\mathbf {x}} = (x_{1}, x_{2}, \\ldots , x_{T_x})$ of length $T_x$ , our encoder contextualizes or encodes the input sequence into a set of vectors C, by first applying a bi-directional RNN BIBREF35 , followed by a stack of uni-directional RNNs. The hidden states of the final layer RNN, $h_i^l$ , form the set C $~=\\lbrace h_i^l \\rbrace _{i=1}^{T_x}$ of context vectors which will be used by the classifier, where $l$ denotes the number of RNN layers in the stacked encoder.", "The task of the classifier is to predict a class label $y$ given the context set C. To ease this classification task given a variable length input set C, a common approach in the literature is to extract a single sentence vector $\\mathbf {q}$ by making use of pooling over time BIBREF36 . Further, to increase the modeling capacity, the pooling operation can be parameterized using pre- and post-pooling networks. Formally, given the context set C, we extract a sentence vector $\\mathbf {q}$ in three steps, using three networks, (1) pre-pooling feed-forward network $f_{pre}$ , (2) pooling network $f_{pool}$ and (3) post-pooling feed-forward network $f_{post}$ , $\n\\mathbf {q} = f_{post}( f_{pool} ( f_{pre} (\\textbf {C}) ) ).\n$ ", " Finally, given the sentence vector $\\mathbf {q}$ , a class label $y$ is predicted by employing a softmax function." ], [ "We evaluate the proposed method on three common NLP tasks: Amazon Reviews, SST and SNLI. We utilize parallel data to train our multilingual NMT system, as detailed below.", "For the MT task, we use the WMT 2014 En $\\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. 
We use this sub-word vocabulary for all of our experiments below.", "The Amazon reviews dataset BIBREF39 is a multilingual sentiment classification dataset, providing data for four languages - English (En), French (Fr), German (De), and Japanese. We use the English and French datasets in our experiments. The dataset contains 6,000 documents in the train and test portions for each language. Each review consists of a category label, a title, a review, and a star rating (5-point scale). We only use the review text in our experiments. Following BIBREF39 , we mapped the reviews with lower scores (1 and 2) to negative examples and the reviews with higher scores (4 and 5) to positive examples, thereby turning it into a binary classification problem. Reviews with score 3 are dropped. We split the training dataset into 10% for development and the rest for training, and we truncate each example and keep the first 200 words in the review. Note that, since the data for each language was obtained by crawling different product pages, the data is not aligned across languages.", "The sentiment classification task proposed in BIBREF9 is also a binary classification problem where each sentence and phrase is associated with either a positive or a negative sentiment. We ignore phrase-level annotations and sentence-level neutral examples in our experiments. The dataset contains 6920, 872, and 1821 examples for training, development and testing, respectively. Since SST does not provide a multilingual test set, we used the public translation engine Google Translate to translate the SST test set to French. Previous work by BIBREF40 has shown that replacing the human translated test set with a synthetic set (obtained by using Google Translate) produces only a small difference of around 1% absolute accuracy on their human-translated French SNLI test set. Therefore, the performance measured on our `pseudo' French SST test set is expected to be a good indicator of zero-shot performance.", "Natural language inference is a task that aims to determine whether a natural language hypothesis $\\mathbf {h}$ can justifiably be inferred from a natural language premise $\\mathbf {p}$ . SNLI BIBREF10 is one of the largest datasets for a natural language inference task in English and contains multiple sentence pairs with a sentence-level entailment label. Each pair of sentences can have one of three labels - entailment, contradiction, and neutral, which are annotated by multiple humans. The dataset contains 550K training, 10K validation, and 10K testing examples. To enable research on multilingual SNLI, BIBREF40 chose a subset of the SNLI test set (1332 sentences) and professionally translated it into four major languages - Arabic, French, Russian, and Spanish. We use the French test set for evaluation in Section \"Zero-Shot Classification Results\" and \"Analyses\" ." ], [ "Here, we first describe the model and training details of the base multilingual NMT model whose encoder is reused in all other tasks. Then we provide details about the task-specific classifiers. For each task, we provide the specifics of $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets that build the task-specific classifier.", "All the models in our experiments are trained using Adam optimizer BIBREF42 with label smoothing BIBREF43 and unless otherwise stated below, layer normalization BIBREF44 is applied to all LSTM gates and feed-forward layer inputs. We apply L2 regularization to the model weights and dropout to layer activations and sub-word embeddings. 
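These training choices map onto standard framework primitives. A hedged PyTorch sketch follows, with every hyper-parameter value invented for illustration (the tuned values are not reported at this level of detail, and dropout is assumed to live inside the model itself):

```python
import torch
import torch.nn as nn

def build_training_objects(model: nn.Module):
    """Adam + L2 regularization (weight decay) + label-smoothed cross-entropy.
    All numeric values below are placeholders, not the paper's tuned settings."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # requires PyTorch >= 1.10
    return optimizer, criterion

def train_step(model, optimizer, criterion, x, y):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x), y)   # model(x): (batch, num_classes) logits
    loss.backward()
    optimizer.step()
    return loss.item()
```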
Hyper-parameters, such as the mixing ratio $\\lambda $ of L2 regularization, dropout rates, label smoothing uncertainty, batch sizes, learning rates of optimizers, and initialization ranges of weights are tuned on the development sets provided for each task separately.", "Our multilingual NMT model consists of a shared multilingual encoder and two decoders, one for English and the other for French. The multilingual encoder uses one bi-directional LSTM, followed by three stacked layers of uni-directional LSTMs in the encoder. Each decoder consists of four stacked LSTM layers, with the first LSTM layers intertwined with additive attention networks BIBREF33 to learn a source-target alignment function. All the uni-directional LSTMs are equipped with residual connections BIBREF45 to ease the optimization, both in the encoder and the decoders. LSTM hidden units and the shared source-target embedding dimensions are set to 512.", "Similar to BIBREF30 , the multilingual NMT model is trained in a multi-task learning setup, where each decoder is augmented with a task-specific loss, minimizing the negative conditional log-likelihood of the target sequence given the source sequence. During training, mini-batches of En $\\rightarrow $ Fr and Fr $\\rightarrow $ En examples are interleaved. We picked the best model based on the best average development set BLEU score on both of the language pairs.", "The Encoder-Classifier model here uses the encoder defined previously. With regard to the classifier, the pre- and post-pooling networks ( $f_{pre}$ , $f_{post}$ ) are both one-layer feed-forward networks that cast the dimension size from 512 to 128 and from 128 to 32, respectively. We used the max-pooling operator for the $f_{pool}$ network to pool the activations over time.", "We extended the proposed Encoder-Classifier model to a multi-source model BIBREF46 since SNLI is an inference task over relations between two input sentences, “premise\" and “hypothesis\". For the two sources, we use two separate encoders, which are initialized with the same pre-trained multilingual NMT encoder, to obtain their representations. Following our notation, the encoder outputs are processed using $f_{pre}$ , $f_{pool}$ and $f_{post}$ nets, again with two separate network blocks. Specifically, $f_{pre}$ consists of a co-attention layer BIBREF47 followed by a two-layer feed-forward neural network with residual connections. We use max pooling over time for $f_{pool}$ and again a two-layer feed-forward neural network with residual connections as $f_{post}$ . After processing the two sentence encodings using the two network blocks, we obtain two vectors representing the premise $\\mathbf {h}_{premise}$ and the hypothesis $\\mathbf {h}_{hypothesis}$ . Following BIBREF48 , we compute two types of relational vectors, $\\mathbf {h}_{-} = |\\mathbf {h}_{premise} - \\mathbf {h}_{hypothesis}|$ and $\\mathbf {h}_{\\times } = \\mathbf {h}_{premise} \\odot \\mathbf {h}_{hypothesis}$ , where $\\odot $ denotes element-wise multiplication between two vectors. The final relation vector is obtained by concatenating $\\mathbf {h}_{-}$ and $\\mathbf {h}_{\\times }$ . For both the “premise\" and “hypothesis\" feed-forward networks we used 512 hidden dimensions.", "For the Amazon Reviews, SST and SNLI tasks, we picked the best model based on the highest development set accuracy." ], [ "In this section, we report our results for the three tasks - Amazon Reviews (English and French), SST, and SNLI. 
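Before turning to the per-task results, note that the relation-vector construction used by the SNLI classifier above reduces to a few tensor operations. The dimensions in this sketch are assumptions for illustration:

```python
import torch

def relation_vector(h_premise: torch.Tensor, h_hypothesis: torch.Tensor) -> torch.Tensor:
    """h_premise, h_hypothesis: (batch, d) sentence vectors from the two
    encoder + pooling branches. Returns (batch, 2*d) features for the softmax layer."""
    h_diff = torch.abs(h_premise - h_hypothesis)   # h_-  = |h_premise - h_hypothesis|
    h_prod = h_premise * h_hypothesis              # h_x  = h_premise (element-wise *) h_hypothesis
    return torch.cat([h_diff, h_prod], dim=-1)

# Tiny usage example with random vectors:
p, h = torch.randn(4, 512), torch.randn(4, 512)
print(relation_vector(p, h).shape)   # torch.Size([4, 1024])
```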
For each task, we first build a baseline system using the proposed Encoder-Classifier architecture described in Section \"Proposed Method\" where the encoder is initialized randomly. Next, we experiment with using the pre-trained multilingual NMT encoder to initialize the system as described in Section \"Analyses\" . Finally, we perform an experiment where we freeze the encoder after initialization and only update the classifier component of the system.", "Table 1 summarizes the accuracy of our proposed system for these three different approaches and the state-of-the-art results on all the tasks. The first row in the table shows the baseline accuracy of our system for all four datasets. The second row shows the result from initializing with a pre-trained multilingual NMT encoder. It can be seen that this provides a significant improvement in accuracy, an average of 4.63%, across all the tasks. This illustrates that the multilingual NMT encoder has successfully learned transferable contextualized representations that are leveraged by the classifier component of our proposed system. These results are in line with the results in BIBREF5 where the authors used the representations from the top NMT encoder layer as an additional input to the task-specific system. However, in our setup we reused all of the layers of the encoder as a single pre-trained component in the task-specific system. The third row shows the results from freezing the pre-trained encoder after initialization and only training the classifier component. For the Amazon English and French tasks, freezing the encoder after initialization significantly improves the performance further. We hypothesize that since the Amazon dataset is a document level classification task, the long input sequences are very different from the short sequences consumed by the NMT system and hence freezing the encoder seems to have a positive effect. This hypothesis is also supported by the SNLI and SST results, which contain sentence-level input sequences, where we did not find any significant difference between freezing and not freezing the encoder." ], [ "In this section, we explore the zero-shot classification task in French for our systems. We assume that we do not have any French training data for all the three tasks and test how well our proposed method can generalize to the unseen French language without any further training. Specifically, we reuse the three proposed systems from Table 1 after being trained only on the English classification task and test the systems on data from an unseen language (e.g. French). A reasonable upper bound to which zero-shot performance should be compared to is bridging - translating a French test text to English and then applying the English classifier on the translated text. If we assume the translation to be perfect, we should expect this approach to perform as well as the English classifier.", "The Amazon Reviews and SNLI tasks have a French test set available, and we evaluate the performance of the bridged and zero-shot systems on each French set. However, the SST dataset does not have a French test set, hence the `pseudo French' test set described in Section UID14 is used to evaluate the zero-shot performance. We use the English accuracy scores from the SST column in Table 1 as a high-quality proxy for the SST bridged system. 
We do this since translating the `pseudo French' back to English will result in two distinct translation steps and hence more errors.", "Table 2 summarizes all of our zero-shot results for French classification on the three tasks. It can be seen that just by using the pre-trained NMT encoder, the zero-shot performance increases drastically from almost random to within 10% of the bridged system. Freezing the encoder further pushes this performance closer to the bridged system. On the Amazon Review task, our zero-shot system is within 2% of the best bridged system. On the SST task, our zero-shot system obtains an accuracy of 83.14% which is within 1.5% of the bridged equivalent (in this case the English system).", "Finally, on SNLI, we compare our best zero-shot system with bilingual and multilingual embedding based methods evaluated on the same French test set in BIBREF40 . As illustrated in Table 3 , our best zero-shot system obtains the highest accuracy of 73.88%. INVERT BIBREF23 uses inverted indexing over a parallel corpus to obtain crosslingual word representations. BiCVM BIBREF25 learns bilingual compositional representations from sentence-aligned parallel corpora. In RANDOM BIBREF24 , bilingual embeddings are trained on top of parallel sentences with randomly shuffled tokens using skip-gram with negative sampling, and RATIO is similar to RANDOM with the one difference being that the tokens in the parallel sentences are not randomly shuffled. Our system significantly outperforms all methods listed in the second column by 10.66% to 15.24% and demonstrates the effectiveness of our proposed approach." ], [ "In this section, we try to analyze why our simple Encoder-Classifier system is effective at zero-shot classification. We perform a series of experiments to better understand this phenomenon. In particular, we study (1) the effect of shared sub-word vocabulary, (2) the amount of multilingual training data to measure the influence of multilinguality, (3) encoder/classifier capacity to measure the influence of representation power, and (4) model behavior on different training phases to assess the relation between generalization performance on English and zero-shot performance on French." ], [ "In this paper, we have demonstrated a simple yet effective approach to perform cross-lingual transfer learning using representations from a multilingual NMT model. Our proposed approach of reusing the encoder from a multilingual NMT system as a pre-trained component provides significant improvements on three downstream tasks. Further, our approach enables us to perform surprisingly competitive zero-shot classification on an unseen language and outperforms cross-lingual embedding base methods. Finally, we end with a series of analyses which shed light on the factors that contribute to the zero-shot phenomenon. We hope that these results showcase the efficacy of multilingual NMT to learn transferable contextualized representations for many downstream tasks." ] ] }
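As a side note on the transfer and freezing experiments described above, reusing the pre-trained multilingual NMT encoder amounts to copying its weights and optionally disabling gradient updates. The sketch below is a hypothetical PyTorch illustration with an invented checkpoint path, not the authors' code:

```python
import torch

def initialize_from_nmt(encoder, checkpoint_path="multilingual_nmt_encoder.pt",
                        freeze=True):
    """Load encoder weights saved from a multilingual NMT model (hypothetical path),
    then optionally freeze them so only the classifier is updated during training."""
    state = torch.load(checkpoint_path, map_location="cpu")
    encoder.load_state_dict(state)
    if freeze:
        for p in encoder.parameters():
            p.requires_grad_(False)
    return encoder
```

With freeze=True, only the classifier's parameters would be handed to the optimizer.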
{ "question": [ "Do the other multilingual baselines make use of the same amount of training data?", "How big is the impact of training data size on the performance of the multilingual encoder?", "What data were they used to train the multilingual encoder?" ], "question_id": [ "05196588320dfb0b9d9be7d64864c43968d329bc", "e930f153c89dfe9cff75b7b15e45cd3d700f4c72", "545ff2f76913866304bfacdb4cc10d31dbbd2f37" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "somewhat", "somewhat", "somewhat" ], "search_query": [ "multilingual classification", "multilingual classification", "multilingual classification" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "09194b62d31ef50c74d81ba330cf0d816da83d95" ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "f57f20aa015b4c9c640ce2729851ea8a9d45c360" ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "WMT 2014 En-Fr parallel corpus", "evidence": [ "For the MT task, we use the WMT 2014 En $\\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. We generated a shared sub-word vocabulary BIBREF37 , BIBREF38 of 32K units from all source and target training data. We use this sub-word vocabulary for all of our experiments below." ], "highlighted_evidence": [ "For the MT task, we use the WMT 2014 En $\\leftrightarrow $ Fr parallel corpus. The dataset contains 36 million En $\\rightarrow $ Fr sentence pairs. We swapped the source and target sentences to obtain parallel data for the Fr $\\rightarrow $ En translation task. We use these two datasets (72 million sentence pairs) to train a single multilingual NMT model to learn both these translation directions simultaneously. " ] } ], "annotation_id": [ "103d0d2040a12a509171cbe3ce33664e976243bb" ], "worker_id": [ "f840a836eee0180d2c976457f8b3052d8e78050c" ] } ] }
{ "caption": [ "Table 1: Transfer learning results of the classification accuracy on all the datasets. Amazon (En) and Amazon (Fr) are the English and French versions of the task, training the models on the data for each language. The state-of-the-art results are cited from Fernndez, Esuli, and Sebastiani (2016) for both Amazon Reviews tasks and McCann et al. (2017) for SST and SNLI.", "Table 2: Zero-Shot performance on all French test sets. ∗Note that we use the English accuracy in the bridged column for SST.", "Table 3: Comparison of our best zero-shot result on the French SNLI test set to other baselines. See text for details.", "Table 4: Results of the control experiment on zero-shot performance on the Amazon German test set.", "Table 5: Effect of machine translation data over our proposed Encoder-Classifier on the SNLI tasks. The results of SNLI (Fr) shows the zero-shot performance of our system.", "Table 6: Zero-shot analyses of classifier network model capacity. The SNLI (Fr) results report the zero-shot performance.", "Figure 1: Correlation between test-loss, test-accuracy (the English SNLI) and zero-shot accuracy (the French test set).", "Table 7: Effect of parameter smoothing on the English SNLI test set and zero-shot performance on the French test set." ], "file": [ "4-Table1-1.png", "5-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "6-Table5-1.png", "7-Table6-1.png", "7-Figure1-1.png", "7-Table7-1.png" ] }
1703.09684
An Analysis of Visual Question Answering Algorithms
In visual question answering (VQA), an algorithm must answer text-based questions about images. While multiple datasets for VQA have been created since late 2014, they all have flaws in both their content and the way algorithms are evaluated on them. As a result, evaluation scores are inflated and predominantly determined by answering easier questions, making it difficult to compare different methods. In this paper, we analyze existing VQA algorithms using a new dataset. It contains over 1.6 million questions organized into 12 different categories. We also introduce questions that are meaningless for a given image to force a VQA system to reason about image content. We propose new evaluation schemes that compensate for over-represented question-types and make it easier to study the strengths and weaknesses of algorithms. We analyze the performance of both baseline and state-of-the-art VQA models, including multi-modal compact bilinear pooling (MCB), neural module networks, and recurrent answering units. Our experiments establish how attention helps certain categories more than others, determine which models work better than others, and explain how simple models (e.g. MLP) can surpass more complex models (MCB) by simply learning to answer large, easy question categories.
{ "section_name": [ "Introduction", "Prior Natural Image VQA Datasets", "Synthetic Datasets that Fight Bias", "TDIUC for Nuanced VQA Analysis", "Importing Questions from Existing Datasets", "Generating Questions using Image Annotations", "Manual Annotation", "Post Processing", "Proposed Evaluation Metric", "Algorithms for VQA", "Experiments", "Easy Question-Types for Today's Methods", "Effects of the Proposed Accuracy Metrics", "Can Algorithms Predict Rare Answers?", "Effects of Including Absurd Questions", "Effects of Balancing Object Presence", "Advantages of Attentive Models", "Compositional and Modular Approaches", "Conclusion", "Additional Details About TDIUC", "Questions using Visual Genome Annotations", "Answer Distribution", "Train and Test Split", "Additional Experimental Results" ], "paragraphs": [ [ "In open-ended visual question answering (VQA) an algorithm must produce answers to arbitrary text-based questions about images BIBREF0 , BIBREF1 . VQA is an exciting computer vision problem that requires a system to be capable of many tasks. Truly solving VQA would be a milestone in artificial intelligence, and would significantly advance human computer interaction. However, VQA datasets must test a wide range of abilities for progress to be adequately measured.", "VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0 . Including DAQUAR, six major VQA datasets have been released, and algorithms have rapidly improved. On the most popular dataset, `The VQA Dataset' BIBREF1 , the best algorithms are now approaching 70% accuracy BIBREF2 (human performance is 83%). While these results are promising, there are critical problems with existing datasets in terms of multiple kinds of biases. Moreover, because existing datasets do not group instances into meaningful categories, it is not easy to compare the abilities of individual algorithms. For example, one method may excel at color questions compared to answering questions requiring spatial reasoning. Because color questions are far more common in the dataset, an algorithm that performs well at spatial reasoning will not be appropriately rewarded for that feat due to the evaluation metrics that are used.", "Contributions: Our paper has four major contributions aimed at better analyzing and comparing VQA algorithms: 1) We create a new VQA benchmark dataset where questions are divided into 12 different categories based on the task they solve; 2) We propose two new evaluation metrics that compensate for forms of dataset bias; 3) We balance the number of yes/no object presence detection questions to assess whether a balanced distribution can help algorithms learn better; and 4) We introduce absurd questions that force an algorithm to determine if a question is valid for a given image. We then use the new dataset to re-train and evaluate both baseline and state-of-the-art VQA algorithms. We found that our proposed approach enables more nuanced comparisons of VQA algorithms, and helps us understand the benefits of specific techniques better. 
In addition, it also allowed us to answer several key questions about VQA algorithms, such as, `Is the generalization capacity of the algorithms hindered by the bias in the dataset?', `Does the use of spatial attention help answer specific question-types?', `How successful are the VQA algorithms in answering less-common questions?', and 'Can the VQA algorithms differentiate between real and absurd questions?'" ], [ "Six datasets for VQA with natural images have been released between 2014–2016: DAQUAR BIBREF0 , COCO-QA BIBREF3 , FM-IQA BIBREF4 , The VQA Dataset BIBREF1 , Visual7W BIBREF5 , and Visual Genome BIBREF6 . FM-IQA needs human judges and has not been widely used, so we do not discuss it further. Table 1 shows statistics for the other datasets. Following others BIBREF7 , BIBREF8 , BIBREF9 , we refer to the portion of The VQA Dataset containing natural images as COCO-VQA. Detailed dataset reviews can be found in BIBREF10 and BIBREF11 .", "All of the aforementioned VQA datasets are biased. DAQUAR and COCO-QA are small and have a limited variety of question-types. Visual Genome, Visual7W, and COCO-VQA are larger, but they suffer from several biases. Bias takes the form of both the kinds of questions asked and the answers that people give for them. For COCO-VQA, a system trained using only question features achieves 50% accuracy BIBREF7 . This suggests that some questions have predictable answers. Without a more nuanced analysis, it is challenging to determine what kinds of questions are more dependent on the image. For datasets made using Mechanical Turk, annotators often ask object recognition questions, e.g., `What is in the image?' or `Is there an elephant in the image?'. Note that in the latter example, annotators rarely ask that kind of question unless the object is in the image. On COCO-VQA, 79% of questions beginning with `Is there a' will have `yes' as their ground truth answer.", "In 2017, the VQA 2.0 BIBREF12 dataset was introduced. In VQA 2.0, the same question is asked for two different images and annotators are instructed to give opposite answers, which helped reduce language bias. However, in addition to language bias, these datasets are also biased in their distribution of different types of questions and the distribution of answers within each question-type. Existing VQA datasets use performance metrics that treat each test instance with equal value (e.g., simple accuracy). While some do compute additional statistics for basic question-types, overall performance is not computed from these sub-scores BIBREF1 , BIBREF3 . This exacerbates the issues with the bias because the question-types that are more likely to be biased are also more common. Questions beginning with `Why' and `Where' are rarely asked by annotators compared to those beginning with `Is' and 'Are'. For example, on COCO-VQA, improving accuracy on `Is/Are' questions by 15% will increase overall accuracy by over 5%, but answering all `Why/Where' questions correctly will increase accuracy by only 4.1% BIBREF10 . Due to the inability of the existing evaluation metrics to properly address these biases, algorithms trained on these datasets learn to exploit these biases, resulting in systems that work poorly when deployed in the real-world.", "For related reasons, major benchmarks released in the last decade do not use simple accuracy for evaluating image recognition and related computer vision tasks, but instead use metrics such as mean-per-class accuracy that compensates for unbalanced categories. 
For example, on Caltech-101 BIBREF13 , even with balanced training data, simple accuracy fails to address the fact that some categories were much easier to classify than others (e.g., faces and planes were easy and also had the largest number of test images). Mean per-class accuracy compensates for this by requiring a system to do well on each category, even when the amount of test instances in categories vary considerably.", "Existing benchmarks do not require reporting accuracies across different question-types. Even when they are reported, the question-types can be too coarse to be useful, e.g., `yes/no', `number' and `other' in COCO-VQA. To improve the analysis of the VQA algorithms, we categorize the questions into meaningful types, calculate the sub-scores, and incorporate them in our evaluation metrics." ], [ "Previous works have studied bias in VQA and proposed countermeasures. In BIBREF14 , the Yin and Yang dataset was created to study the effect of having an equal number of binary (yes/no) questions about cartoon images. They found that answering questions from a balanced dataset was harder. This work is significant, but it was limited to yes/no questions and their approach using cartoon imagery cannot be directly extended to real-world images.", "One of the goals of this paper is to determine what kinds of questions an algorithm can answer easily. In BIBREF15 , the SHAPES dataset was proposed, which has similar objectives. SHAPES is a small dataset, consisting of 64 images that are composed by arranging colored geometric shapes in different spatial orientations. Each image has the same 244 yes/no questions, resulting in 15,616 questions. Although SHAPES serves as an important adjunct evaluation, it alone cannot suffice for testing a VQA algorithm. The major limitation of SHAPES is that all of its images are of 2D shapes, which are not representative of real-world imagery. Along similar lines, Compositional Language and Elementary Visual Reasoning (CLEVR) BIBREF16 also proposes use of 3D rendered geometric objects to study reasoning capacities of a model. CLEVR is larger than SHAPES and makes use of 3D rendered geometric objects. In addition to shape and color, it adds material property to the objects. CLEVR has five types of questions: attribute query, attribute comparison, integer comparison, counting, and existence.", "Both SHAPES and CLEVR were specifically tailored for compositional language approaches BIBREF15 and downplay the importance of visual reasoning. For instance, the CLEVR question, `What size is the cylinder that is left of the brown metal thing that is left of the big sphere?' requires demanding language reasoning capabilities, but only limited visual understanding is needed to parse simple geometric objects. Unlike these three synthetic datasets, our dataset contains natural images and questions. To improve algorithm analysis and comparison, our dataset has more (12) explicitly defined question-types and new evaluation metrics." ], [ "In the past two years, multiple publicly released datasets have spurred the VQA research. However, due to the biases and issues with evaluation metrics, interpreting and comparing the performance of VQA systems can be opaque. We propose a new benchmark dataset that explicitly assigns questions into 12 distinct categories. This enables measuring performance within each category and understand which kind of questions are easy or hard for today's best systems. 
Additionally, we use evaluation metrics that further compensate for the biases. We call the dataset the Task Driven Image Understanding Challenge (TDIUC). The overall statistics and example images of this dataset are shown in Table 1 and Fig. 2 respectively.", "TDIUC has 12 question-types that were chosen to represent both classical computer vision tasks and novel high-level vision tasks which require varying degrees of image understanding and reasoning. The question-types are:", "The number of each question-type in TDIUC is given in Table 2 . The questions come from three sources. First, we imported a subset of questions from COCO-VQA and Visual Genome. Second, we created algorithms that generated questions from COCO's semantic segmentation annotations BIBREF17 , and Visual Genome's objects and attributes annotations BIBREF6 . Third, we used human annotators for certain question-types. In the following sections, we briefly describe each of these methods." ], [ "We imported questions from COCO-VQA and Visual Genome belonging to all question-types except `object utilities and affordances'. We did this by using a large number of templates and regular expressions. For Visual Genome, we imported questions that had one word answers. For COCO-VQA, we imported questions with one or two word answers and in which five or more annotators agreed.", "For color questions, a question would be imported if it contained the word `color' in it and the answer was a commonly used color. Questions were classified as activity or sports recognition questions if the answer was one of nine common sports or one of fifteen common activities and the question contained common verbs describing actions or sports, e.g., playing, throwing, etc. For counting, the question had to begin with `How many' and the answer had to be a small countable integer (1-16). The other categories were determined using regular expressions. For example, a question of the form `Are feeling ?' was classified as sentiment understanding and `What is to the right of/left of/ behind the ?' was classified as positional reasoning. Similarly, `What <OBJECT CATEGORY> is in the image?' and similar templates were used to populate subordinate object recognition questions. This method was used for questions about the season and weather as well, e.g., `What season is this?', `Is this rainy/sunny/cloudy?', or `What is the weather like?' were imported to scene classification." ], [ "Images in the COCO dataset and Visual Genome both have individual regions with semantic knowledge attached to them. We exploit this information to generate new questions using question templates. To introduce variety, we define multiple templates for each question-type and use the annotations to populate them. For example, for counting we use 8 templates, e.g., `How many <objects> are there?', `How many <objects> are in the photo?', etc. Since the COCO and Visual Genome use different annotation formats, we discuss them separately.", "Sport recognition, counting, subordinate object recognition, object presence, scene understanding, positional reasoning, and absurd questions were created from COCO, similar to the scheme used in BIBREF18 . For counting, we count the number of object instances in an image annotation. To minimize ambiguity, this was only done if objects covered an area of at least 2,000 pixels.", "For subordinate object recognition, we create questions that require identifying an object's subordinate-level object classification based on its larger semantic category. 
To do this, we use COCO supercategories, which are semantic concepts encompassing several objects under a common theme, e.g., the supercategory `furniture' contains chair, couch, etc. If the image contains only one type of furniture, then a question similar to `What kind of furniture is in the picture?' is generated because the answer is not ambiguous. Using similar heuristics, we create questions about identifying food, electronic appliances, kitchen appliances, animals, and vehicles.", "For object presence questions, we find images with objects that have an area larger than 2,000 pixels and produce a question similar to `Is there a <object> in the picture?' These questions will have `yes' as an answer. To create negative questions, we ask questions about COCO objects that are not present in an image. To make this harder, we prioritize the creation of questions referring to absent objects that belong to the same supercategory of objects that are present in the image. A street scene is more likely to contain trucks and cars than it is to contain couches and televisions. Therefore, it is more difficult to answer `Is there a truck?' in a street scene than it is to answer `Is there a couch?'", "For sport recognition questions, we detect the presence of specific sports equipment in the annotations and ask questions about the type of sport being played. Images must only contain sports equipment for one particular sport. A similar approach was used to create scene understanding questions. For example, if a toilet and a sink are present in annotations, the room is a bathroom and an appropriate scene recognition question can be created. Additionally, we use the supercategories `indoor' and `outdoor' to ask questions about where a photo was taken.", "For creating positional reasoning questions, we use the relative locations of bounding boxes to create questions similar to `What is to the left/right of <object>?' This can be ambiguous due to overlapping objects, so we employ the following heuristics to eliminate ambiguity: 1) The vertical separation between the two bounding boxes should be within a small threshold; 2) The objects should not overlap by more than the half the length of its counterpart; and 3) The objects should not be horizontally separated by more than a distance threshold, determined by subjectively judging optimal separation to reduce ambiguity. We tried to generate above/below questions, but the results were unreliable.", "Absurd questions test the ability of an algorithm to judge when a question is not answerable based on the image's content. To make these, we make a list of the objects that are absent from a given image, and then we find questions from rest of TDIUC that ask about these absent objects, with the exception of yes/no and counting questions. This includes questions imported from COCO-VQA, auto-generated questions, and manually created questions. We make a list of all possible questions that would be `absurd' for each image and we uniformly sample three questions per image. In effect, we will have same question repeated multiple times throughout the dataset, where it can either be a genuine question or a nonsensical question. The algorithm must answer `Does Not Apply' if the question is absurd.", "Visual Genome's annotations contain region descriptions, relationship graphs, and object boundaries. However, the annotations can be both non-exhaustive and duplicated, which makes using them to automatically make QA pairs difficult. 
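Returning to the COCO-based templates described above, the object-presence questions with supercategory-prioritized negatives require only simple bookkeeping over the annotations. The sketch below is a simplified illustration; the data structures, the tiny supercategory map, and the sampling details are invented for the example and are not the authors' generation code.

```python
import random

# Hypothetical object -> COCO supercategory map (tiny subset for illustration).
SUPERCATEGORY = {"car": "vehicle", "truck": "vehicle", "couch": "furniture",
                 "tv": "electronic", "dog": "animal"}
ALL_OBJECTS = list(SUPERCATEGORY)

def object_presence_questions(objects_with_area, num_negatives=1, min_area=2000):
    """objects_with_area: dict of object name -> pixel area for one image."""
    qa = []
    present = {o for o, a in objects_with_area.items() if a >= min_area}
    for obj in present:                                   # positive questions
        qa.append((f"Is there a {obj} in the picture?", "yes"))
    absent = [o for o in ALL_OBJECTS if o not in objects_with_area]
    # Prefer absent objects sharing a supercategory with a present one (harder negatives).
    present_supers = {SUPERCATEGORY[o] for o in present}
    hard = [o for o in absent if SUPERCATEGORY[o] in present_supers]
    pool = hard if hard else absent
    for obj in random.sample(pool, min(num_negatives, len(pool))):
        qa.append((f"Is there a {obj} in the picture?", "no"))
    return qa

print(object_presence_questions({"car": 5000, "dog": 800}))
```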
We only use Visual Genome to make color and positional reasoning questions. The methods we used are similar to those used with COCO, but additional precautions were needed due to quirks in their annotations. Additional details are provided in the Appendix." ], [ "Creating sentiment understanding and object utility/affordance questions cannot be readily done using templates, so we used manual annotation to create these. Twelve volunteer annotators were trained to generate these questions, and they used a web-based annotation tool that we developed. They were shown random images from COCO and Visual Genome and could also upload images." ], [ "Post processing was performed on questions from all sources. All numbers were converted to text, e.g., 2 became two. All answers were converted to lowercase, and trailing punctuation was stripped. Duplicate questions for the same image were removed. All questions had to have answers that appeared at least twice. The dataset was split into train and test splits with 70% for train and 30% for test." ], [ "One of the main goals of VQA research is to build computer vision systems capable of many tasks, instead of only having expertise at one specific task (e.g., object recognition). For this reason, some have argued that VQA is a kind of Visual Turing Test BIBREF0 . However, if simple accuracy is used for evaluating performance, then it is hard to know if a system succeeds at this goal because some question-types have far more questions than others. In VQA, skewed distributions of question-types are to be expected. If each test question is treated equally, then it is difficult to assess performance on rarer question-types and to compensate for bias. We propose multiple measures to compensate for bias and skewed distributions.", "To compensate for the skewed question-type distribution, we compute accuracy for each of the 12 question-types separately. However, it is also important to have a final unified accuracy metric. Our overall metrics are the arithmetic and harmonic means across all per question-type accuracies, referred to as arithmetic mean-per-type (Arithmetic MPT) accuracy and harmonic mean-per-type accuracy (Harmonic MPT). Unlike the Arithmetic MPT, Harmonic MPT measures the ability of a system to have high scores across all question-types and is skewed towards lowest performing categories.", "We also use normalized metrics that compensate for bias in the form of imbalance in the distribution of answers within each question-type, e.g., the most repeated answer `two' covers over 35% of all the counting-type questions. To do this, we compute the accuracy for each unique answer separately within a question-type and then average them together for the question-type. To compute overall performance, we compute the arithmetic normalized mean per-type (N-MPT) and harmonic N-MPT scores. A large discrepancy between unnormalized and normalized scores suggests an algorithm is not generalizing to rarer answers." ], [ "While there are alternative formulations (e.g., BIBREF4 , BIBREF19 ), the majority of VQA systems formulate it as a classification problem in which the system is given an image and a question, with the answers as categories. BIBREF1 , BIBREF3 , BIBREF2 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF9 , BIBREF27 , BIBREF28 , BIBREF8 , BIBREF19 , BIBREF29 . Almost all systems use CNN features to represent the image and either a recurrent neural network (RNN) or a bag-of-words model for the question. 
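Before reviewing the individual systems, the proposed metrics can be stated compactly in code: per-question-type accuracy, its arithmetic and harmonic means (MPT), and the normalized variants that first average over unique answers within each type (N-MPT). The record format below is an assumption for illustration.

```python
from collections import defaultdict
from statistics import mean, harmonic_mean

def mpt_scores(records):
    """records: iterable of (question_type, ground_truth_answer, predicted_answer)."""
    per_type_hits = defaultdict(list)                              # type -> [0/1, ...]
    per_type_answer_hits = defaultdict(lambda: defaultdict(list))  # type -> answer -> [0/1, ...]
    for qtype, gt, pred in records:
        correct = int(pred == gt)
        per_type_hits[qtype].append(correct)
        per_type_answer_hits[qtype][gt].append(correct)

    # Unnormalized per-type accuracy.
    acc = {t: mean(v) for t, v in per_type_hits.items()}
    # Normalized: average accuracy over unique answers within each type.
    norm_acc = {t: mean(mean(hits) for hits in ans.values())
                for t, ans in per_type_answer_hits.items()}

    return {
        "per_type": acc,
        "arithmetic_mpt": mean(acc.values()),
        "harmonic_mpt": harmonic_mean(acc.values()),
        "arithmetic_n_mpt": mean(norm_acc.values()),
        "harmonic_n_mpt": harmonic_mean(norm_acc.values()),
    }
```

The harmonic means are dominated by the weakest question-types, which is exactly the property that motivates reporting them alongside the arithmetic means.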
We briefly review some of these systems, focusing on the models we compare in experiments. For a more comprehensive review, see BIBREF10 and BIBREF11 .", "Two simple VQA baselines are linear or multi-layer perceptron (MLP) classifiers that take as input the question and image embeddings concatenated to each other BIBREF1 , BIBREF7 , BIBREF8 , where the image features come from the last hidden layer of a CNN. These simple approaches often work well and can be competitive with complex attentive models BIBREF7 , BIBREF8 .", "Spatial attention has been heavily investigated in VQA models BIBREF2 , BIBREF20 , BIBREF28 , BIBREF30 , BIBREF27 , BIBREF24 , BIBREF21 . These systems weigh the visual features based on their relevance to the question, instead of using global features, e.g., from the last hidden layer of a CNN. For example, to answer `What color is the bear?' they aim to emphasize the visual features around the bear and suppress other features.", "The MCB system BIBREF2 won the CVPR-2016 VQA Workshop Challenge. In addition to using spatial attention, it implicitly computes the outer product between the image and question features to ensure that all of their elements interact. Explicitly computing the outer product would be slow and extremely high dimensional, so it is done using an efficient approximation. It uses a long short-term memory (LSTM) network to embed the question.", "The neural module network (NMN) is an especially interesting compositional approach to VQA BIBREF15 , BIBREF31 . The main idea is to compose a series of discrete modules (sub-networks) that can be executed collectively to answer a given question. To achieve this, they use a variety of modules, e.g., the find(x) module outputs a heat map for detecting $x$ . To arrange the modules, the question is first parsed into a concise expression (called an S-expression), e.g., `What is to the right of the car?' is parsed into (what car);(what right);(what (and car right)). Using these expressions, modules are composed into a sequence to answer the query.", "The multi-step recurrent answering units (RAU) model for VQA is another state-of-the-art method BIBREF32 . Each inference step in RAU consists of a complete answering block that takes in an image, a question, and the output from the previous LSTM step. Each of these is part of a larger LSTM network that progressively reasons about the question." ], [ "We trained multiple baseline models as well as state-of-the-art VQA methods on TDIUC. The methods we use are:", "For image features, ResNet-152 BIBREF33 with $448 \\times 448$ images was used for all models.", "QUES and IMG provide information about biases in the dataset. QUES, Q+I, and MLP all use 4800-dimensional skip-thought vectors BIBREF34 to embed the question, as was done in BIBREF7 . For image features, these all use the `pool5' layer of ResNet-152 normalized to unit length. MLP is a 4-layer net with a softmax output layer. The 3 ReLU hidden layers have 6000, 4000, and 2000 units, respectively. During training, dropout (0.3) was used for the hidden layers.", "For MCB, MCB-A, NMN and RAU, we used publicly available code to train them on TDIUC. The experimental setup and hyperparameters were kept unchanged from the default choices in the code, except for upgrading NMN and RAU's visual representations to both use ResNet-152.", "Results on TDIUC for these models are given in Table 3 . 
Accuracy scores are given for each of the 12 question-types in Table 3 , and scores that are normalized by using mean-per-unique-answer are given in appendix Table 5 ." ], [ "By inspecting Table 3 , we can see that some question-types are comparatively easy ( $>90$ %) under MPT: scene recognition, sport recognition, and object presence. High accuracy is also achieved on absurd, which we discuss in greater detail in Sec. \"Effects of Including Absurd Questions\" . Subordinate object recognition is moderately high ( $>80$ %), despite having a large number of unique answers. Accuracy on counting is low across all methods, despite a large amount of training data. For the remaining question-types, more analysis is needed to pinpoint whether the weaker performance is due to lower amounts of training data, bias, or limitations of the models. We next investigate how much of the good performance is due to bias in the answer distribution, which N-MPT compensates for.", "One of our major aims was to compensate for the fact that algorithms can achieve high scores by simply learning to answer more populated and easier question-types. For existing datasets, earlier work has shown that simple baseline methods routinely exceed more complex methods using simple accuracy BIBREF7 , BIBREF8 , BIBREF19 . On TDIUC, MLP surpasses MCB and NMN in terms of simple accuracy, but a closer inspection reveals that MLP's score is highly determined by performance on categories with a large number of examples, such as `absurd' and `object presence.' Using MPT, we find that both NMN and MCB outperform MLP. Inspecting normalized scores for each question-type (Appendix Table 5 ) shows even more pronounced differences, which are also reflected in the arithmetic N-MPT score presented in Table 3 . This indicates that MLP is prone to overfitting. Similar observations can be made for MCB-A compared to RAU, where RAU outperforms MCB-A using simple accuracy, but scores lower on all the metrics designed to compensate for the skewed answer distribution and bias.", "Comparing the unnormalized and normalized metrics can help us determine the generalization capacity of the VQA algorithms for a given question-type. A large difference in these scores suggests that an algorithm is relying on the skewed answer distribution to obtain high scores. We found that for MCB-A, the accuracy on subordinate object recognition drops from 85.54% with unnormalized to 23.22% with normalized, and for scene recognition it drops from 93.06% (unnormalized) to 38.53% (normalized). Both these categories have a heavily skewed answer distribution; the top-25 answers in subordinate object recognition and the top-5 answers in scene recognition cover over 80% of all questions in their respective question-types. This shows that question-types that appear to be easy may only seem so because the algorithms learn the answer statistics. A truly easy question-type will have similar performance for both unnormalized and normalized metrics. For example, sport recognition shows only a 17.39% drop compared to a 30.21% drop for counting, despite counting having the same number of unique answers and far more training data. By comparing the relative drop in performance between the normalized and unnormalized metrics, we can also compare the generalization capability of the algorithms, e.g., for subordinate object recognition, RAU has a higher unnormalized score (86.11%) compared to MCB-A (85.54%). 
However, for normalized scores, MCB-A has significantly higher performance (23.22%) than RAU (21.67%). This shows RAU may be more dependent on the answer distribution. Similar observations can be made for MLP compared to MCB." ], [ "In the previous section, we saw that the VQA models struggle to correctly predict rarer answers. Are the less repeated questions actually harder to answer, or are the algorithms simply biased toward more frequent answers? To study this, we created a subset of TDIUC that consists only of questions whose answers are repeated fewer than 1000 times. We call this dataset TDIUC-Tail, which has 46,590 train and 22,065 test questions. Then, we trained MCB on: 1) the full TDIUC dataset; and 2) TDIUC-Tail. Both versions were evaluated on the validation split of TDIUC-Tail.", "We found that MCB trained only on TDIUC-Tail outperformed MCB trained on all of TDIUC across all question-types (details are in appendix Tables 6 and 7 ). This shows that MCB is capable of learning to correctly predict rarer answers, but it is simply biased towards predicting more common answers to maximize overall accuracy. Using normalized accuracy disincentivizes the VQA algorithms' reliance on the answer statistics, and for deploying a VQA system it may be useful to optimize directly for N-MPT.", "Absurd questions force a VQA system to look at the image to answer the question. In TDIUC, these questions are sampled from the rest of the dataset, and they have a high prior probability of being answered `Does not apply.' This is corroborated by the QUES model, which achieves a high accuracy on absurd; however, for the same questions when they are genuine for an image, it only achieves 6.77% accuracy. Good absurd performance is achieved by sacrificing performance on other categories. A robust VQA system should be able to detect absurd questions without then failing on others. By examining the accuracy on real questions that are identical to absurd questions, we can quantify an algorithm's ability to differentiate the absurd questions from the real ones. We found that simpler models had much lower accuracy on these questions (QUES: 6.77%, Q+I: 34%), compared to more complex models (MCB: 62.44%, MCB-A: 68.83%).", "To further study this, we trained two VQA systems, Q+I and MCB, both with and without absurd. The results are presented in Table 3 . For Q+I trained without absurd questions, accuracies for other categories increase considerably compared to Q+I trained with full TDIUC, especially for question-types that are used to sample absurd questions, e.g., activity recognition (24% when trained with absurd and 48% without). Arithmetic MPT accuracy for the Q+I model that is trained without absurd (57.03%) is also substantially greater than MPT for the model trained with absurd (51.45% for all categories except absurd). This suggests that Q+I is not properly discriminating between absurd and real questions and is biased towards mis-identifying genuine questions as being absurd. In contrast, MCB, a more capable model, produces worse results for absurd, but the version trained without absurd shows much smaller differences than Q+I, which shows that MCB is more capable of identifying absurd questions." ], [ "In Sec. \"Can Algorithms Predict Rare Answers?\" , we saw that a skewed answer distribution can impact generalization. This effect is strong even for simple questions and affects even the most sophisticated algorithms. 
Consider MCB-A when it is trained on both COCO-VQA and Visual Genome, i.e., the winner of the CVPR-2016 VQA Workshop Challenge. When it is evaluated on object presence questions from TDIUC, which contains 50% `yes' and 50% `no' questions, it correctly predicts `yes' answers with 86.3% accuracy, but only 11.2% for questions with `no' as an answer. However, after training it on TDIUC, MCB-A is able to achieve 95.02% for `yes' and 92.26% for `no.' MCB-A performed poorly by learning the biases in the COCO-VQA dataset, but it is capable of performing well when the dataset is unbiased. Similar observations about balancing yes/no questions were made in BIBREF14 . Datasets could balance simple categories like object presence, but extending the same idea to all other categories is a challenging task and undermines the natural statistics of the real-world. Adopting mean-per-class and normalized accuracy metrics can help compensate for this problem." ], [ "By breaking questions into types, we can assess which types benefit the most from attention. We do this by comparing the MCB model with and without attention, i.e., MCB and MCB-A. As seen in Table 3 , attention helped improve results on several question categories. The most pronounced increases are for color recognition, attribute recognition, absurd, and counting. All of these question-types require the algorithm to detect specified object(s) (or lack thereof) to be answered correctly. MCB-A computes attention using local features from different spatial locations, instead of global image features. This aids in localizing individual objects. The attention mechanism learns the relative importance of these features. RAU also utilizes spatial attention and shows similar increments." ], [ "NMN, and, to a lesser extent, RAU propose compositional approaches for VQA. For COCO-VQA, NMN has performed worse than some MLP models BIBREF7 using simple accuracy. We hoped that it would achieve better performance than other models for questions that require logically analyzing an image in a step-by-step manner, e.g., positional reasoning. However, while NMN did perform better than MLP using MPT and N-MPT metric, we did not see any substantial benefits in specific question-types. This may be because NMN is limited by the quality of the `S-expression' parser, which produces incorrect or misleading parses in many cases. For example, `What color is the jacket of the man on the far left?' is parsed as (color jacket);(color leave);(color (and jacket leave)). This expression not only fails to parse `the man', which is a crucial element needed to correctly answer the question, but also wrongly interprets `left' as past tense of leave.", "RAU performs inference over multiple hops, and because each hop contains a complete VQA system, it can learn to solve different tasks in each step. Since it is trained end-to-end, it does not need to rely on rigid question parses. It showed very good performance in detecting absurd questions and also performed well on other categories." ], [ "We introduced TDIUC, a VQA dataset that consists of 12 explicitly defined question-types, including absurd questions, and we used it to perform a rigorous analysis of recent VQA algorithms. We proposed new evaluation metrics to compensate for biases in VQA datasets. Results show that the absurd questions and the new evaluation metrics enable a deeper understanding of VQA algorithm behavior." 
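To make the per-question-type metrics discussed above concrete, the following is a minimal sketch of how arithmetic/harmonic MPT and the normalized N-MPT variant could be computed from per-question predictions. The record layout (fields `qtype`, `answer`, `pred`) and the function name are assumptions for illustration, not the released TDIUC evaluation code.

```python
from collections import defaultdict
from statistics import harmonic_mean

def mpt_scores(records):
    """records: iterable of dicts with 'qtype', 'answer', 'pred' (assumed layout).
    Returns (arithmetic MPT, harmonic MPT, arithmetic N-MPT)."""
    per_type = defaultdict(list)                               # accuracy samples per question-type
    per_type_answer = defaultdict(lambda: defaultdict(list))   # samples per unique answer within a type
    for r in records:
        correct = float(r["pred"] == r["answer"])
        per_type[r["qtype"]].append(correct)
        per_type_answer[r["qtype"]][r["answer"]].append(correct)

    # Unnormalized accuracy per question-type.
    acc = {t: sum(v) / len(v) for t, v in per_type.items()}
    # Normalized: average accuracy over unique answers within each type,
    # so frequent answers no longer dominate the score.
    norm_acc = {
        t: sum(sum(v) / len(v) for v in by_ans.values()) / len(by_ans)
        for t, by_ans in per_type_answer.items()
    }
    arithmetic_mpt = sum(acc.values()) / len(acc)
    harmonic_mpt = harmonic_mean(list(acc.values()))
    arithmetic_nmpt = sum(norm_acc.values()) / len(norm_acc)
    return arithmetic_mpt, harmonic_mpt, arithmetic_nmpt
```

The harmonic mean penalizes a model that does very poorly on any single question-type, which is why it separates the models more sharply than simple accuracy in the results above.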
], [ "In this section, we will provide additional details about the TDIUC dataset creation and additional statistics that were omitted from the main paper due to inadequate space." ], [ "As mentioned in the main text, Visual Genome's annotations are both non-exhaustive and duplicated. This makes using them to automatically make question-answer (QA) pairs difficult. Due to these issues, we only used them to make two types of questions: Color Attributes and Positional Reasoning. Moreover, a number of restrictions needed to be placed, which are outlined below.", "For making Color Attribute questions, we make use of the attributes metadata in the Visual Genome annotations to populate the template `What color is the <object>?' However, Visual Genome metadata can contain several color attributes for the same object as well as different names for the same object. Since the annotators type the name of the object manually rather than choosing from a predetermined set of objects, the same object can be referred by different names, e.g., `xbox controller,' `game controller,' `joystick,' and `controller' can all refer to same object in an image. The object name is sometimes also accompanied by its color, e.g., `white horse' instead of `horse' which makes asking the Color Attribute question `What color is the white horse?' pointless. One potential solution is to use the wordnet `synset' which accompanies every object annotation in the Visual Genome annotations. Synsets are used to group different variations of the common objects names under a single noun from wordnet. However, we found that the synset matching was erroneous in numerous instances, where the object category was misrepresented by the given synset. For example, A `controller' is matched with synset `accountant' even when the `controller' is referring to a game controller. Similarly, a `cd' is matched with synset of `cadmium.' To avoid these problems we made a set of stringent requirements before making questions:", "The chosen object should only have a single attribute that belongs to a set of commonly used colors.", "The chosen object name or synset must be one of the 91 common objects in the MS-COCO annotations.", "There must be only one instance of the chosen object.", "Using these criteria, we found that we could safely ask the question of the form `What color is the <object>?'.", "Similarly, for making Positional Reasoning questions, we used the relationships metadata in the Visual Genome annotations. The relationships metadata connects two objects by a relationship phrase. Many of these relationships describe the positions of the two objects, e.g., A is `on right' of B, where `on right' is one of the example relationship clause from Visual Genome, with the object A as the subject and the object B as the object. This can be used to generate Positional Reasoning questions. Again, we take several measures to avoid ambiguity. First, we only use objects that appear once in the image because `What is to the left of A' can be ambiguous if there are two instances of the object A. However, since visual genome annotations are non-exhaustive, there may still (rarely) be more than one instance of object A that was not annotated. 
To disambiguate such cases, we use the attributes metadata to further specify the object wherever possible, e.g., instead of asking `What is to the right of the bus?', we ask `What is to the right of the green bus?'", "Due to these stringent criteria, we could only create a small number of questions using Visual Genome annotations compared to other sources. The number of questions produced via each source is shown in Table 4 ." ], [ "Figure 3 shows the answer distribution for the different question-types. We can see that some categories, such as counting, scene recognition and sentiment understanding, have a very large share of questions represented by only a few top answers. In such cases, the performance of a VQA algorithm can be inflated unless the evaluation metric compensates for this bias. In other cases, such as positional reasoning and object utility and affordances, the answers are much more varied, with the top-50 answers covering less than 60% of all answers.", "We have a completely balanced answer distribution for object presence questions, where exactly 50% of the questions are answered `yes' and the remaining 50% are answered `no'. For other categories, we have tried to design our question generation algorithms so that a single answer does not have a significant majority within a question type. For example, while scene understanding has its top-4 answers covering over 85% of all the questions, there are roughly as many `no' questions (most common answer) as there are `yes' questions (second most-common answer). Similar distributions can be seen for counting, where `two' (most-common answer) is repeated almost as many times as `one' (second most-common answer). By having at least the top-2 answers split almost equally, we remove the incentive for an algorithm to perform well using simple mode guessing, even when using the simple accuracy metric." ], [ "In the paper, we mentioned that we split the entire collection into 70% train and 30% test/validation. To do this, we not only need to have a roughly equal distribution of question types and answers, but also need to make sure that the multiple questions for the same image do not end up in two different splits, i.e., the same image cannot occur in both the train and the test partitions. So, we took the following measures to split the questions into train-test splits. First, we split all the images into three separate clusters.", "Manually uploaded images, which includes all the images manually uploaded by our volunteer annotators.", "Images from the COCO dataset, including all the images for questions generated from COCO annotations and those imported from the COCO-VQA dataset. In addition, a large number of Visual Genome questions also refer to COCO images. So, some questions that are generated and imported from Visual Genome are also included in this cluster.", "Images exclusively in the Visual Genome dataset, which includes images for a part of the questions imported from Visual Genome and those generated using that dataset.", "We follow simple rules to assign the images in each of these clusters to either the train or the test split.", "All the questions belonging to images coming from the `train2014' split of COCO images are assigned to the train split, and all the questions belonging to images from the `val2014' split are assigned to the test split.", "For manual and Visual Genome images, we randomly split 70% of the images to train and the rest to test."
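A minimal sketch of the image-grouped split described above follows; the field names (`image_id`, `image_source`) and the source labels are illustrative assumptions rather than the dataset's actual schema.

```python
import random

def split_questions(questions, train_ratio=0.7, seed=0):
    """Assign every question to 'train' or 'test' so that no image is shared
    across splits. Field names are assumed for illustration."""
    rng = random.Random(seed)
    assignment = {}  # image_id -> 'train' or 'test'
    for q in questions:
        img, src = q["image_id"], q["image_source"]
        if img in assignment:
            continue  # keep all questions for an image on the same side
        if src == "coco_train2014":
            assignment[img] = "train"
        elif src == "coco_val2014":
            assignment[img] = "test"
        else:  # manually uploaded or Visual Genome-only images
            assignment[img] = "train" if rng.random() < train_ratio else "test"
    train = [q for q in questions if assignment[q["image_id"]] == "train"]
    test = [q for q in questions if assignment[q["image_id"]] == "test"]
    return train, test
```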
], [ "In this section, we present additional experimental results that were omitted from the main paper due to inadequate space. First, the detailed normalized scores for each of the question-types is presented in Table 3 . To compute these scores, the accuracy for each unique answer is calculated separately within a question-type and averaged. Second, we present the results from the experiment in section \"Can Algorithms Predict Rare Answers?\" in table 6 (Unnormalized) and table 7 (Normalized). The results are evaluated on TDIUC-Tail, which is a subset of TDIUC that only consists of questions that have answers repeated less than 1000 times (uncommon answers). Note that the TDIUC-Tail excludes the absurd and the object presence question-types, as they do not contain any questions with uncommon answers. The algorithms are identical in both Table 6 and 7 and are named as follows:" ] ] }
{ "question": [ "From when are many VQA datasets collected?" ], "question_id": [ "cf93a209c8001ffb4ef505d306b6ced5936c6b63" ], "nlp_background": [ "five" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "Question Answering" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "late 2014", "evidence": [ "VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0 . Including DAQUAR, six major VQA datasets have been released, and algorithms have rapidly improved. On the most popular dataset, `The VQA Dataset' BIBREF1 , the best algorithms are now approaching 70% accuracy BIBREF2 (human performance is 83%). While these results are promising, there are critical problems with existing datasets in terms of multiple kinds of biases. Moreover, because existing datasets do not group instances into meaningful categories, it is not easy to compare the abilities of individual algorithms. For example, one method may excel at color questions compared to answering questions requiring spatial reasoning. Because color questions are far more common in the dataset, an algorithm that performs well at spatial reasoning will not be appropriately rewarded for that feat due to the evaluation metrics that are used." ], "highlighted_evidence": [ "VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0" ] } ], "annotation_id": [ "0953d83d785f0b7533669425168108b142cdd82b" ], "worker_id": [ "2a18a3656984d04249f100633e4c1003417a2255" ] } ] }
{ "caption": [ "Figure 1: A good VQA benchmark tests a wide range of computer vision tasks in an unbiased manner. In this paper, we propose a new dataset with 12 distinct tasks and evaluation metrics that compensate for bias, so that the strengths and limitations of algorithms can be better measured.", "Figure 2: Images from TDIUC and their corresponding question-answer pairs.", "Table 1: Comparison of previous natural image VQA datasets with TDIUC. For COCO-VQA, the explicitly defined number of question-types is used, but a much finer granularity would be possible if they were individually classified. MC/OE refers to whether open-ended or multiple-choice evaluation is used.", "Table 2: The number of questions per type in TDIUC.", "Table 3: Results for all VQA models. The unnormalized accuracy for each question-type is shown. Overall performance is reported using 5 metrics. Overall (Arithmetic MPT) and Overall (Harmonic MPT) are averages of these sub-scores, providing a clearer picture of performance across question-types than simple accuracy. Overall Arithmetic N-MPT and Harmonic NMPT normalize across unique answers to better analyze the impact of answer imbalance (see Sec. 4). Normalized scores for individual question-types are presented in the appendix table 5. * denotes training without absurd questions.", "Table 4: The number of questions produced via each source.", "Figure 3: Answer distributions for the answers for each of the question-types. This shows the relative frequency of each unique answer within a question-type, so for some question-types, e.g., counting, even slim bars contain a fairly large number of instances with that answer. Similarly, for less populated question-types such as utility and affordances, even large bars represents only a small number of training examples.", "Table 5: Results for all the VQA models. The normalized accuracy for each question-type is shown here. The models are identical to the ones in Table 3 in main paper. Overall performance is, again, reported using all 5 metrics. Overall (Arithmetic N-MPT) and Overall (Harmonic N-MPT) are averages of the reported sub-scores. Similarly, Arithmetic MPT and Harmonic MPT are averages of sub-scores reported in Table 3 in the main paper. * denotes training without absurd questions.", "Table 6: Results on TDIUC-Tail for MCB model when trained on full TDIUC dataset vs when trained only on TDIUC-Tail. The un-normalized scores for each questiontypes and five different overall scores are shown here", "Table 7: Results on TDIUC-Tail for MCB model when trained on full TDIUC dataset vs when trained only on TDIUC-Tail. The normalized scores for each questiontypes and five different overall scores are shown here" ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "4-Table1-1.png", "5-Table2-1.png", "7-Table3-1.png", "10-Table4-1.png", "11-Figure3-1.png", "12-Table5-1.png", "12-Table6-1.png", "12-Table7-1.png" ] }
1911.11744
Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration
In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn is used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to direct a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability.
{ "section_name": [ "Introduction", "Introduction ::: Problem Statement:", "Background", "Multimodal Policy Generation via Imitation", "Results", "Conclusion and Future Work" ], "paragraphs": [ [ "A significant challenge when designing robots to operate in the real world lies in the generation of control policies that can adapt to changing environments. Programming such policies is a labor and time-consuming process which requires substantial technical expertise. Imitation learning BIBREF0, is an appealing methodology that aims at overcoming this challenge – instead of complex programming, the user only provides a set of demonstrations of the intended behavior. These demonstrations are consequently distilled into a robot control policy by learning appropriate parameter settings of the controller. Popular approaches to imitation, such as Dynamic Motor Primitives (DMPs) BIBREF1 or Gaussian Mixture Regression (GMR) BIBREF2 largely focus on motion as the sole input and output modality, i.e., joint angles, forces or positions. Critical semantic and visual information regarding the task, such as the appearance of the target object or the type of task performed, is not taken into account during training and reproduction. The result is often a limited generalization capability which largely revolves around adaptation to changes in the object position. While imitation learning has been successfully applied to a wide range of tasks including table-tennis BIBREF3, locomotion BIBREF4, and human-robot interaction BIBREF5 an important question is how to incorporate language and vision into a differentiable end-to-end system for complex robot control.", "In this paper, we present an imitation learning approach that combines language, vision, and motion in order to synthesize natural language-conditioned control policies that have strong generalization capabilities while also capturing the semantics of the task. We argue that such a multi-modal teaching approach enables robots to acquire complex policies that generalize to a wide variety of environmental conditions based on descriptions of the intended task. In turn, the network produces control parameters for a lower-level control policy that can be run on a robot to synthesize the corresponding motion. The hierarchical nature of our approach, i.e., a high-level policy generating the parameters of a lower-level policy, allows for generalization of the trained task to a variety of spatial, visual and contextual changes." ], [ "In order to outline our problem statement, we contrast our approach to Imitation learning BIBREF0 which considers the problem of learning a policy $\\mathbf {\\pi }$ from a given set of demonstrations ${\\cal D}=\\lbrace \\mathbf {d}^0,.., \\mathbf {d}^m\\rbrace $. Each demonstration spans a time horizon $T$ and contains information about the robot's states and actions, e.g., demonstrated sensor values and control inputs at each time step. Robot states at each time step within a demonstration are denoted by $\\mathbf {x}_t$. In contrast to other imitation learning approaches, we assume that we have access to the raw camera images of the robot $_t$ at teach time step, as well as access to a verbal description of the task in natural language. This description may provide critical information about the context, goals or objects involved in the task and is denoted as $\\mathbf {s}$. 
Given this information, our overall objective is to learn a policy $\mathbf {\pi }$ which imitates the demonstrated behavior, while also capturing semantics and important visual features. After training, we can provide the policy $\mathbf {\pi }(\mathbf {s},)$ with a different, new state of the robot and a new verbal description (instruction) as parameters. The policy will then generate the control signals needed to perform the task, taking the new visual input and semantic context into account." ], [ "A fundamental challenge in imitation learning is the extraction of policies that not only cover the trained scenarios, but also generalize to a wide range of other situations. A large body of literature has addressed the problem of learning robot motor skills by imitation BIBREF6, learning functional BIBREF1 or probabilistic BIBREF7 representations. However, in most of these approaches, the state vector has to be carefully designed in order to ensure that all necessary information for adaptation is available. Neural approaches to imitation learning BIBREF8 circumvent this problem by learning suitable feature representations from rich data sources for each task or for a sequence of tasks BIBREF9, BIBREF10, BIBREF11. Many of these approaches assume that either a sufficiently large set of motion primitives is already available or that a taxonomy of the task is available, i.e., semantics and motions are not trained in conjunction. The importance of maintaining this connection has been shown in BIBREF12, allowing the robot to adapt to untrained variations of the same task. To learn entirely new tasks, meta-learning aims at learning policy parameters that can quickly be fine-tuned to new tasks BIBREF13. While very successful in dealing with visual and spatial information, these approaches do not incorporate any semantic or linguistic component into the learning process. Language has been shown to successfully generate task descriptions in BIBREF14, and several works have investigated the idea of combining natural language and imitation learning: BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19. However, most approaches do not utilize the inherent connection between semantic task descriptions and low-level motions to train a model.", "Our work is most closely related to the framework introduced in BIBREF20, which also focuses on the symbol grounding problem. More specifically, the work in BIBREF20 aims at mapping perceptual features in the external world to constituents in an expert-provided natural language instruction. Our work approaches the problem of generating dynamic robot policies by fundamentally combining language, vision, and motion control into a single differentiable neural network that can learn the cross-modal relationships found in the data with minimal human feature engineering. Unlike previous work, our proposed model is capable of directly generating complex low-level control policies from language and vision that allow the robot to reassemble motions shown during training." ], [ "", "We motivate our approach with a simple example: consider a binning task in which a 6 DOF robot has to drop an object into one of several differently shaped and colored bowls on a table. To teach this task, the human demonstrator provides not only a kinesthetic demonstration of the desired trajectory, but also a verbal command, e.g., “Move towards the blue bowl” to the robot.
In this example, the trajectory generation would have to be conditioned on the blue bowl's position, which, however, has to be extracted from visual sensing. Our approach automatically detects and extracts these relationships between the vision, language, and motion modalities in order to make the best use of contextual information for better generalization and disambiguation.", "Figure FIGREF2 (left) provides an overview of our method. Our goal is to train a deep neural network that can take as input a task description $\mathbf {s}$ and an image $$ and consequently generate robot controls. In the remainder of this paper, we will refer to our network as the mpn. Rather than immediately producing control signals, the mpn will generate the parameters for a lower-level controller. This distinction allows us to build upon well-established control schemes in robotics and optimal control. In our specific case, we use the widely used Dynamic Motor Primitives BIBREF1 as a lower-level controller for control signal generation.", "In essence, our network can be divided into three parts. The first part, the semantic network, is used to create a task embedding $$ from the input sentence $$ and environment image $$. In a first step, the sentence $$ is tokenized and converted into a sentence matrix ${W} \in \mathbb {R}^{l_s \times l_w} = f_W()$ by utilizing pre-trained GloVe word embeddings BIBREF21, where $l_s$ is the padded, fixed-size length of the sentence and $l_w$ is the size of the GloVe word vectors. To extract the relationships between the words, we use multiple CNNs $_s = f_L()$ with filter size $n \times l_w$ for varying $n$, representing different $n$-gram sizes BIBREF22. The final representation is built by flattening the individual $n$-grams with max-pooling of size $(l_s - n_i + 1)\times l_w$ and concatenating the results before using a single perceptron to detect relationships between different $n$-grams. In order to combine the sentence embedding $_s$ with the image, it is concatenated as a fourth channel to the input image $$. The task embedding $$ is produced with three blocks of convolutional layers, each composed of two regular convolutions followed by a residual convolution BIBREF23.", "In the second part, the policy translation network is used to generate the task parameters $\Theta \in \mathcal {R}^{o \times b}$ and $\in \mathcal {R}^{o}$ given a task embedding $$ where $o$ is the number of output dimensions and $b$ the number of basis functions in the DMP:", "where $f_G()$ and $f_H()$ are multilayer perceptrons that use $$ after it has been processed by a single perceptron with weight $_G$ and bias $_G$. These parameters are then used in the third part of the network, which is a DMP BIBREF0, allowing us to leverage a large body of research regarding their behavior and stability, while also allowing other extensions of DMPs BIBREF5, BIBREF24, BIBREF25 to be incorporated into our framework." ], [ "We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling 20 different objects.
Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity.", "To train our model, we generated a dataset of 20,000 demonstrated 7 DOF trajectories (6 robot joints and 1 gripper dimension) in our simulated environment, together with a sentence generator capable of creating natural task descriptions for each scenario. In order to create the language generator, we conducted a human-subject study to collect sentence templates of a placement task as well as common words and synonyms for each of the used features. By utilising these data, we are able to generate over 180,000 unique sentences, depending on the generated scenario.", "The generated parameters of the low-level DMP controller – the weights and goal position – must be sufficiently accurate in order to successfully deliver the object to the specified bin. On the right side of Figure FIGREF4, the generated weights for the DMP are shown for two tasks in which the target is close to and far away from the robot, located at different sides of the table, indicating the robot's ability to generate differently shaped trajectories. The accuracy of the goal position can be seen in Figure FIGREF4 (left), which shows another aspect of our approach: by using stochastic forward passes BIBREF26, the model can return an estimate of the validity of a requested task in addition to the predicted goal configuration. The figure shows that the goal position of a red bowl has a relatively small distribution independently of the used sentence or location on the table, whereas an invalid target (green) produces a significantly larger distribution, indicating that the requested task may be invalid.", "To test our model, we generated 500 new scenarios testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is within the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the object's color or shape to uniquely identify it allows the robot to successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with a target error well below 5cm, given that the target was correctly identified." ], [ "In this work, we presented an imitation learning approach combining language, vision, and motion. A neural network architecture called Multimodal Policy Network was introduced which is able to learn the cross-modal relationships in the training data and achieve high generalization and disambiguation performance as a result. Our experiments showed that the model is able to generalize towards different locations and sentences while maintaining a high success rate of delivering an object to a desired bowl. In addition, we discussed an extension of the method that allows us to obtain uncertainty information from the model by utilizing stochastic network outputs to get a distribution over the belief.", "The modularity of our architecture allows us to easily exchange parts of the network.
This can be utilized for transfer learning between different tasks in the semantic network, for transfer between different robots by moving the policy translation network to a new robot in simulation, or to bridge the gap between simulation and reality." ] ] }
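As a rough illustration of how the parameters produced by the policy translation network drive the low-level controller, the sketch below rolls out a one-dimensional discrete DMP from a weight vector and goal. It assumes a standard Ijspeert-style formulation with illustrative gains and basis-function placement; the function name, gains, and widths heuristic are not the paper's exact controller settings.

```python
import numpy as np

def rollout_dmp(w, y0, g, duration=1.0, dt=0.01, alpha=25.0, beta=6.25, alpha_x=8.0):
    """Integrate one DMP dimension given basis weights w, start y0, and goal g.
    Gains and basis placement are illustrative assumptions."""
    b = len(w)
    c = np.exp(-alpha_x * np.linspace(0, 1, b))        # basis centers along the decaying phase
    h = b ** 1.5 / c                                   # widths (a common heuristic)
    x, y, dy = 1.0, y0, 0.0
    trajectory = []
    for _ in range(int(duration / dt)):
        psi = np.exp(-h * (x - c) ** 2)                # radial basis activations
        forcing = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        ddy = alpha * (beta * (g - y) - dy) + forcing  # transformation system
        dy += ddy * dt
        y += dy * dt
        x += -alpha_x * x * dt                         # canonical system decay
        trajectory.append(y)
    return np.array(trajectory)
```

Because the forcing term is gated by the decaying phase variable, the rollout converges to the predicted goal regardless of the learned weights, which is what makes the generated goal position and weights separately interpretable.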
{ "question": [ "What is task success rate achieved? ", "What simulations are performed by the authors to validate their approach?", "Does proposed end-to-end approach learn in reinforcement or supervised learning manner?" ], "question_id": [ "fb5ce11bfd74e9d7c322444b006a27f2ff32a0cf", "1e2ffa065b640e912d6ed299ff713a12195e12c4", "28b2a20779a78a34fb228333dc4b93fd572fda15" ], "nlp_background": [ "zero", "zero", "zero" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "vision", "vision", "vision" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "96-97.6% using the objects color or shape and 79% using shape alone", "evidence": [ "To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is withing the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low image resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with an target error well below 5cm, given the target was correctly identified." ], "highlighted_evidence": [ "Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases." ] } ], "annotation_id": [ "098e4ae256790d70e0f02709f0be0779e99b3770" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity." ], "highlighted_evidence": [ "We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. 
Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity." ] } ], "annotation_id": [ "2ca85ad9225e9b23024ec88341907e642add1d14" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "supervised learning", "evidence": [ "To train our model, we generated a dataset of 20,000 demonstrated 7 DOF trajectories (6 robot joints and 1 gripper dimension) in our simulated environment together with a sentence generator capable of creating natural task descriptions for each scenario. In order to create the language generator, we conducted an human-subject study to collect sentence templates of a placement task as well as common words and synonyms for each of the used features. By utilising these data, we are able to generate over 180,000 unique sentences, depending on the generated scenario.", "To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. A task is considered to be successfully completed when the cube is withing the boundaries of the targeted bowl. Bowls have a bounding box of 12.5 and 17.5cm edge length for the small and large variant, respectively. Our experiments showed that using the objects color or shape to uniquely identify an object allows the robot successfully complete the binning task in 97.6% and 96.0% of the cases. However, using the shape alone as a unique identifier, the task could only be completed in 79.0% of the cases. We suspect that the loss of accuracy is due to the low image resolution of the input image, preventing the network from reliably distinguishing the object shapes. In general, our approach is able to actuate the robot with an target error well below 5cm, given the target was correctly identified." ], "highlighted_evidence": [ "To train our model, we generated a dataset of 20,000 demonstrated 7 DOF trajectories (6 robot joints and 1 gripper dimension) in our simulated environment together with a sentence generator capable of creating natural task descriptions for each scenario. In order to create the language generator, we conducted an human-subject study to collect sentence templates of a placement task as well as common words and synonyms for each of the used features. By utilising these data, we are able to generate over 180,000 unique sentences, depending on the generated scenario.", "To test our model, we generated 500 new scenario testing each of the three features to identify the correct target among other bowls. " ] } ], "annotation_id": [ "7cf03a2b99adacddc3a1b69170a30c77f738599d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: Network architecture overview. The network consists of two parts, a high-level semantic network and a low-level control network. Both networks are working seamlessly together and are utilized in an End-to-End fashion.", "Figure 2: Results for placing an object into bowls at different locations: (Left) Stochastic forward passes allow the model to estimate its certainty about the validity of a task. (Right) Generated weights Θ for four joints of the DMP shown for two objects close and far away of the robot." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png" ] }
1910.03467
Overcoming the Rare Word Problem for Low-Resource Language Pairs in Neural Machine Translation
Among the six challenges of neural machine translation (NMT) coined by (Koehn and Knowles, 2017), rare-word problem is considered the most severe one, especially in translation of low-resource languages. In this paper, we propose three solutions to address the rare words in neural machine translation systems. First, we enhance source context to predict the target words by connecting directly the source embeddings to the output of the attention component in NMT. Second, we propose an algorithm to learn morphology of unknown words for English in supervised way in order to minimize the adverse effect of rare-word problem. Finally, we exploit synonymous relation from the WordNet to overcome out-of-vocabulary (OOV) problem of NMT. We evaluate our approaches on two low-resource language pairs: English-Vietnamese and Japanese-Vietnamese. In our experiments, we have achieved significant improvements of up to roughly +1.0 BLEU points in both language pairs.
{ "section_name": [ "Introduction", "Neural Machine Translation", "Rare Word translation", "Rare Word translation ::: Low-frequency Word Translation", "Rare Word translation ::: Reducing Unknown Words", "Rare Word translation ::: Dealing with OOV using WordNet", "Experiments", "Experiments ::: Datasets", "Experiments ::: Preprocessing", "Experiments ::: Systems and Training", "Experiments ::: Results", "Experiments ::: Results ::: Japanese-Vietnamese Translation", "Experiments ::: Results ::: English-Vietnamese Translation", "Related Works", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "NMT systems have achieved better performance compared to statistical machine translation (SMT) systems in recent years not only on available data language pairs BIBREF1, BIBREF2, but also on low-resource language pairs BIBREF3, BIBREF4. Nevertheless, NMT still exists many challenges which have adverse effects on its effectiveness BIBREF0. One of these challenges is that NMT has biased tend in translating high-frequency words, thus words which have lower frequencies are often translated incorrectly. This challenge has also been confirmed again in BIBREF3, and they have proposed two strategies to tackle this problem with modifications on the model's output distribution: one for normalizing some matrices by fixing them to constants after several training epochs and another for adding a direct connection from source embeddings through a simple feed forward neural network (FFNN). These approaches increase the size and the training time of their NMT systems. In this work, we follow their second approach but simplify the computations by replacing FFNN with two single operations.", "Despite above approaches can improve the prediction of rare words, however, NMT systems often use limited vocabularies in their sizes, from 30K to 80K most frequent words of the training data, in order to reduce computational complexity and the sizes of the models BIBREF5, BIBREF6, so the rare-word translation are still problematic in NMT. Even when we use a larger vocabulary, this situation still exists BIBREF7. A word which has not seen in the vocabulary of the input text (called unknown word) are presented by the $unk$ symbol in NMT systems. Inspired by alignments and phrase tables in phrase-based machine translation (SMT) as suggested by BIBREF8, BIBREF6 proposed to address OOV words using an annotated training corpus. They then used a dictionary generated from alignment model or maps between source and target words to determine the translations of $unks$ if translations are not found. BIBREF9 proposed to reduce unknown words using Gage's Byte Pair Encoding (BPE) algorithm BIBREF10, but NMT systems are less effective for low-resource language pairs due to the lack of data and also for other languages that sub-word are not the optimal translation unit. In this paper, we employ several techniques inspired by the works from NMT and the traditional SMT mentioned above. Instead of a loosely unsupervised approach, we suggest a supervised approach to solve this trouble using synonymous relation of word pairs from WordNet on Japanese$\\rightarrow $Vietnamese and English$\\rightarrow $Vietnamese systems. 
To leverage the effectiveness of this relation in English, we transform variants of words in the source texts to their original forms by separating their affixes, which are collected by hand.", "Our contributions in this work are:", "", "We establish the state of the art for Japanese-Vietnamese NMT systems.", "We propose an approach to deal with rare-word translation by integrating source embeddings into the attention component of NMT.", "We present a supervised algorithm to reduce the number of unknown words for the English$\rightarrow $Vietnamese translation system.", "We demonstrate the effectiveness of leveraging linguistic information from WordNet to alleviate the rare-word problem in NMT." ], [ "Our NMT system uses a bidirectional recurrent neural network (biRNN) as the encoder and a single-directional RNN as the decoder, with the input feeding of BIBREF11 and the attention mechanism of BIBREF5. The encoder's biRNN is constructed from two RNNs with LSTM hidden units, one running forward and the other backward over the source sentence $\mathbf {x}=(x_1, ...,x_n)$. Every word $x_i$ in the sentence is first encoded into a continuous representation $E_s(x_i)$, called the source embedding. Then $\mathbf {x}$ is transformed into a fixed-length hidden vector $\mathbf {h}_i$ representing the sentence at time step $i$, called the annotation vector, which combines the forward state $\overrightarrow{\mathbf {h}}_i$ and the backward state $\overleftarrow{\mathbf {h}}_i$:", "$\overrightarrow{\mathbf {h}}_i=f(E_s(x_i),\overrightarrow{\mathbf {h}}_{i-1})$", "$\overleftarrow{\mathbf {h}}_i=f(E_s(x_i),\overleftarrow{\mathbf {h}}_{i+1})$", "The decoder generates the target sentence $\mathbf {y}={(y_1, ..., y_m)}$, and at time step $j$, the predicted probability of the target word $y_j$ is estimated as follows:", "where $\mathbf {z}_j$ is the output hidden state of the attention mechanism, computed from the previous output hidden state $\mathbf {z}_{j-1}$, the embedding of the previous target word $E_t(y_{j-1})$ and the context $\mathbf {c}_j$:", "$\mathbf {z}_j=g(E_t(y_{j-1}), \mathbf {z}_{j-1}, \mathbf {c}_j)$", "The source context $\mathbf {c}_j$ is the weighted sum of the encoder's annotation vectors $\mathbf {h}_i$:", "$\mathbf {c}_j=\sum ^n_{i=1}\alpha _{ij}\mathbf {h}_i$", "where $\alpha _{ij}$ are the alignment weights, denoting the relevance between the current target word $y_j$ and all source annotation vectors $\mathbf {h}_i$." ], [ "In this section, we present the details of our approaches to overcome the rare-word problem. While the first strategy augments the source context to translate low-frequency words, the remaining strategies reduce the number of OOV words in the vocabulary." ], [ "The attention mechanism in RNN-based NMT maps the target word to its corresponding source context through the annotation vectors $\mathbf {h}_i$. In the recurrent hidden unit, $\mathbf {h}_i$ is computed from the previous state $\mathbf {h}_{t-1}$. Therefore, the information flow of the words in the source sentence may be diminished over time. This leads to reduced accuracy when translating low-frequency words, since there is no direct connection between the target word and the source word.
To alleviate the adverse impact of this problem, BIBREF3 combined the source embeddings with the predictive distribution over the output target word in the following steps:", "Firstly, the weighted average vector of the source embeddings is computed as follows:", "where $\alpha _j(e)$ are the alignment weights in the attention component and $f_e = E_s(x)$ are the embeddings of the source words.", "Then $l_j$ is transformed through a one-hidden-layer FFNN with the residual connection proposed by BIBREF12:", "Finally, the output distribution over the target word is calculated by:", "The matrices $\mathbf {W}_l$, $\mathbf {W}_t$ and $\mathbf {b}_t$ are trained together with the other parameters of the NMT model.", "This approach improves the performance of the NMT systems but introduces more computation as the model size increases due to the additional parameters $\mathbf {W}_l$, $\mathbf {W}_t$ and $\mathbf {b}_t$. We simplify this method by using the weighted average of the source embeddings directly in the softmax output layer:", "Our method does not learn any additional parameters. Instead, it requires the source embedding size to be compatible with the decoder's hidden states. With the additional information provided by the source embeddings, we achieve similar improvements compared to the more expensive method described in BIBREF3." ], [ "In our previous experiments for English$\rightarrow $Vietnamese, the BPE algorithm BIBREF9 applied to the source side does not significantly improve the systems even though it is able to reduce the number of unknown English words. We speculate that this might be due to the morphological differences between the source and the target languages (English and Vietnamese in this case). The unsupervised way in which BPE learns sub-words in English thus might not be explicit enough to provide morphological information to the Vietnamese side. In this work, we would like to attempt a more explicit, supervised way. We collect 52 popular affixes (prefixes and suffixes) in English and then apply the separating-affixes algorithm (called SAA) to reduce the number of unknown words as well as to force our NMT systems to learn better morphological mappings between the two languages.", "The main idea of our SAA is to separate the affixes of unknown words while ensuring that the remainder still exists in the vocabulary. Let $V$ be the vocabulary containing the $K$ most frequent words from the training set $T1$, $P$ a set of prefixes and $S$ a set of suffixes; we call $w^{\prime }$ the rest of an unknown or rare word $w$ after delimiting its affixes. We iteratively pick a $w$ from the $N$ words (including unknown words and rare words) of the source text $T2$ and check whether $w$ starts with a prefix $p$ in $P$ or ends with a suffix $s$ in $S$; we then split off its affixes if $w^{\prime }$ is in $V$. A rare word in $V$ can also have its affixes separated if its frequency is less than a given threshold. We set this threshold to 2 in our experiments. Similarly to the BPE approach, we also employ a pair of the special symbol $@$ to mark affixes separated from the word. Listing SECREF6 shows our SAA algorithm.", "" ], [ "WordNet is a lexical database grouping words into sets which share some semantic relations. Its English version was first proposed by BIBREF13. It has become a useful resource for many natural language processing tasks BIBREF14, BIBREF15, BIBREF16.
WordNets are available mainly for English and German; versions for other languages are being developed, including some Asian languages such as Japanese, Chinese, Indonesian and Vietnamese. Several works have employed WordNet in SMT systems BIBREF17, BIBREF18, but to our knowledge, none of them exploits the benefits of WordNet to ease the rare-word problem in NMT. In this work, we propose a synonym-learning algorithm (called LSW) over the English and Japanese WordNets to handle unknown words in our NMT systems.", "In WordNet, synonymous words are organized in groups which are called synsets. Our aim is to replace an OOV word by a synonym which appears in the vocabulary of the translation system. From the training set of the source language $T1$, we extract the vocabulary $V$ of the $K$ most frequent words. For each OOV word from $T1$, we learn its synonyms which exist in $V$ from the WordNet $W$. The synonyms are then arranged in descending order of their frequencies to facilitate selection of the $n$-best words with the highest frequencies. The output file $C$ of the algorithm contains the OOV words and their corresponding synonyms, and it is then applied to the input text $T2$. We also utilize a frequency threshold for rare words in the same way as in the SAA algorithm. In practice, we set this threshold to 0, meaning no word in $V$ is replaced by its synonym. If a source sentence has $m$ unknown words and each of them has $n$ best synonyms, it would generate $n^m$ sentences. The translation process allows us to select the best hypothesis based on its score. Because each word in WordNet can belong to many synsets with different meanings, an inappropriate word can be placed in the current source context. We will address this issue in future work. Our systems only use the 1-best synonym for each OOV word. Listing SECREF7 presents the LSW algorithm.", "" ], [ "We evaluate our approaches on the English-Vietnamese and the Japanese-Vietnamese translation systems. Translation performance is measured in BLEU BIBREF19 using the multi-BLEU script from Moses." ], [ "We consider two low-resource language pairs: Japanese-Vietnamese and English-Vietnamese. For Japanese-Vietnamese, we use the TED data provided by WIT3 BIBREF20 and compiled by BIBREF21. The training set includes 106758 sentence pairs; the validation and test sets are dev2010 (568 pairs) and tst2010 (1220 pairs). For English$\rightarrow $Vietnamese, we use the dataset from IWSLT 2015 BIBREF22 with around 133K sentence pairs for the training set, 1553 pairs in tst2012 as the validation set and 1268 pairs in tst2013 as the test set.", "For the LSW algorithm, we crawled pairs of synonymous words from the Japanese-English WordNet and obtained 315850 pairs for English and 1419948 pairs for Japanese." ], [ "For English and Vietnamese, we tokenized the texts and then true-cased the tokenized texts using Moses scripts. We do not use any word segmentation tool for Vietnamese. For comparison purposes, Sennrich's BPE algorithm is applied to the English texts. Following the same preprocessing steps for Japanese (JPBPE) as in BIBREF21, we use KyTea BIBREF23 to tokenize the texts and then apply BPE to them. The number of BPE merge operations is 50K for both Japanese and English." ], [ "We implement our NMT systems using the OpenNMT-py framework BIBREF24 with the same settings as in BIBREF21 for our baseline systems. Our systems are built with two hidden layers in both the encoder and the decoder, each with 512 hidden units.
In the encoder, a BiLSTM architecture is used for each layer, and in the decoder, each layer is basically an LSTM layer. The size of the embedding layers on both the source and target sides is also 512. The Adam optimizer is used with an initial learning rate of $0.001$, and we then apply learning rate annealing. We train our systems for 16 epochs with a batch size of 32. Other parameters are the same as the default settings of OpenNMT-py.", "We then modify the baseline architecture with the alternative proposed in Section SECREF5 and compare it to our baseline systems. All settings are the same as in the baseline systems." ], [ "In this section, we show the effectiveness of our methods on two low-resource language pairs and compare them to other works. The empirical results are shown in Table TABREF15 for Japanese-Vietnamese and in Table TABREF20 for English-Vietnamese. Note that Multi-BLEU is only measured in the Japanese$\rightarrow $Vietnamese direction and the standard BLEU points are given in brackets." ], [ "We conduct two out of the three proposed approaches for the Japanese-Vietnamese translation systems, and the results are given in Table TABREF15.", "Baseline Systems. We find that our translation systems, which use Sennrich's BPE method for Japanese texts and no word segmentation for Vietnamese texts, are either no better than or only insignificantly different from the systems that used word segmentation in BIBREF21. Specifically, we obtained +0.38 BLEU points between (1) and (4) in the Japanese$\rightarrow $Vietnamese direction and -0.18 BLEU points between (1) and (3) in the Vietnamese$\rightarrow $Japanese direction.", "Our Approaches. On the systems trained with the modified architecture described in Section SECREF5, we obtained improvements of +0.54 BLEU points in the Japanese$\rightarrow $Vietnamese direction and +0.42 BLEU points in the Vietnamese$\rightarrow $Japanese direction compared to the baseline systems.", "Because a Vietnamese WordNet is not available, we only exploit WordNet to tackle unknown words in the Japanese texts of our Japanese$\rightarrow $Vietnamese translation system. After tokenization with KyTea, the LSW algorithm is applied to the Japanese texts to replace OOV words with their synonyms. We choose the 1-best synonym for each OOV word. Table TABREF18 shows the number of OOV words replaced by their synonyms. The replaced texts are then BPEd and trained on the proposed architecture. The largest improvement is +0.92 between (1) and (3). We observed an improvement of +0.7 BLEU points between (3) and (5) without using the data augmentation described in BIBREF21.", "", "" ], [ "We examine the effect of all the approaches presented in Section SECREF3 for our English-Vietnamese translation systems. Table TABREF20 summarizes those results and the scores from other systems BIBREF3, BIBREF25.", "Baseline systems. After preprocessing the data using Moses scripts, we train the English$\leftrightarrow $Vietnamese systems on our baseline architecture. Our translation system obtained +0.82 BLEU points compared to BIBREF3 in the English$\rightarrow $Vietnamese direction, and this is lower than the system of BIBREF25 with a neural phrase-based translation architecture.", "Our approaches. The datasets from the baseline systems are trained on our modified NMT architecture.
We observe improvements of +0.55 BLEU points between (1) and (2) in the English$\rightarrow $Vietnamese direction and +0.45 BLEU points (on tst2012) between (1) and (2) in the Vietnamese$\rightarrow $English direction.", "For comparison purposes, the English texts are split into sub-words using Sennrich's BPE method. We observe that the achieved BLEU points are lower. Therefore, we then apply the SAA algorithm to the English texts from (2) in the English$\rightarrow $Vietnamese direction. The number of affected words is listed in Table TABREF21. The improvement in BLEU is +0.74 between (4) and (1).", "", "Similarly to the Japanese$\rightarrow $Vietnamese system, we apply the LSW algorithm to the English texts from (4), selecting the 1-best synonym for each OOV word. The number of replaced words in the English texts is indicated in Table TABREF22. Again, we obtained a bigger gain of +0.99 (+1.02) BLEU points in the English$\rightarrow $Vietnamese direction. Compared to the most recent work BIBREF25, our system reports an improvement of +0.47 standard BLEU points on the same dataset.", "We investigate some examples of translations generated by the English$\rightarrow $Vietnamese systems with our proposed methods in Table TABREF23. The bold texts in red present correct or approximate translations, while the italic texts in gray denote incorrect translations. In the first example, we consider two words: presentation and the unknown word applauded. The word presentation is correctly predicted as the Vietnamese \"bài thuyết trình\" in most cases when we combine the source context through embeddings. The unknown word applauded, which is not in the vocabulary, is ignored in the first two cases (baseline and source embedding) but is roughly translated as the Vietnamese \"hoan nghênh\" in the SAA case because it is separated into applaud and ed. In the second example, we observe the translations of the unknown word tryout: they are mistaken in the first three cases, but with the LSW it is predicted with a closer meaning as the Vietnamese \"bài kiểm tra\" due to its replacement by the synonymous word test.", "" ], [ "Addressing unknown words was considered early on in Statistical Machine Translation (SMT) systems. Among the typical studies, BIBREF26 proposed four techniques to overcome this situation by extending the morphology and spelling of words, using a bilingual dictionary, or transliterating names. These approaches are difficult to adapt to different domains. BIBREF27 trained word embedding models to learn word similarity from monolingual data, and an unknown word is then replaced by a similar word. BIBREF28 used a linear model to learn mappings between source and target spaces based on a small initial bilingual dictionary to find the translations of source words. However, in NMT, there are not many works tackling this problem. BIBREF7 use a very large vocabulary to handle unknown words. BIBREF6 generate a dictionary from alignment data based on an annotated corpus to decide the hypotheses for unknown words. BIBREF3 introduced solutions for dealing with the rare-word problem; however, their models require more parameters, thus decreasing the overall efficiency.", "In another direction, BIBREF9 exploited the BPE algorithm to reduce the number of unknown words in NMT and achieved significant improvements on many language pairs.
The second approach presented in this work follows this direction: instead of using an unsupervised method to split rare and unknown words into translatable sub-words, we use a supervised method. Our third approach, which uses WordNet, can be seen as a form of smoothing, in which the translations of synonymous words are used to approximate the translation of an OOV word. Another work in this direction worth mentioning is BIBREF29, which uses morphological and semantic information as word factors to help translate rare words." ], [ "In this study, we have proposed three different strategies to handle rare words in NMT; the combination of these methods brings significant improvements to the NMT systems on two low-resource language pairs. In future work, we will consider selecting appropriate synonyms for the source sentence from the n-best synonymous words to further improve the performance of the NMT systems, and we will leverage more unsupervised methods based on monolingual data to address the rare word problem." ], [ "This work is supported by the project \"Building a machine translation system to support translation of documents between Vietnamese and Japanese to help managers and businesses in Hanoi approach to Japanese market\", No. TC.02-2016-03." ] ] }
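To make the LSW idea above concrete (replacing an OOV word with an in-vocabulary synonym before BPE and training), here is a minimal sketch using NLTK's English WordNet. The helper name, the plain-token vocabulary check, and the 1-best heuristic of taking the first in-vocabulary lemma are illustrative assumptions rather than the paper's exact implementation.

```python
# Minimal sketch of LSW-style OOV replacement with WordNet. Assumes nltk is
# installed and the WordNet corpus has been fetched via nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def replace_oov_with_synonyms(tokens, vocab):
    """Replace each out-of-vocabulary token with an in-vocabulary WordNet synonym, if one exists."""
    output = []
    for tok in tokens:
        if tok in vocab:
            output.append(tok)
            continue
        # Collect candidate synonyms from all synsets of the OOV token.
        candidates = []
        for synset in wn.synsets(tok):
            for lemma in synset.lemmas():
                name = lemma.name().replace("_", " ")
                if name != tok and name in vocab:
                    candidates.append(name)
        # 1-best selection: keep the first in-vocabulary candidate, otherwise keep the OOV token.
        output.append(candidates[0] if candidates else tok)
    return output

# Example: with "tryout" out of vocabulary, it may be replaced by a synonym such as "trial" or "test".
print(replace_oov_with_synonyms(["a", "quick", "tryout"], vocab={"a", "quick", "trial", "test"}))
```

In the pipeline described above, this replacement step would precede BPE, so that the substituted tokens are segmented and trained like any other in-vocabulary words.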
{ "question": [ "Are synonymous relation taken into account in the Japanese-Vietnamese task?", "Is the supervised morphological learner tested on Japanese?" ], "question_id": [ "b367b823c5db4543ac421d0057b02f62ea16bf9f", "84737d871bde8058d8033e496179f7daec31c2d3" ], "nlp_background": [ "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "morphology", "morphology" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Due to the fact that Vietnamese WordNet is not available, we only exploit WordNet to tackle unknown words of Japanese texts in our Japanese$\\rightarrow $Vietnamese translation system. After using Kytea, Japanese texts are applied LSW algorithm to replace OOV words by their synonyms. We choose 1-best synonym for each OOV word. Table TABREF18 shows the number of OOV words replaced by their synonyms. The replaced texts are then BPEd and trained on the proposed architecture. The largest improvement is +0.92 between (1) and (3). We observed an improvement of +0.7 BLEU points between (3) and (5) without using data augmentation described in BIBREF21." ], "highlighted_evidence": [ "After using Kytea, Japanese texts are applied LSW algorithm to replace OOV words by their synonyms. " ] } ], "annotation_id": [ "ad9a79b4dbb83226bc66c95f7486dc094781eade" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "We conduct two out of the three proposed approaches for Japanese-Vietnamese translation systems and the results are given in the Table TABREF15.", "FLOAT SELECTED: Table 1: Results of Japanese-Vietnamese NMT systems" ], "highlighted_evidence": [ "We conduct two out of the three proposed approaches for Japanese-Vietnamese translation systems and the results are given in the Table TABREF15.", "FLOAT SELECTED: Table 1: Results of Japanese-Vietnamese NMT systems" ] } ], "annotation_id": [ "09bf83d69f9f2be0a18d15913be1c6a92bbe00d4" ], "worker_id": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ] } ] }
{ "caption": [ "Table 1: Results of Japanese-Vietnamese NMT systems", "Table 2: The number of Japanese OOV words replaced by their synonyms.", "Table 3: Results of English-Vietnamese NMT systems", "Table 4: The number of rare words in which their affixes are detached from the English texts in the SAA algorithm.", "Table 6: Examples of outputs from the English→Vietnamese translation systems with the proposed methods." ], "file": [ "5-Table1-1.png", "5-Table2-1.png", "7-Table3-1.png", "7-Table4-1.png", "8-Table6-1.png" ] }
1908.09156
A framework for anomaly detection using language modeling, and its applications to finance
In the finance sector, studies focused on anomaly detection are often associated with time-series and transactional data analytics. In this paper, we lay out the opportunities for applying anomaly and deviation detection methods to text corpora and challenges associated with them. We argue that language models that use distributional semantics can play a significant role in advancing these studies in novel directions, with new applications in risk identification, predictive modeling, and trend analysis.
{ "section_name": [ "Introduction", "Five views on anomaly", "Five views on anomaly ::: Anomaly as error", "Five views on anomaly ::: Anomaly as irregularity", "Five views on anomaly ::: Anomaly as novelty", "Five views on anomaly ::: Anomaly as semantic richness", "Five views on anomaly ::: Anomaly as contextual relevance", "Anomaly detection via language modeling", "Anomaly detection via language modeling ::: Anomaly in input vectors", "Anomaly detection via language modeling ::: Anomaly in output vectors", "Anomaly detection via language modeling ::: Anomaly in hidden vectors", "Anomaly detection via language modeling ::: Anomaly in weight tensors and other parameters", "Challenges and Future Research", "Conclusion" ], "paragraphs": [ [ "The detection of anomalous trends in the financial domain has focused largely on fraud detection BIBREF0, risk modeling BIBREF1, and predictive analysis BIBREF2. The data used in the majority of such studies is of time-series, transactional, graph or generally quantitative or structured nature. This belies the critical importance of semi-structured or unstructured text corpora that practitioners in the finance domain derive insights from—corpora such as financial reports, press releases, earnings call transcripts, credit agreements, news articles, customer interaction logs, and social data.", "Previous research in anomaly detection from text has evolved largely independently from financial applications. Unsupervised clustering methods have been applied to documents in order to identify outliers and emerging topics BIBREF3. Deviation analysis has been applied to text in order to identify errors in spelling BIBREF4 and tagging of documents BIBREF5. Recent popularity of distributional semantics BIBREF6 has led to further advances in semantic deviation analysis BIBREF7. However, current research remains largely divorced from specific applications within the domain of finance.", "In the following sections, we enumerate major applications of anomaly detection from text in the financial domain, and contextualize them within current research topics in Natural Language Processing." ], [ "Anomaly detection is a strategy that is often employed in contexts where a deviation from a certain norm is sought to be captured, especially when extreme class imbalance impedes the use of a supervised approach. The implementation of such methods allows for the unveiling of previously hidden or obstructed insights.", "In this section, we lay out five perspectives on how textual anomaly detection can be applied in the context of finance, and how each application opens up opportunities for NLP researchers to apply current research to the financial domain." ], [ "Previous studies have used anomaly detection to identify and correct errors in text BIBREF4, BIBREF5. These are often unintentional errors that occur as a result of some form of data transfer, e.g. from audio to text, from image to text, or from one language to another. Such studies have direct applicability to the error-prone process of earnings call or customer call transcription, where audio quality, accents, and domain-specific terms can lead to errors. Consider a scenario where the CEO of a company states in an audio conference, `Now investments will be made in Asia.' However, the system instead transcribes, `No investments will be made in Asia.' There is a meaningful difference in the implication of the two statements that could greatly influence the analysis and future direction of the company. 
Additionally, with regards to the second scenario, it is highly unlikely that the CEO would make such a strong and negative statement in a public setting thus supporting the use of anomaly detection for error correction.", "Optical-character-recognition from images is another error-prone process with large applicability to finance. Many financial reports and presentations are circulated as image documents that need to undergo OCR in order to be machine-readable. OCR might also be applicable to satellite imagery and other forms of image data that might include important textual content such as a graphical representation of financial data. Errors that result from OCR'd documents can often be fixed using systems that have a robust semantic representation of the target domain. For instance, a model that is trained on financial reports might have encoded awareness that emojis are unlikely to appear in them or that it is unusual for the numeric value of profit to be higher than that of revenue." ], [ "Anomaly in the semantic space might reflect irregularities that are intentional or emergent, signaling risky behavior or phenomena. A sudden change in the tone and vocabulary of a company's leadership in their earnings calls or financial reports can signal risk. News stories that have abnormal language, or irregular origination or propagation patterns might be unreliable or untrustworthy.", "BIBREF8 showed that when trained on similar domains or contexts, distributed representations of words are likely to be stable, where stability is measured as the similarity of their nearest neighbors in the distributed space. Such insight can be used to assess anomalies in this sense. As an example, BIBREF9 identified cliques of users on Twitter who consistently shared news from similar domains. Characterizing these networks as “echo-chambers,” they then represented the content shared by these echo-chambers as distributed representations. When certain topics from one echo-chamber began to deviate from similar topics in other echo-chambers, the content was tagged as unreliable. BIBREF9 showed that this method can be used to improve the performance of standard methods for fake-news detection.", "In another study BIBREF10, the researchers hypothesized that transparent language in earnings calls indicates high expectations for performance in the upcoming quarters, whereas semantic ambiguity can signal a lack of confidence and expected poor performance. By quantifying transparency as the frequent use of numbers, shorter words, and unsophisticated vocabulary, they showed that a change in transparency is associated with a change in future performance." ], [ "Anomaly can indicate a novel event or phenomenon that may or may not be risky. Breaking news stories often emerge as anomalous trends on social media. BIBREF11 experimented with this in their effort to detect novel events from Twitter conversations. By representing each event as a real-time cluster of tweets (where each tweet was encoded as a vector), they managed to assess the novelty of the event by comparing its centroid to the centroids of older events.", "Novelty detection can also be used to detect emerging trends on social media, e.g. controversies that engulf various brands often start as small local events that are shared on social media and attract attention over a short period of time. 
How people respond to these events in early stages of development can be a measure of their veracity or controversiality BIBREF12, BIBREF13.", "An anomaly in an industry grouping of companies can also be indicative of a company that is disrupting the norm for that industry and the emergence of a new sector or sub-sector. Often known as trail-blazers, these companies innovate faster than their competitors to meet market demands sometimes even before the consumer is aware of their need. As these companies continually evolve their business lines, their core operations are novel outliers from others in the same industry classification that can serve as meaningful signals of transforming industry demands." ], [ "A large portion of text documents that analysts and researchers in the financial sectors consume have a regulatory nature. Annual financial reports, credit agreements, and filings with the U.S. Securities and Exchange Commission (SEC) are some of these types of documents. These documents can be tens or hundreds of pages long, and often include boilerplate language that the readers might need to skip or ignore in order to get to the “meat” of the content. Often, the abnormal clauses found in these documents are buried in standard text so as not to attract attention to the unique phrases.", "BIBREF14 used smoothed representations of n-grams in SEC filings in order to identify boilerplate and abnormal language. They did so by comparing the probability of each n-gram against the company's previous filings, against other filings in the same sector, and against other filings from companies with similar market cap. The aim was to assist accounting analysts in skipping boilerplate language and focusing their attention on important snippets in these documents.", "Similar methods can be applied to credit agreements where covenants and clauses that are too common are often ignored by risk analysts and special attention is paid to clauses that “stand out” from similar agreements." ], [ "Certain types of documents include universal as well as context-specific signals. As an example, consider a given company's financial reports. The reports may include standard financial metrics such as total revenue, net sales, net income, etc. In addition to these universal metrics, businesses often report their performance in terms of the performance of their operating segments. These segments can be business divisions, products, services, or regional operations. The segments are often specific to the company or its peers. For example, Apple Inc.'s segments might include “iPhone,” “iMac,” “iPad,” and “services.” The same segments will not appear in reports by other businesses.", "For many analysts and researchers, operating segments are a crucial part of exploratory or predictive analysis. They use performance metrics associated with these segments to compare the business to its competitors, to estimate its market share, and to project the overall performance of the business in upcoming quarters. Automating the identification and normalization of these metrics can facilitate more insightful analytical research. Since these segments are often specific to each business, supervised models that are trained on a diverse set of companies cannot capture them without overfitting to certain companies. Instead, these segments can be treated as company-specific anomalies." ], [ "Unlike numeric data, text data is not directly machine-readable, and requires some form of transformation as a pre-processing step. 
In “bag-of-words” methods, this transformation can take place by assigning an index number to each word and representing any block of text as an unordered set of these words. A slightly more sophisticated approach might chain words into continuous “n-grams” and represent a block of text as an ordered series of “n-grams” extracted over a sliding window of size n. These approaches are conventionally known as “language modeling.”", "Since the advent of high-powered processors enabled the widespread use of distributed representations, language modeling has rapidly evolved and adapted to these new capabilities. Recurrent neural networks can capture an arbitrarily long sequence of text and perform various tasks such as classification or text generation BIBREF16. In this new context, language modeling often refers to training a recurrent network that predicts a word in a given sequence of text BIBREF17. Language models are easy to train because even though they follow a predictive mechanism, they do not need any labeled data, and are thus unsupervised.", "Figure FIGREF6 is a simple illustration of how a neural network composed of recurrent units such as Long Short-Term Memory (LSTM) BIBREF18 can perform language modeling. There are four main components to the network:", "The input vectors ($x_i$), which represent units (i.e. characters, words, phrases, sentences, paragraphs, etc.) in the input text. Occasionally, these are represented by one-hot vectors that assign a unique index to each particular input. More commonly, these vectors are adapted from a pre-trained corpus, where distributed representations have been inferred either by a simpler auto-encoding process BIBREF19 or by applying the same recurrent model to a baseline corpus such as Wikipedia BIBREF17.", "The output vectors ($y_i$), which represent the model's prediction of the next word in the sequence. Naturally, they are represented in the same dimensionality as the $x_i$s.", "The hidden vectors ($h_i$), which are often randomly initialized and learned through backpropagation. Often trained as dense representations, these vectors tend to display characteristics that indicate semantic richness BIBREF20 and compositionality BIBREF19. While the language model can be used as a text-generation mechanism, the hidden vectors are a strong side product that are sometimes extracted and reused as augmented features in other machine learning systems BIBREF21.", "The weights of the network ($W_{ij}$) (or other parameters in the network), which are tuned through backpropagation. These often indicate how each vector in the input or hidden sequence is utilized to generate the output. These parameters play a big role in the way the outputs of neural networks are reverse-engineered or explained to the end user.", "The distributions of any of the above-mentioned components can be studied to mine signals for anomalous behavior in the context of irregularity, error, novelty, semantic richness, or contextual relevance." ], [ "As previously mentioned, the input vectors to a text-based neural network are often adapted from publicly-available word vector corpora. In simpler architectures, the network is allowed to back-propagate its errors all the way to the input layer, which might cause the input vectors to be modified. 
This can serve as a signal for anomaly in the semantic distributions between the original vectors and the modified vectors.", "Analyzing the stability of word vectors when trained on different iterations can also signal anomalous trends BIBREF8." ], [ "As previously mentioned, language models generate a probability distribution over a word (or character) in a sequence. These probabilities can be used to detect transcription or character-recognition errors in a domain-friendly manner. When the language model is trained on financial data, domain-specific trends (such as the use of commas and parentheses in financial metrics) can be captured and accounted for by the network, minimizing the rate of false positives." ], [ "A recent advancement in text processing is the introduction of fine-tuning methods for neural networks trained on text BIBREF17. Fine-tuning is an approach that facilitates the transfer of semantic knowledge from one domain (source) to another domain (target). The source domain is often large and generic, such as web data or the Wikipedia corpus, while the target domain is often specific (e.g. SEC filings). A network is pre-trained on the source corpus such that its hidden representations are enriched. Next, the pre-trained network is re-trained on the target domain, but this time only the final (or top few) layers are tuned and the parameters in the remaining layers remain “frozen.” The top-most layer of the network can be modified to perform a classification, prediction, or generation task in the target domain (see Figure FIGREF15).", "Fine-tuning aims to change the distribution of hidden representations in such a way that important information about the source domain is preserved, while idiosyncrasies of the target domain are captured in an effective manner BIBREF22. A similar process can be used to determine anomalies in documents. As an example, consider a model that is pre-trained on historical documents from a given sector. If fine-tuning the model on recent documents from the same sector dramatically shifts the representations of certain vectors, this can signal an evolving trend." ], [ "Models that have interpretable parameters can be used to identify areas of deviation or anomalous content. Attention mechanisms BIBREF23 allow the network to account for certain input signals more than others. The learned attention mechanism can provide insight into potential anomalies in the input. Consider a language model that predicts the social media engagement for a given tweet. Such a model can be used to distinguish engaging and information-rich content from clickbait, bot-generated, propagandistic, or promotional content by exposing how, for these categories, engagement is associated with attention to certain distributions of “trigger words.”", "Table TABREF17 lists four scenarios for using the various layers and parameters of a language model in order to perform anomaly detection from text." ], [ "As in many other domains, in the financial domain the application of language models as a measure of the semantic regularity of text bears the challenge of dealing with unseen input. Unseen input can be mistaken for an anomaly, especially in systems that are designed for error detection. As an example, a system that is trained to correct errors in an earnings call transcript might treat named entities such as the names of a company's executives, or a recent acquisition, as anomalies. 
This problem is particularly prominent in fine-tuned language models, which are pre-trained on generic corpora that might not include domain-specific terms.", "When anomalies are of a malicious nature, such as in the case where abnormal clauses are included in credit agreements, the implementation of the anomalous content is adapted to appear normal. Thereby, the task of detecting normal language becomes more difficult.", "Alternatively, in the case of language used by executives in company presentations such as earnings calls, there may be a lot of noise in the data due to the large degree of variability in the personalities and linguistic patterns of various leaders. The noise variability present in this content could be similar to actual anomalies, hence making it difficult to identify true anomalies.", "Factors related to market interactions and competitive behavior can also impact the effectiveness of anomaly-detection models. In detecting the emergence of a new industry sector, it may be challenging for a system to detect novelty when a collection of companies, rather than a single company, behave in an anomalous way. The former may be the more common real-world scenario as companies closely monitor and mimic the innovations of their competitors. The exact notion of anomaly can also vary based on the sector and point in time. For example, in the technology sector, the norm in today's world is one of continuous innovation and technological advancements.", "Additionally, certain types of anomaly can interact and make it difficult for systems to distinguish between them. As an example, a system that is trained to identify the operating segments of a company tends to distinguish between information that is specific to the company, and information that is common across different companies. As a result, it might identify the names of the company's board of directors or its office locations as its operating segments.", "Traditional machine learning models have previously tackled the above challenges, and solutions are likely to emerge in the neural paradigms as well. Any future research in these directions will have to account for the impact of such solutions on the reliability and explainability of the resulting models and their robustness against adversarial data." ], [ "Anomaly detection from text can have numerous applications in finance, including risk detection, predictive analysis, error correction, and peer detection. We have outlined various perspectives on how anomaly can be interpreted in the context of finance, and corresponding views on how language modeling can be used to detect such aspects of anomalous content. We hope that this paper lays the groundwork for establishing a framework for understanding the opportunities and risks associated with these methods when applied in the financial domain." ] ] }
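To illustrate the use of output probabilities for error detection discussed above, here is a minimal sketch that scores each token by its surprisal under a bigram language model with add-one smoothing and flags the least expected ones. The toy corpus, the smoothing, and the threshold are illustrative assumptions; the framing in this paper points instead to a neural language model trained on in-domain financial text.

```python
# Minimal sketch: flag anomalous tokens by surprisal under a bigram language model.
import math
from collections import Counter

def train_bigram_lm(sentences):
    """Return a surprisal(prev, word) function estimated from tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        padded = ["<s>"] + tokens
        unigrams.update(padded)
        bigrams.update(zip(padded, padded[1:]))
    vocab_size = len(unigrams) + 1  # +1 leaves probability mass for unseen words

    def surprisal(prev, word):
        # -log P(word | prev) with add-one smoothing, in nats
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
        return -math.log(p)

    return surprisal

def flag_anomalous_tokens(tokens, surprisal, threshold=2.0):
    """Return (token, surprisal) pairs whose surprisal exceeds the threshold."""
    padded = ["<s>"] + tokens
    scores = [(word, surprisal(prev, word)) for prev, word in zip(padded[:-1], padded[1:])]
    return [(word, s) for word, s in scores if s > threshold]

corpus = [
    "now investments will be made in asia".split(),
    "investments will be made in europe".split(),
]
surprisal = train_bigram_lm(corpus)
# "no" (and the token right after it) stands out against the seen "now investments ..." pattern.
print(flag_anomalous_tokens("no investments will be made in asia".split(), surprisal))
```

The same per-token probabilities could also be compared across a company's previous filings or across a sector, in the spirit of the boilerplate-versus-abnormal-language analysis of BIBREF14 discussed above.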
{ "question": [ "What is the dataset that is used in the paper?", "What is the performance of the models discussed in the paper?", "Does the paper consider the use of perplexity in order to identify text anomalies?", "Does the paper report a baseline for the task?" ], "question_id": [ "7b3d207ed47ae58286029b62fd0c160a0145e73d", "d58c264068d8ca04bb98038b4894560b571bab3e", "f80d89fb905b3e7e17af1fe179b6f441405ad79b", "5f6fac08c97c85d5f4f4d56d8b0691292696f8e6" ], "nlp_background": [ "two", "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "language identification", "language identification", "language identification", "language identification" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "cc38f2b38d6baeafec38a209b64764bf9003ccce" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "09df9ac8c8a083ff80587ffe6f9c1166162984f5" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [ "PERPLEXITY" ] } ], "annotation_id": [ "877c7e8df5e139346fe387586d67d211595ed572" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "4ea64108416b85246cb365e5f2dbb25b31813f4d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Illustration of a recurrent step in a languagemodel. Excerpted from [8].", "Figure 2: A pre-trained model can be fine-tuned on a new domain, and applied to a classification or prediction task. Excerpted from [6].", "Table 1: Four scenarios for anomaly detection on text data using signals from various layers and parameters in a language model." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "4-Table1-1.png" ] }
1911.00523
What Gets Echoed? Understanding the "Pointers" in Explanations of Persuasive Arguments
Explanations are central to everyday life, and are a topic of growing interest in the AI community. To investigate the process of providing natural language explanations, we leverage the dynamics of the /r/ChangeMyView subreddit to build a dataset with 36K naturally occurring explanations of why an argument is persuasive. We propose a novel word-level prediction task to investigate how explanations selectively reuse, or echo, information from what is being explained (henceforth, explanandum). We develop features to capture the properties of a word in the explanandum, and show that our proposed features not only have relatively strong predictive power on the echoing of a word in an explanation, but also enhance neural methods of generating explanations. In particular, while the non-contextual properties of a word itself are more valuable for stopwords, the interaction between the constituent parts of an explanandum is crucial in predicting the echoing of content words. We also find intriguing patterns of a word being echoed. For example, although nouns are generally less likely to be echoed, subjects and objects can, depending on their source, be more likely to be echoed in the explanations.
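The word-level task sketched in this abstract reduces to checking whether a stemmed word from the OP or PC reappears in the explanation. Below is a minimal sketch of that label computation, assuming NLTK's Porter stemmer and whitespace tokenization; the paper itself tokenizes with spaCy, the helper names are illustrative, and the toy OP/PC/explanation strings paraphrase the paper's running example.

```python
# Minimal sketch of the word-level "echo" labels, assuming nltk is installed.
# Whitespace tokenization is a simplification; the paper tokenizes with spaCy.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stem_set(text):
    return {stemmer.stem(tok) for tok in text.lower().split()}

def echo_labels(op_text, pc_text, explanation_text):
    """Label each unique stem from the OP or PC with 1 if it appears in the explanation."""
    explanandum_stems = stem_set(op_text) | stem_set(pc_text)
    explanation_stems = stem_set(explanation_text)
    return {stem: int(stem in explanation_stems) for stem in explanandum_stems}

labels = echo_labels(
    op_text="most hit music artists today are bad musicians",
    pc_text="music serves different purposes for different listeners",
    explanation_text="you changed my view that music serves different purposes",
)
print(sum(labels.values()), "of", len(labels), "stems are echoed")
```

In the paper's formulation, binary labels of this kind over the unique stems of the explanandum are what the word-level classifiers are trained to predict.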
{ "section_name": [ "Introduction", "Related Work", "Dataset", "Understanding the Pointers in Explanations", "Predicting Pointers", "Predicting Pointers ::: Experiment setup", "Predicting Pointers ::: Prediction Performance", "Predicting Pointers ::: The Effect on Generating Explanations", "Concluding Discussions", "Acknowledgments", "Supplemental Material ::: Preprocessing.", "Supplemental Material ::: PC Echoing OP", "Supplemental Material ::: Feature Calculation", "Supplemental Material ::: Word–level Prediction Task", "Supplemental Material ::: Generating Explanations" ], "paragraphs": [ [ "Explanations are essential for understanding and learning BIBREF0. They can take many forms, ranging from everyday explanations for questions such as why one likes Star Wars, to sophisticated formalization in the philosophy of science BIBREF1, to simply highlighting features in recent work on interpretable machine learning BIBREF2.", "Although everyday explanations are mostly encoded in natural language, natural language explanations remain understudied in NLP, partly due to a lack of appropriate datasets and problem formulations. To address these challenges, we leverage /r/ChangeMyView, a community dedicated to sharing counterarguments to controversial views on Reddit, to build a sizable dataset of naturally-occurring explanations. Specifically, in /r/ChangeMyView, an original poster (OP) first delineates the rationales for a (controversial) opinion (e.g., in Table TABREF1, “most hit music artists today are bad musicians”). Members of /r/ChangeMyView are invited to provide counterarguments. If a counterargument changes the OP's view, the OP awards a $\\Delta $ to indicate the change and is required to explain why the counterargument is persuasive. In this work, we refer to what is being explained, including both the original post and the persuasive comment, as the explanandum.", "An important advantage of explanations in /r/ChangeMyView is that the explanandum contains most of the required information to provide its explanation. These explanations often select key counterarguments in the persuasive comment and connect them with the original post. As shown in Table TABREF1, the explanation naturally points to, or echoes, part of the explanandum (including both the persuasive comment and the original post) and in this case highlights the argument of “music serving different purposes.”", "These naturally-occurring explanations thus enable us to computationally investigate the selective nature of explanations: “people rarely, if ever, expect an explanation that consists of an actual and complete cause of an event. Humans are adept at selecting one or two causes from a sometimes infinite number of causes to be the explanation” BIBREF3. To understand the selective process of providing explanations, we formulate a word-level task to predict whether a word in an explanandum will be echoed in its explanation.", "Inspired by the observation that words that are likely to be echoed are either frequent or rare, we propose a variety of features to capture how a word is used in the explanandum as well as its non-contextual properties in Section SECREF4. We find that a word's usage in the original post and in the persuasive argument are similarly related to being echoed, except in part-of-speech tags and grammatical relations. 
For instance, verbs in the original post are less likely to be echoed, while the relationship is reversed in the persuasive argument.", "We further demonstrate that these features can significantly outperform a random baseline and even a neural model with significantly more knowledge of a word's context. The difficulty of predicting whether content words (i.e., non-stopwords) are echoed is much greater than that of stopwords, among which adjectives are the most difficult and nouns are relatively the easiest. This observation highlights the important role of nouns in explanations. We also find that the relationship between a word's usage in the original post and in the persuasive comment is crucial for predicting the echoing of content words. Our proposed features can also improve the performance of pointer generator networks with coverage in generating explanations BIBREF4.", "To summarize, our main contributions are:", "[itemsep=0pt,leftmargin=*,topsep=0pt]", "We highlight the importance of computationally characterizing human explanations and formulate a concrete problem of predicting how information is selected from explananda to form explanations, including building a novel dataset of naturally-occurring explanations.", "We provide a computational characterization of natural language explanations and demonstrate the U-shape in which words get echoed.", "We identify interesting patterns in what gets echoed through a novel word-level classification task, including the importance of nouns in shaping explanations and the importance of contextual properties of both the original post and persuasive comment in predicting the echoing of content words.", "We show that vanilla LSTMs fail to learn some of the features we develop and that the proposed features can even improve performance in generating explanations with pointer networks.", "Our code and dataset is available at https://chenhaot.com/papers/explanation-pointers.html." ], [ "To provide background for our study, we first present a brief overview of explanations for the NLP community, and then discuss the connection of our study with pointer networks, linguistic accommodation, and argumentation mining.", "The most developed discussion of explanations is in the philosophy of science. Extensive studies aim to develop formal models of explanations (e.g., the deductive-nomological model in BIBREF5, see BIBREF1 and BIBREF6 for a review). In this view, explanations are like proofs in logic. On the other hand, psychology and cognitive sciences examine “everyday explanations” BIBREF0, BIBREF7. These explanations tend to be selective, are typically encoded in natural language, and shape our understanding and learning in life despite the absence of “axioms.” Please refer to BIBREF8 for a detailed comparison of these two modes of explanation.", "Although explanations have attracted significant interest from the AI community thanks to the growing interest on interpretable machine learning BIBREF9, BIBREF10, BIBREF11, such studies seldom refer to prior work in social sciences BIBREF3. Recent studies also show that explanations such as highlighting important features induce limited improvement on human performance in detecting deceptive reviews and media biases BIBREF12, BIBREF13. Therefore, we believe that developing a computational understanding of everyday explanations is crucial for explainable AI. 
Here we provide a data-driven study of everyday explanations in the context of persuasion.", "In particular, we investigate the “pointers” in explanations, inspired by recent work on pointer networks BIBREF14. Copying mechanisms allow a decoder to generate a token by copying from the source, and have been shown to be effective in generation tasks ranging from summarization to program synthesis BIBREF4, BIBREF15, BIBREF16. To the best of our knowledge, our work is the first to investigate the phenomenon of pointers in explanations.", "Linguistic accommodation and studies on quotations also examine the phenomenon of reusing words BIBREF17, BIBREF18, BIBREF19, BIBREF20. For instance, BIBREF21 show that power differences are reflected in the echoing of function words; BIBREF22 find that news media prefer to quote locally distinct sentences in political debates. In comparison, our word-level formulation presents a fine-grained view of echoing words, and puts a stronger emphasis on content words than work on linguistic accommodation.", "Finally, our work is concerned with an especially challenging problem in social interaction: persuasion. A battery of studies have done work to enhance our understanding of persuasive arguments BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, and the area of argumentation mining specifically investigates the structure of arguments BIBREF28, BIBREF29, BIBREF30. We build on previous work by BIBREF31 and leverage the dynamics of /r/ChangeMyView. Although our findings are certainly related to the persuasion process, we focus on understanding the self-described reasons for persuasion, instead of the structure of arguments or the factors that drive effective persuasion." ], [ "Our dataset is derived from the /r/ChangeMyView subreddit, which has more than 720K subscribers BIBREF31. /r/ChangeMyView hosts conversations where someone expresses a view and others then try to change that person's mind. Despite being fundamentally based on argument, /r/ChangeMyView has a reputation for being remarkably civil and productive BIBREF32, e.g., a journalist wrote “In a culture of brittle talking points that we guard with our lives, Change My View is a source of motion and surprise” BIBREF33.", "The delta mechanism in /r/ChangeMyView allows members to acknowledge opinion changes and enables us to identify explanations for opinion changes BIBREF34. Specifically, it requires “Any user, whether they're the OP or not, should reply to a comment that changed their view with a delta symbol and an explanation of the change.” As a result, we have access to tens of thousands of naturally-occurring explanations and associated explananda. In this work, we focus on the opinion changes of the original posters.", "Throughout this paper, we use the following terminology:", "[itemsep=-5pt,leftmargin=*,topsep=0pt]", "An original post (OP) is an initial post where the original poster justifies his or her opinion. We also use OP to refer to the original poster.", "A persuasive comment (PC) is a comment that directly leads to an opinion change on the part of the OP (i.e., winning a $\\Delta $).", "A top-level comment is a comment that directly replies to an OP, and /r/ChangeMyView requires the top-level comment to “challenge at least one aspect of OP’s stated view (however minor), unless they are asking a clarifying question.”", "An explanation is a comment where an OP acknowledges a change in his or her view and provides an explanation of the change. 
As shown in Table TABREF1, the explanation not only provides a rationale, it can also include other discourse acts, such as expressing gratitude.", "Using https://pushshift.io, we collect the posts and comments in /r/ChangeMyView from January 17th, 2013 to January 31st, 2019, and extract tuples of (OP, PC, explanation). We use the tuples from the final six months of our dataset as the test set, those from the six months before that as the validation set, and the remaining tuples as the training set. The sets contain 5,270, 5,831, and 26,617 tuples respectively. Note that there is no overlap in time between the three sets and the test set can therefore be used to assess generalization including potential changes in community norms and world events.", "Preprocessing. We perform a number of preprocessing steps, such as converting blockquotes in Markdown to quotes, filtering explicit edits made by authors, mapping all URLs to a special @url@ token, and replacing hyperlinks with the link text. We ignore all triples that contain any deleted comments or posts. We use spaCy for tokenization and tagging BIBREF35. We also use the NLTK implementation of the Porter stemming algorithm to store the stemmed version of each word, for later use in our prediction task BIBREF36, BIBREF37. Refer to the supplementary material for more information on preprocessing.", "Data statistics. Table TABREF16 provides basic statistics of the training tuples and how they compare to other comments. We highlight the fact that PCs are on average longer than top-level comments, suggesting that PCs contain substantial counterarguments that directly contribute to opinion change. Therefore, we simplify the problem by focusing on the (OP, PC, explanation) tuples and ignore any other exchanges between an OP and a commenter.", "Below, we highlight some notable features of explanations as they appear in our dataset.", "The length of explanations shows stronger correlation with that of OPs and PCs than between OPs and PCs (Figure FIGREF8). This observation indicates that explanations are somehow better related with OPs and PCs than PCs are with OPs in terms of language use. A possible reason is that the explainer combines their natural tendency towards length with accommodating the PC.", "Explanations have a greater fraction of “pointers” than do persuasive comments (Figure FIGREF8). We measure the likelihood of a word in an explanation being copied from either its OP or PC and provide a similar probability for a PC for copying from its OP. As we discussed in Section SECREF1, the words in an explanation are much more likely to come from the existing discussion than are the words in a PC (59.8% vs 39.0%). This phenomenon holds even if we restrict ourselves to considering words outside quotations, which removes the effect of quoting other parts of the discussion, and if we focus only on content words, which removes the effect of “reusing” stopwords.", "Relation between a word being echoed and its document frequency (Figure FIGREF8). Finally, as a preview of our main results, the document frequency of a word from the explanandum is related to the probability of being echoed in the explanation. Although the average likelihood declines as the document frequency gets lower, we observe an intriguing U-shape in the scatter plot. In other words, the words that are most likely to be echoed are either unusually frequent or unusually rare, while most words in the middle show a moderate likelihood of being echoed." 
], [ "To further investigate how explanations select words from the explanandum, we formulate a word-level prediction task to predict whether words in an OP or PC are echoed in its explanation. Formally, given a tuple of (OP, PC, explanation), we extract the unique stemmed words as $\\mathcal {V}_{\\text{OP}}, \\mathcal {V}_{\\text{PC}}, \\mathcal {V}_{\\text{EXP}}$. We then define the label for each word in the OP or PC, $w \\in \\mathcal {V}_{\\text{OP}} \\cup \\mathcal {V}_{\\text{PC}}$, based on the explanation as follows:", "Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):", "[itemsep=0pt,leftmargin=*,topsep=0pt]", "Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.", "Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.", "How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.", "General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.", "Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:", "[itemsep=0pt,leftmargin=*,topsep=0pt]", "Although we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed across all words, but deeper words are more likely to be echoed in content words and stopwords.", "OPs and PCs generally exhibit similar behavior for most features, except for part-of-speech and grammatical relation (subject, object, and other.) For instance, verbs in an OP are less likely to be echoed, while verbs in a PC are more likely to be echoed.", "Although nouns from both OPs and PCs are less likely to be echoed, within content words, subjects and objects from an OP are more likely to be echoed. Surprisingly, subjects and objects in a PC are less likely to be echoed, which suggests that the original poster tends to refer back to their own subjects and objects, or introduce new ones, when providing explanations.", "Later words in OPs and PCs are more likely to be echoed, especially in OPs. This could relate to OPs summarizing their rationales at the end of their post and PCs putting their strongest points last.", "Although the number of surface forms in an OP or PC is positively correlated with being echoed, the differences in surface forms show reverse trends: the more surface forms of a word that show up only in the PC (i.e., not in the OP), the more likely a word is to be echoed. However, the reverse is true for the number of surface forms in only the OP. 
Such contrast echoes BIBREF31, in which dissimilarity in word usage between the OP and PC was a predictive feature of successful persuasion." ], [ "We further examine the effectiveness of our proposed features in a predictive setting. These features achieve strong performance in the word-level classification task, and can enhance neural models in both the word-level task and generating explanations. However, the word-level task remains challenging, especially for content words." ], [ "We consider two classifiers for our word-level classification task: logistic regression and gradient boosting tree (XGBoost) BIBREF39. We hypothesized that XGBoost would outperform logistic regression because our problem is non-linear, as shown in Figure FIGREF8.", "To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between a vanilla LSTM demonstrates the utility of our proposed features. We scale all features to $[0, 1]$ before fitting the models. As introduced in Section SECREF3, we split our tuples of (OP, PC, explanation) into training, validation, and test sets, and use the validation set for hyperparameter tuning. Refer to the supplementary material for additional details in the experiment.", "Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances)." ], [ "Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem.", "Although the vanilla LSTM model incorporates additional knowledge (in the form of word embeddings), the feature-based XGBoost and logistic regression models both outperform the vanilla LSTM model. Concatenating our proposed features with word embeddings leads to improved performance from the LSTM model, which becomes comparable to XGBoost. This suggests that our proposed features can be difficult to learn with an LSTM alone.", "Despite the non-linearity observed in Figure FIGREF8, XGBoost only outperforms logistic regression by a small margin. In the rest of this section, we use XGBoost to further examine the effectiveness of different groups of features, and model performance in different conditions.", "Ablation performance (Table TABREF34). 
First, if we only consider a single group of features, as we hypothesized, the relation between OP and PC is crucial and leads to almost as strong performance in content words as using all features. To further understand the strong performance of OP-PC relation, Figure FIGREF28 shows the feature importance in the ablated model, measured by the normalized total gain (see the supplementary material for feature importance in the full model). A word's occurrence in both the OP and PC is clearly the most important feature, with distance between its POS tag distributions as the second most important. Recall that in Table TABREF18 we show that words that have similar POS behavior between the OP and PC are more likely to be echoed in the explanation.", "Overall, it seems that word-level properties contribute the most valuable signals for predicting stopwords. If we restrict ourselves to only information in either an OP or PC, how a word is used in a PC is much more predictive of content word echoing (0.233 vs 0.191). This observation suggests that, for content words, the PC captures more valuable information than the OP. This finding is somewhat surprising given that the OP sets the topic of discussion and writes the explanation.", "As for the effects of removing a group of features, we can see that there is little change in the performance on content words. This can be explained by the strong performance of the OP-PC relation on its own, and the possibility of the OP-PC relation being approximated by OP and PC usage. Again, word-level properties are valuable for strong performance in stopwords.", "Performance vs. word source (Figure FIGREF28). We further break down the performance by where a word is from. We can group a word based on whether it shows up only in an OP, a PC, or both OP and PC, as shown in Table TABREF1. There is a striking difference between the performance in the three categories (e.g., for all words, 0.63 in OP & PC vs. 0.271 in PC only). The strong performance on words in both the OP and PC applies to stopwords and content words, even accounting for the shift in the random baseline, and recalls the importance of occurring both in OP and PC as a feature.", "Furthermore, the echoing of words from the PC is harder to predict (0.271) than from the OP (0.347) despite the fact that words only in PCs are more likely to be echoed than words only in OPs (13.5% vs. 8.6%). The performance difference is driven by stopwords, suggesting that our overall model is better at capturing signals for stopwords used in OPs. This might relate to the fact that the OP and the explanation are written by the same author; prior studies have demonstrated the important role of stopwords for authorship attribution BIBREF43.", "Nouns are the most reliably predicted part-of-speech tag within content words (Table TABREF35). Next, we break down the performance by part-of-speech tags. We focus on the part-of-speech tags that are semantically important, namely, nouns, proper nouns, verbs, adverbs, and adjectives.", "Prediction performance can be seen as a proxy for how reliably a part-of-speech tag is reused when providing explanations. Consistent with our expectations for the importance of nouns and verbs, our models achieve the best performance on nouns within content words. 
Verbs are more challenging, but become the least difficult tag to predict when we consider all words, likely due to stopwords such as “have.” Adjectives turn out to be the most challenging category, suggesting that adjectival choice is perhaps more arbitrary than other parts of speech, and therefore less central to the process of constructing an explanation. The important role of nouns in shaping explanations resonates with the high recall rate of nouns in memory tasks BIBREF44." ], [ "One way to measure the ultimate success of understanding pointers in explanations is to be able to generate explanations. We use the pointer generator network with coverage as our starting point BIBREF4, BIBREF46 (see the supplementary material for details). We investigate whether concatenating our proposed features with word embeddings can improve generation performance, as measured by ROUGE scores.", "Consistent with results in sequence tagging for word-level echoing prediction, our proposed features can enhance a neural model with copying mechanisms (see Table TABREF37). Specifically, their use leads to statistically significant improvement in ROUGE-1 and ROUGE-L, while slightly hurting the performance in ROUGE-2 (the difference is not statistically significant). We also find that our features can increase the likelihood of copying: an average of 17.59 unique words get copied to the generated explanation with our features, compared to 14.17 unique words without our features. For comparison, target explanations have an average of 34.81 unique words. We emphasize that generating explanations is a very challenging task (evidenced by the low ROUGE scores and examples in the supplementary material), and that fully solving the generation task requires more work." ], [ "In this work, we conduct the first large-scale empirical study of everyday explanations in the context of persuasion. We assemble a novel dataset and formulate a word-level prediction task to understand the selective nature of explanations. Our results suggest that the relation between an OP and PC plays an important role in predicting the echoing of content words, while a word's non-contextual properties matter for stopwords. We show that vanilla LSTMs fail to learn some of the features we develop and that our proposed features can improve the performance in generating explanations using pointer networks. We also demonstrate the important role of nouns in shaping explanations.", "Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations.", "There are many promising research directions for future work in advancing the computational understanding of explanations. First, although /r/ChangeMyView has the useful property that its explanations are closely connected to its explananda, it is important to further investigate the extent to which our findings generalize beyond /r/ChangeMyView and Reddit and establish universal properties of explanations. 
Second, it is important to connect the words in explanations that we investigate here to the structure of explanations in psychology BIBREF7. Third, in addition to understanding what goes into an explanation, we need to understand what makes an explanation effective. A better understanding of explanations not only helps develop explainable AI, but also informs the process of collecting explanations that machine learning systems learn from BIBREF48, BIBREF49, BIBREF50." ], [ "We thank Kimberley Buchan, anonymous reviewers, and members of the NLP+CSS research group at CU Boulder for their insightful comments and discussions; Jason Baumgartner for sharing the dataset that enabled this research." ], [ "Before tokenizing, we pass each OP, PC, and explanation through a preprocessing pipeline, with the following steps:", "Occasionally, /r/ChangeMyView's moderators will edit comments, prefixing their edits with “Hello, users of CMV” or “This is a footnote” (see Table TABREF46). We remove this, and any text that follows on the same line.", "We replace URLs with a “@url@” token, defining a URL to be any string which matches the following regular expression: (https?://[^\s)]*).", "We replace “$\Delta $” symbols and their analogues—such as “$\delta $”, “&#8710;”, and “!delta”—with the word “delta”. We also remove the word “delta” from explanations if the explanation starts with delta.", "Reddit–specific prefixes, such as “u/” (denoting a user) and “r/” (denoting a subreddit), are removed, as we observed that they often interfered with spaCy's ability to correctly parse its inputs.", "We remove any text matching the regular expression EDIT(.*?):.* from the beginning of the match to the end of that line, as well as variations, such as Edit(.*?):.*.", "Reddit allows users to insert blockquoted text. We extract any blockquotes and surround them with standard quotation marks.", "We replace all contiguous whitespace with a single space. We also do this with tab characters and carriage returns, and with two or more hyphens, asterisks, or underscores.", "Tokenizing the data. After passing text through our preprocessing pipeline, we use the default spaCy pipeline to extract part-of-speech tags, dependency tags, and entity details for each token BIBREF35. In addition, we use NLTK to stem words BIBREF36. This is used to compute all word-level features discussed in Section 4 of the main paper." ], [ "Figure FIGREF49 shows a similar U-shape in the probability of a word being echoed in the PC. However, visually, we can see that rare words seem more likely to have high echoing probability in explanations, while that probability is higher for words with moderate frequency in PCs. As PCs tend to be longer than explanations, we also used the echoing probability of the most frequent words to normalize the probability of other words so that they are comparable. We indeed observed a higher likelihood of echoing the rare words, but a lower likelihood of echoing words with moderate frequency in explanations than in PCs." ], [ "Given an OP, PC, and explanation, we calculate a 66–dimensional vector for each unique stem in the concatenated OP and PC. 
Here, we describe the process of calculating each feature.", "Inverse document frequency: for a stem $s$, the inverse document frequency is given by $\\log \\frac{N}{\\mathrm {df}_s}$, where $N$ is the total number of documents (here, OPs and PCs) in the training set, and $\\mathrm {df}_s$ is the number of documents in the training data whose set of stemmed words contains $s$.", "Stem length: the number of characters in the stem.", "Wordnet depth (min): starting with the stem, this is the length of the minimum hypernym path to the synset root.", "Wordnet depth (max): similarly, this is the length of the maximum hypernym path.", "Stem transfer probability: the percentage of times in which a stem seen in the explanandum is also seen in the explanation. If, during validation or testing, a stem is encountered for the first time, we set this to be the mean probability of transfer over all stems seen in the training data.", "OP part–of–speech tags: a stem can represent multiple parts of speech. For example, both “traditions” and “traditional” will be stemmed to “tradit.” We count the percentage of times the given stem appears as each part–of–speech tag, following the Universal Dependencies scheme BIBREF53. If the stem does not appear in the OP, each part–of–speech feature will be $\\frac{1}{16}$.", "OP subject, object, and other: Given a stem $s$, we calculate the percentage of times that $s$'s surface forms in the OP are classified as subjects, objects, or something else by SpaCy. We follow the CLEAR guidelines, BIBREF51 and use the following tags to indicate a subject: nsubj, nsubjpass, csubj, csubjpass, agent, and expl. Objects are identified using these tags: dobj, dative, attr, oprd. If $s$ does not appear at all in the OP, we let subject, object, and other each equal $\\frac{1}{3}$.", "OP term frequency: the number of times any surface form of a stem appears in the list of tokens that make up the OP.", "OP normalized term frequency: the percentage of the OP's tokens which are a surface form of the given stem.", "OP # of surface forms: the number of different surface forms for the given stem.", "OP location: the average location of each surface form of the given stem which appears in the OP, where the location of a surface form is defined as the percentage of tokens which appear after that surface form. If the stem does not appear at all in the OP, this value is $\\frac{1}{2}$.", "OP is in quotes: the number of times the stem appears in the OP surrounded by quotation marks.", "OP is entity: the percentage of tokens in the OP that are both a surface form for the given stem, and are tagged by SpaCy as one of the following entities: PERSON, NORP, FAC, ORG, GPE, LOC, PRODUCT, EVENT, WORK_OF_ART, LAW, and LANGUAGE.", "PC equivalents of features 6-30.", "In both OP and PC: 1, if one of the stem's surface forms appears in both the OP and PC. 
0 otherwise.", "# of unique surface forms in OP: for the given stem, the number of surface forms that appear in the OP, but not in the PC.", "# of unique surface forms in PC: for the given stem, the number of surface forms that appear in the PC, but not in the OP.", "Stem part–of–speech distribution difference: we consider the concatenation of features 6-21, along with the concatenation of features 31-46, as two distributions, and calculate the Jensen–Shannon divergence between them.", "Stem dependency distribution difference: similarly, we consider the concatenation of features 22-24 (OP dependency labels), and the concatenation of features 47-49 (PC dependency labels), as two distributions, and calculate the Jensen–Shannon divergence between them.", "OP length: the number of tokens in the OP.", "PC length: the number of tokens in the PC.", "Length difference: the absolute value of the difference between OP length and PC length.", "Avg. word length difference: the difference between the average number of characters per token in the OP and the average number of characters per token in the PC.", "OP/PC part–of–speech tag distribution difference: the Jensen–Shannon divergence between the part–of–speech tag distributions of the OP on the one hand, and the PC on the other.", "Depth of the PC in the thread: since there can be many back–and–forth replies before a user awards a delta, we number each comment in a thread, starting at 0 for the OP, and incrementing for each new comment before the PC appears." ], [ "For each non–LSTM classifier, we train 11 models: one full model, and forward and backward models for each of the five feature groups. To train, we fit on the training set and use the validation set for hyperparameter tuning.", "For the random model, since the echo rate of the training set is 15%, we simply predict 1 with 15% probability, and 0 otherwise.", "For logistic regression, we use the lbfgs solver. To tune hyperparameters, we perform an exhaustive grid search, with $C$ taking values from $\\lbrace 10^{x}:x\\in \\lbrace -1, 0, 1, 2, 3, 4\\rbrace \\rbrace $, and the respective weights of the negative and positive classes taking values from $\\lbrace (x, 1-x): x\\in \\lbrace 0.25, 0.20, 0.15\\rbrace \\rbrace $.", "We also train XGBoost models. Here, we use a learning rate of $0.1$, 1000 estimator trees, and no subsampling. We perform an exhaustive grid search to tune hyperparameters, with the max tree depth equaling 5, 7, or 9, the minimum weight of a child equaling 3, 5, or 7, and the weight of a positive class instance equaling 3, 4, or 5.", "Finally, we train two LSTM models, each with a single 300–dimensional hidden layer. Due to efficiency considerations, we eschewed a full search of the parameter space, but experimented with different values of dropout, learning rate, positive class weight, and batch size. We ultimately trained each model for five epochs with a batch size of 32 and a learning rate of 0.001, using the Adam optimizer BIBREF52. We also weight positive instances four times more highly than negative instances." ], [ "We formulate an abstractive summarization task using an OP concatenated with the PC as a source, and the explanation as target. We train two models, one with the features described above, and one without. A shared vocabulary of 50k words is constructed from the training set by setting the maximum encoding length to 500 words. We set the maximum decoding length to 100. 
We use a pointer generator network with coverage for generating explanations, using a bidirectional LSTM as an encoder and a unidirectional LSTM as a decoder. Both use a 256-dimensional hidden state. The parameters of this network are tuned using a validation set of five thousand instances. We constrain the batch size to 16 and train the network for 20k steps, using the parameters described in Table TABREF82." ] ] }
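To make the stem-level feature descriptions in the supplementary sections above more concrete, here is a minimal, illustrative sketch (not the authors' code) of two of those features: inverse document frequency and stem transfer probability. It assumes NLTK's PorterStemmer and word_tokenize; the variable and function names are our own.

```python
import math
from collections import Counter

from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

stemmer = PorterStemmer()

def stem_set(text):
    """Set of stems appearing in one document (an OP or a PC)."""
    return {stemmer.stem(tok) for tok in word_tokenize(text.lower())}

def inverse_document_frequency(training_docs):
    """idf(s) = log(N / df_s), where the documents are the OPs and PCs
    in the training set."""
    n_docs = len(training_docs)
    df = Counter()
    for doc in training_docs:
        df.update(stem_set(doc))
    return {s: math.log(n_docs / df_s) for s, df_s in df.items()}

def stem_transfer_probability(training_pairs):
    """Fraction of training (explanandum, explanation) pairs in which a stem
    seen in the explanandum is also seen in the explanation."""
    seen, echoed = Counter(), Counter()
    for explanandum, explanation in training_pairs:
        explanation_stems = stem_set(explanation)
        for s in stem_set(explanandum):
            seen[s] += 1
            echoed[s] += int(s in explanation_stems)
    return {s: echoed[s] / seen[s] for s in seen}
```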
{ "question": [ "What non-contextual properties do they refer to?", "What is the baseline?", "What are their proposed features?", "What are overall baseline results on new this new task?", "What metrics are used in evaluation of this task?", "Do authors provide any explanation for intriguing patterns of word being echoed?", "What features are proposed?" ], "question_id": [ "6adec34d86095643e6b89cda5c7cd94f64381acc", "62ba1fefc1eb826fe0cbac092d37a3e2098967e9", "93ac147765ee2573923f68aa47741d4bcbf88fa8", "14c0328e8ec6360a913b8ecb3e50cb27650ff768", "6073fa9050da76eeecd8aa3ccc7ecb16a238d83f", "eacd7e540cc34cb45770fcba463f4bf968681d59", "1124804c3702499b78cf0678bab5867e81284b6c" ], "nlp_background": [ "two", "two", "two", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "These features are derived directly from the word and capture the general tendency of a word being echoed in explanations." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations." ], "highlighted_evidence": [ "Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations." ] } ], "annotation_id": [ "76c6e79704ddfdd42b6f2371d2ff18e20336f73e" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "random method ", "LSTM " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).", "To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between a vanilla LSTM demonstrates the utility of our proposed features. We scale all features to $[0, 1]$ before fitting the models. As introduced in Section SECREF3, we split our tuples of (OP, PC, explanation) into training, validation, and test sets, and use the validation set for hyperparameter tuning. Refer to the supplementary material for additional details in the experiment." 
], "highlighted_evidence": [ " To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).", "To examine the utility of our features in a neural framework, we further adapt our word-level task as a tagging task, and use LSTM as a baseline. Specifically, we concatenate an OP and PC with a special token as the separator so that an LSTM model can potentially distinguish the OP from PC, and then tag each word based on the label of its stemmed version. We use GloVe embeddings to initialize the word embeddings BIBREF40. We concatenate our proposed features of the corresponding stemmed word to the word embedding; the resulting difference in performance between a vanilla LSTM demonstrates the utility of our proposed features." ] } ], "annotation_id": [ "2ba3e5aa8fd9d044b90923587b31dfdc36d43c4e" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Non-contextual properties of a word", "Word usage in an OP or PC (two groups)", "How a word connects an OP and PC.", "General OP/PC properties" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):", "[itemsep=0pt,leftmargin=*,topsep=0pt]", "Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.", "Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.", "How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.", "General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.", "Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:" ], "highlighted_evidence": [ "Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):\n\n[itemsep=0pt,leftmargin=*,topsep=0pt]\n\nNon-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.\n\nWord usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.\n\nHow a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.\n\nGeneral OP/PC properties. These features capture the general properties of a conversation. 
They can be used to characterize the background distribution of echoing.\n\nTable TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:\n\n[itemsep=0pt,leftmargin=*,topsep=0pt]" ] } ], "annotation_id": [ "8ef9a229dc74bf1b06e8e9f42f2124032596b7b4" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "all of our models outperform the random baseline by a wide margin", "he best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).", "Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem." ], "highlighted_evidence": [ "To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances).", "Overall performance (Figure FIGREF28). Although our word-level task is heavily imbalanced, all of our models outperform the random baseline by a wide margin. As expected, content words are much more difficult to predict than stopwords, but the best F1 score in content words more than doubles that of the random baseline (0.286 vs. 0.116). Notably, although we strongly improve on our random baseline, even our best F1 scores are relatively low, and this holds true regardless of the model used. Despite involving more tokens than standard tagging tasks (e.g., BIBREF41 and BIBREF42), predicting whether a word is going to be echoed in explanations remains a challenging problem." ] } ], "annotation_id": [ "2193617a9f2dc7166977e951ffa94bf430aad96e" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "F1 score" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances)." 
], "highlighted_evidence": [ "Evaluation metric. Since our problem is imbalanced, we use the F1 score as our evaluation metric. For the tagging approach, we average the labels of words with the same stemmed version to obtain a single prediction for the stemmed word. To establish a baseline, we consider a random method that predicts the positive label with 0.15 probability (the base rate of positive instances)." ] } ], "annotation_id": [ "98bf705125cf75633ed30fedc861441cf2b39522" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations." ], "highlighted_evidence": [ "Although our approach strongly outperforms random baselines, the relatively low F1 scores indicate that predicting which word is echoed in explanations is a very challenging task. It follows that we are only able to derive a limited understanding of how people choose to echo words in explanations. The extent to which explanation construction is fundamentally random BIBREF47, or whether there exist other unidentified patterns, is of course an open question. We hope that our study and the resources that we release encourage further work in understanding the pragmatics of explanations." ] } ], "annotation_id": [ "8d7dda29a2296bdaa2368bc35c75f16565b46806" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Non-contextual properties of a word", "Word usage in an OP or PC (two groups)", "How a word connects an OP and PC", "General OP/PC properties" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):", "[itemsep=0pt,leftmargin=*,topsep=0pt]", "Non-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.", "Word usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.", "How a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.", "General OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.", "Table TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. 
In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:", "Although we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed across all words, but deeper words are more likely to be echoed in content words and stopwords." ], "highlighted_evidence": [ "Our prediction task is thus a straightforward binary classification task at the word level. We develop the following five groups of features to capture properties of how a word is used in the explanandum (see Table TABREF18 for the full list):\n\n[itemsep=0pt,leftmargin=*,topsep=0pt]\n\nNon-contextual properties of a word. These features are derived directly from the word and capture the general tendency of a word being echoed in explanations.\n\nWord usage in an OP or PC (two groups). These features capture how a word is used in an OP or PC. As a result, for each feature, we have two values for the OP and PC respectively.\n\nHow a word connects an OP and PC. These features look at the difference between word usage in the OP and PC. We expect this group to be the most important in our task.\n\nGeneral OP/PC properties. These features capture the general properties of a conversation. They can be used to characterize the background distribution of echoing.\n\nTable TABREF18 further shows the intuition for including each feature, and condensed $t$-test results after Bonferroni correction. Specifically, we test whether the words that were echoed in explanations have different feature values from those that were not echoed. In addition to considering all words, we also separately consider stopwords and content words in light of Figure FIGREF8. Here, we highlight a few observations:\n\n[itemsep=0pt,leftmargin=*,topsep=0pt]\n\nAlthough we expect more complicated words (#characters) to be echoed more often, this is not the case on average. We also observe an interesting example of Simpson's paradox in the results for Wordnet depth BIBREF38: shallower words are more likely to be echoed across all words, but deeper words are more likely to be echoed in content words and stopwords." ] } ], "annotation_id": [ "79886fa3ae326a5f8f82328db74784241c0fd679" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Table 1: Sample data that were affected by preprocessing.", "Figure 1: The U-shape exists both in Figure 1a and Figure 1b, but not in Figure 1c.", "Table 2: Full testing results after Bonferroni correction.", "Table 3: Feature importance for the full XGBoost model, as measured by total gain.", "Table 4: Parameters tuned on validation dataset containing 5k instances." ], "file": [ "1-Table1-1.png", "3-Figure1-1.png", "5-Table2-1.png", "6-Table3-1.png", "7-Table4-1.png" ] }
1803.03664
Automating Reading Comprehension by Generating Question and Answer Pairs
Neural network-based methods represent the state-of-the-art in question generation from text. Existing work focuses on generating only questions from text without concerning itself with answer generation. Moreover, our analysis shows that handling rare words and generating the most appropriate question given a candidate answer are still challenges facing existing approaches. We present a novel two-stage process to generate question-answer pairs from the text. For the first stage, we present alternatives for encoding the span of the pivotal answer in the sentence using Pointer Networks. In our second stage, we employ sequence to sequence models for question generation, enhanced with rich linguistic features. Finally, global attention and answer encoding are used for generating the question most relevant to the answer. We motivate and linguistically analyze the role of each component in our framework and consider compositions of these. This analysis is supported by extensive experimental evaluations. Using standard evaluation metrics as well as human evaluations, our experimental results validate the significant improvement in the quality of questions generated by our framework over the state-of-the-art. The technique presented here represents another step towards more automated reading comprehension assessment. We also present a live system \footnote{Demo of the system is available at \url{https://www.cse.iitb.ac.in/~vishwajeet/autoqg.html}.} to demonstrate the effectiveness of our approach.
{ "section_name": [ "Introduction", "Problem Formulation", "Related Work", "Approach and Contributions", "Answer Selection and Encoding", "Named Entity Selection", "Answer Selection using Pointer Networks", "Question Generation", "Sequence to Sequence Model", "Linguistic Features ", "Implementation Details", "Experiments and Results", "Results and Analysis ", "Conclusion" ], "paragraphs": [ [ "Asking relevant and intelligent questions has always been an integral part of human learning, as it can help assess the user's understanding of a piece of text (an article, an essay etc.). However, forming questions manually can be sometimes arduous. Automated question generation (QG) systems can help alleviate this problem by learning to generate questions on a large scale and in lesser time. Such a system has applications in a myriad of areas such as FAQ generation, intelligent tutoring systems, and virtual assistants.", "The task for a QG system is to generate meaningful, syntactically correct, semantically sound and natural questions from text. Additionally, to further automate the assessment of human users, it is highly desirable that the questions are relevant to the text and have supporting answers present in the text.", "Figure 1 below shows a sample of questions generated by our approach using a variety of configurations (vanilla sentence, feature tagged sentence and answer encoded sentence) that will be described later in this paper.", "Initial attempts at automated question generation were heavily dependent on a limited, ad-hoc, hand-crafted set of rules BIBREF0 , BIBREF1 . These rules focus mainly on the syntactic structure of the text and are limited in their application, only to sentences of simple structures. Recently, the success of sequence to sequence learning models BIBREF2 opened up possibilities of looking beyond a fixed set of rules for the task of question generation BIBREF3 , BIBREF4 . When we encode ground truth answers into the sentence along with other linguistic features, we get improvement of upto 4 BLEU points along with improvement in the quality of questions generated. A recent deep learning approach to question generation BIBREF3 investigates a simpler task of generating questions only from a triplet of subject, relation and object. In contrast, we build upon recent works that train sequence to sequence models for generating questions from natural language text.", "Our work significantly improves the latest work of sequence to sequence learning based question generation using deep networks BIBREF4 by making use of (i) an additional module to predict span of best answer candidate on which to generate the question (ii) several additional rich set of linguistic features to help model generalize better (iii) suitably modified decoder to generate questions more relevant to the sentence.", "The rest of the paper is organized as follows. In Section \"Problem Formulation\" we formally describe our question generation problem, followed by a discussion on related work in Section \"Related Work\" . In Section \"Approach and Contributions\" we describe our approach and methodology and summarize our main contributions. In Sections \"Named Entity Selection\" and \"Question Generation\" we describe the two main components of our framework. Implementation details of the models are described in Section \"Implementation Details\" , followed by experimental results in Section \"Experiments and Results\" and conclusion in Section \"Conclusion\" ." 
], [ "Given a sentence S, viewed as a sequence of words, our goal is to generate a question Q, which is syntactically and semantically correct, meaningful and natural. More formally, given a sentence S, our model's main objective is to learn the underlying conditional probability distribution $P(\\textbf {Q}|\\textbf {S};\\theta )$ parameterized by $\\theta $ to generate the most appropriate question that is closest to the human generated question(s). Our model learns $\\theta $ during training using sentence/question pairs such that the probability $P(\\textbf {Q}|\\textbf {S};\\theta $ ) is maximized over the given training dataset.", "Let the sentence S be a sequence of $M$ words $(w_1, w_2, w_3, ...w_M)$ , and question Q a sequence of $N$ words $(y_1, y_2, y_3,...y_N)$ . Mathematically, the model is meant to generate Q* such that: ", "$$\\mathbf {Q^* } & = & \\underset{\\textbf {Q}}{\\operatorname{argmax}}~P(\\textbf {Q}|\\textbf {S};\\theta ) \\\\\n& = & \\underset{y_1,..y_{n}}{\\operatorname{argmax}}~\\prod _{i=1}^{N}P(y_i|y_1,..y_{i-1},w_1..w_M;\\theta )$$ (Eq. 3) ", "Equation ( 3 ) is to be realized using a RNN-based architecture, which is described in detail in Section UID17 ." ], [ "Heilman and Smith BIBREF0 use a set of hand-crafted syntax-based rules to generate questions from simple declarative sentences. The system identifies multiple possible answer phrases from all declarative sentences using the constituency parse tree structure of each sentence. The system then over-generates questions and ranks them statistically by assigning scores using logistic regression.", " BIBREF1 use semantics of the text by converting it into the Minimal Recursion Semantics notation BIBREF5 . Rules specific to the summarized semantics are applied to generate questions. Most of the approaches proposed for the QGSTEC challenge BIBREF6 are also rule-based systems, some of which put to use sentence features such as part of speech (POS) tags and named entity relations (NER) tags. BIBREF7 use ASSERT (an automatic statistical semantic role tagger that can annotate naturally occurring text with semantic arguments) for semantic role parses, generate questions based on rules and rank them based on subtopic similarity score using ESSK (Extended String Subsequence Kernel). BIBREF8 break sentences into fine and coarse classes and proceed to generate questions based on templates matching these classes.", "All approaches mentioned so far are heavily dependent on rules whose design requires deep linguistic knowledge and yet are not exhaustive enough. Recent successes in neural machine translation BIBREF2 , BIBREF9 have helped address this problem by letting deep neural nets learn the implicit rules through data. This approach has inspired application of sequence to sequence learning to automated question generation. BIBREF3 propose an attention-based BIBREF10 , BIBREF11 approach to question generation from a pre-defined template of knowledge base triples (subject, relation, object). Additionally, recent studies suggest that the sharp learning capability of neural networks does not make linguistic features redundant in machine translation. BIBREF12 suggest augmenting each word with its linguistic features such as POS, NER. 
BIBREF13 suggest a tree-based encoder to incorporate features, although for a different application.", "We build on the recent sequence to sequence learning-based method of question generation by BIBREF4 , but with significant differences and improvements from all previous works in the following ways. (i) Unlike BIBREF4 our question generation technique is pivoted on identification of the best candidate answer (span) around which the question should be generated. (ii) Our approach is enhanced with the use of several syntactic and linguistic features that help in learning models that generalize well. (iii) We propose a modified decoder to generate questions relevant to the text." ], [ "Our approach to generating question-answer pairs from text is a two-stage process: in the first stage we select the most relevant and appropriate candidate answer, i.e., the pivotal answer, using an answer selection module, and in the second stage we encode the answer span in the sentence and use a sequence to sequence model with a rich set of linguistic features to generate questions for the pivotal answer.", "Our sentence encoder transforms the input sentence into a list of fixed-length continuous vector word representation, each input symbol being represented as a vector. The question decoder takes in the output from the sentence encoder and produces one symbol at a time and stops at the EOS (end of sentence) marker. To focus on certain important words while generating questions (decoding) we use a global attention mechanism. The attention module is connected to both the sentence encoder as well as the question decoder, thus allowing the question decoder to focus on appropriate segments of the sentence while generating the next word of the question. We include linguistic features for words so that the model can learn more generalized syntactic transformations. We provide a detailed description of these modules in the following sections. Here is a summary of our three main contributions: (1) a versatile neural network-based answer selection and Question Generation (QG) approach and an associated dataset of question/sentence pairs suitable for learning answer selection, (2) incorporation of linguistic features that help generalize the learning to syntactic and semantic transformations of the input, and (3) a modified decoder to generate the question most relevant to the text." ], [ "In applications such as reading comprehension, it is natural for a question to be generated keeping the answer in mind (hereafter referred to as the `pivotal' answer). Identifying the most appropriate pivotal answer will allow comprehension be tested more easily and with even higher automation. We propose a novel named entity selection model and answer selection model based on Pointer Networks BIBREF14 . These models give us the span of pivotal answer in the sentence, which we encode using the BIO notation while generating the questions." ], [ "In our first approach, we restrict our pivotal answer to be one of the named entities in the sentence, extracted using the Stanford CoreNLP toolkit. To choose the most appropriate pivotal answer for QG from a set of candidate entities present in the sentence we propose a named entity selection model. We train a multi-layer perceptron on the sentence, named entities present in the sentence and the ground truth answer. The model learns to predict the pivotal answer given the sentence and a set of candidate entities. The sentence $S = (w_1, w_2, ... 
, w_n)$ is first encoded using a 2 layered unidirectional LSTM encoder into hidden activations $H = (h_1^s, h_2^s, ... , h_n^s)$ . For a named entity $NE = (w_i, ... , w_j)$ , a vector representation (R) is created as $<h_n^s;h_{mean}^s;h_{mean}^{ne}>$ , where $h_n^s$ is the final state of the hidden activations, $h_{mean}^s$ is the mean of all the activations and $h_{mean}^{ne}$ is the mean of hidden activations $(h_i^s, ... , h_j^s)$ between the span of the named entity. This representation vector R is fed into a multi-layer perceptron, which predicts the probability of a named entity being a pivotal answer. Then we select the entity with the highest probability as the answer entity. More formally, ", "$$P(NE_i|S) = softmax(\\textbf {R}_i.W+B)$$ (Eq. 6) ", "where $W$ is weight, $B$ is bias, and $P(NE_i|S)$ is the probability of named entity being the pivotal answer." ], [ "We propose a novel Pointer Network BIBREF14 based approach to find the span of pivotal answer given a sentence. Using the attention mechanism, a boundary Pointer Network output start and end positions from the input sequence. More formally, the problem can be formulated as follows: given a sentence S, we want to predict the start index $a_k^{start}$ and the end index $a_k^{end}$ of the pivotal answer. The main motivation in using a boundary pointer network is to predict the span from the input sequence as output. While we adapt the boundary pointer network to predict the start and end index positions of the pivotal answer in the sentence, we also present results using a sequence pointer network instead.", "Answer sequence pointer network produces a sequence of pointers as output. Each pointer in the sequence is word index of some token in the input. It only ensures that output is contained in the sentence but isn't necessarily a substring. Let the encoder's hidden states be $H = (h_1,h_2,\\ldots ,h_n)$ for a sentence the probability of generating output sequence $O$ = $(o_1,o_2,\\ldots ,o_m)$ is defined as, ", "$$P(O|S) = \\prod P(o_i|o_1,o_2,o_3,\\ldots ,o_{i-1},H)$$ (Eq. 8) ", "We model the probability distribution as: ", "$$u^i = v^T tanh(W^e\\hat{H}+W^dD_i)$$ (Eq. 9) ", "$$P(o_i|o_1,o_2,\\ldots .,o_{i-1},H) = softmax(u^i)$$ (Eq. 10) ", "Here, $W^e\\in R^{d \\times 2d}$ , $W^D\\in R^{d \\times d}$ , $v\\in R^d$ are the model parameters to be learned. $\\hat{H}$ is ${<}H;0{>}$ , where a 0 vector is concatenated with LSTM encoder hidden states to produce an end pointer token. $D_i$ is produced by taking the last state of the LSTM decoder with inputs ${<}softmax(u^i)\\hat{H};D_{i-1}{>}$ . $D_0$ is a zero vector denoting the start state of the decoder.", "Answer boundary pointer network produces two tokens corresponding to the start and end index of the answer span. The probability distribution model remains exactly the same as answer sequence pointer network. The boundary pointer network is depicted in Figure 2 .", "We take sentence S = $(w_1,w_2,\\ldots ,w_M)$ and generate the hidden activations H by using embedding lookup and an LSTM encoder. 
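As a concrete illustration of the pointer attention in Eq. 9 and Eq. 10 above, the following is a hedged PyTorch sketch of the scoring step $u^i = v^T tanh(W^e\hat{H}+W^dD_i)$ followed by a softmax over input positions. The module and variable names are ours, and the full sequence/boundary decoding loops are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerAttention(nn.Module):
    """Scores every input position against the current decoder state D_i."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W_e = nn.Linear(enc_dim, attn_dim, bias=False)  # applied to H_hat
        self.W_d = nn.Linear(dec_dim, attn_dim, bias=False)  # applied to D_i
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, seq_len, enc_dim)  -- H_hat (H with an appended
        #             zero vector when an explicit end-pointer token is used)
        # dec_state:  (batch, dec_dim)           -- D_i
        u = self.v(torch.tanh(
            self.W_e(enc_states) + self.W_d(dec_state).unsqueeze(1)
        )).squeeze(-1)                           # (batch, seq_len)
        return F.softmax(u, dim=-1)              # P(o_i | o_1..o_{i-1}, H)
```

In the sequence variant this distribution is emitted once per output pointer (and, as described above, its weighted sum over H_hat is fed back into the decoder), while the boundary variant emits it only twice, for the start and end indices of the answer span.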
As the pointers are not conditioned over a second sentence, the decoder is fed with just a start state.", "Example: For the Sentence: “other past residents include composer journalist and newspaper editor william henry wills , ron goodwin , and journalist angela rippon and comedian dawn french”, the answer pointers produced are:", "Pointer(s) by answer sequence: [6,11,20] $\\rightarrow $ journalist henry rippon", "Pointer(s) by answer boundary: [10,12] $\\rightarrow $ william henry wills" ], [ "After encoding the pivotal answer (prediction of the answer selection module) in a sentence, we train a sequence to sequence model augmented with a rich set of linguistic features to generate the question. In sections below we describe our linguistic features as well as our sequence to sequence model." ], [ "Sequence to sequence models BIBREF2 learn to map input sequence (sentence) to an intermediate fixed length vector representation using an encoder RNN along with the mapping for translating this vector representation to the output sequence (question) using another decoder RNN. Encoder of the sequence to sequence model first conceptualizes the sentence as a single fixed length vector before passing this along to the decoder which uses this vector and attention weights to generate the output.", "Sentence Encoder: The sentence encoder is realized using a bi-directional LSTM. In the forward pass, the given sentence along with the linguistic features is fed through a recurrent activation function recursively till the whole sentence is processed. Using one LSTM as encoder will capture only the left side sentence dependencies of the current word being fed. To alleviate this and thus to also capture the right side dependencies of the sentence for the current word while predicting in the decoder stage, another LSTM is fed with the sentence in the reverse order. The combination of both is used as the encoding of the given sentence. ", "$$\\overrightarrow{\\hat{h}_t}=f(\\overrightarrow{W}w_t + \\overrightarrow{V}\\overrightarrow{\\hat{h}_{t-1}} +\\overrightarrow{b})$$ (Eq. 13) ", "$$\\overleftarrow{\\hat{h}_t}=f(\\overleftarrow{W}w_t + \\overleftarrow{V}\\overleftarrow{\\hat{h}_{t+1}} +\\overleftarrow{b})$$ (Eq. 14) ", "The hidden state $\\hat{h_t}$ of the sentence encoder is used as the intermediate representation of the source sentence at time step $t$ whereas $W, V, U \\in R^{n\\times m}$ are weights, where m is the word embedding dimensionality, n is the number of hidden units, and $w_t \\in R^{p\\times q \\times r} $ is the weight vector corresponding to feature encoded input at time step $t$ .", "Attention Mechanism: In the commonly used sequence to sequence model ( BIBREF2 ), the decoder is directly initialized with intermediate source representation ( $\\hat{h_t}$ ). Whereas the attention mechanism proposed in BIBREF11 suggests using a subset of source hidden states, giving more emphasis to a, possibly, more relevant part of the context in the source sentence while predicting a new word in the target sequence. In our method we specifically use the global attention mechanism. In this mechanism a context vector $c_t$ is generated by capturing relevant source side information for predicting the current target word $y_t$ in the decoding phase at time $t$ . 
Relevance between the current decoder hidden state $h_t$ and each of the source hidden states ( $\hat{h_1},\hat{h_2}...\hat{h_{N}}$ ) is realized through a dot similarity metric: $score(h_t,\hat{h_i}) = h_t^{T}\cdot \hat{h_i}$ .", "A softmax layer (Eq. 16) is applied over these scores to get the variable-length alignment vector $\alpha _t$ , which in turn is used to compute the weighted sum over all the source hidden states ( $\hat{h_1},\hat{h_2}, \ldots , \hat{h_N}$ ) to generate the context vector $c_t$ at time $t$ . ", "$$\alpha _t(i) &= align(h_t,\hat{h_i}) = \frac{\exp (score(h_t,\hat{h_i}))}{\sum \limits _{i^{\prime }} \exp (score(h_t,\hat{h_{i^{\prime }}}))}\\ c_t &= \sum \limits _{i} \alpha _{ti} \hat{h_i}$$ (Eq. 16) ", "The question decoder is a two-layer LSTM network. It takes the output of the sentence encoder and decodes it to generate the question. The question decoder is designed to maximize our objective in equation 3 . More formally, the decoder computes the probability $P(Q|S;\theta )$ as: ", "$$P(Q|S;\theta )=softmax(W_s(tanh(W_r[h_t,c_t]+b)))$$ (Eq. 18) ", "where $W_s$ and $W_r$ are weight vectors and tanh is the activation function. The hidden state of the decoder along with the context vector $c_t$ is used to predict the target word $y_t$ . The decoder may output words which are not even present in the source sentence, as it learns a probability distribution over the words in the vocabulary. To generate questions relevant to the text, we suitably modified the decoder and integrated an attention mechanism (described in Section \"Sequence to Sequence Model\" ) with the decoder to attend to words in the source sentence while generating questions. This modification to the decoder increases the relevance of the question generated for a particular sentence." ], [ "We propose using a set of linguistic features so that the model can learn better generalized transformation rules, rather than learning a transformation rule per sentence. We describe our features below:", "POS Tag: the part-of-speech tag of the word. Words having the same POS tag have similar grammatical properties and demonstrate similar syntactic behavior. We use the Stanford CoreNLP -pos annotator to get the POS tag of each word in the sentence.", "Named Entity Tag: the named entity tag represents the coarse-grained category of a word, for example PERSON, PLACE, ORGANIZATION, DATE, etc. In order to help the model identify named entities present in the sentence, the named entity tag of each word is provided as a feature. This ensures that the model learns to pose a question about the entities present in the sentence. We use the Stanford CoreNLP -ner annotator to assign a named entity tag to each word.", "Dependency Label: the dependency label of a word is the edge label connecting the word with its parent in the dependency parse tree. The root node of the tree is assigned the label `ROOT'. Dependency labels help models learn inter-word relations and help in understanding the semantic structure of the sentence while generating the question. The dependency structure also helps in learning syntactic transformations between sentence and question pairs. Verbs and adverbs present in the sentence signify the type of question (which, who, etc.) that would be posed for the subject it refers to. We use dependency parse trees generated using the Stanford CoreNLP parser to obtain the dependency labels.", "Linguistic features are added by the conventional feature concatenation of tokens using the delimiter ` $|$ '.
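As a rough illustration of the feature concatenation just described, the sketch below tags each token with its POS, named entity and dependency labels using the "|" delimiter. We use spaCy here purely for convenience; the paper uses Stanford CoreNLP, so the exact tag sets differ, and this is not the authors' pipeline.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # stand-in for the Stanford CoreNLP annotators

def feature_tag(sentence):
    """Return the sentence with word|POS|NER|dependency tokens."""
    doc = nlp(sentence)
    return " ".join(
        f"{tok.text}|{tok.pos_}|{tok.ent_type_ or 'O'}|{tok.dep_}"
        for tok in doc
    )

# feature_tag("oxygen was discovered by carl wilhelm scheele")
# -> "oxygen|NOUN|O|nsubjpass was|AUX|O|auxpass discovered|VERB|O|ROOT ..."
#    (exact tags depend on the parser and model used)
```

The tagged string is what the encoder consumes; words and features are then looked up in the separate word and feature vocabularies described next.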
We create separate vocabularies for words (encoded using glove's pre-trained word embedding) and features (using one-hot encoding) respectively." ], [ "We implement our answer selection and question generation models in Torch. The sentence encoder of QG is a 3 layer bi-directional LSTM stack and the question decoder is a 3 layer LSTM stack. Each LSTM has a hidden unit of size 600 units. we use pre-trained glove embeddings BIBREF15 of 300 dimensions for both the encoder and the decoder. All model parameters are optimized using Adam optimizer with a learning rate of 1.0 and we decay the learning rate by 0.5 after 10th epoch of training. The dropout probability is set to 0.3. We train our model in each experiment for 30 epochs, we select the model with the lowest perplexity on validation set.", "The linguistic features for each word such as POS, named entity tag etc., are incorporated along with word embeddings through concatenation." ], [ "We evaluate performance of our models on the SQUAD BIBREF16 dataset (denoted $\\mathcal {S}$ ). We use the same split as that of BIBREF4 , where a random subset of 70,484 instances from $\\mathcal {S}\\ $ are used for training ( ${\\mathcal {S}}^{tr}$ ), 10,570 instances for validation ( ${\\mathcal {S}}^{val}$ ), and 11,877 instances for testing ( ${\\mathcal {S}}^{te}$ ).", "We performed both human-based evaluation as well as automatic evaluation to assess the quality of the questions generated. For automatic evaluation, we report results using a metric widely used to evaluate machine translation systems, called BLEU BIBREF17 .", "We first list the different systems (models) that we evaluate and compare in our experiments. A note about abbreviations: Whereas components in blue are different alternatives for encoding the pivotal answer, the brown color coded component represents the set of linguistic features that can be optionally added to any model.", "Baseline System (QG): Our baseline system is a sequence-to-sequence LSTM model (see Section \"Question Generation\" ) trained only on raw sentence-question pairs without using features or answer encoding. This model is the same as BIBREF4 .", "System with feature tagged input (QG+F): We encoded linguistic features (see Section \"Linguistic Features \" ) for each sentence-question pair to augment the basic QG model. This was achieved by appending features to each word using the “ $|$ ” delimiter. This model helps us analyze the isolated effect of incorporating syntactic and semantic properties of the sentence (and words in the sentence) on the outcome of question generation.", "Features + NE encoding (QG+F+NE): We also augmented the feature-enriched sequence-to-sequence QG+F model by encoding each named entity predicted by the named entity selection module (see section \"Named Entity Selection\" ) as a pivotal answer. This model helps us analyze the effect of (indiscriminate) use of named entity as potential (pivotal) answer, when used in conjunction with features.", "Ground truth answer encoding (QG+GAE): In this setting we use the encoding of ground truth answers from sentences to augment the training of the basic QG model (see Section \"Named Entity Selection\" ). For encoding answers into the sentence we employ the BIO notation. We append “B” as a feature using the delimiter “ $|$ ” to the first word of the answer and “I” as a feature for the rest of the answer words. 
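Below is a minimal sketch of the BIO answer encoding just described. It is our own illustrative code, with non-answer tokens given an "O" tag following the usual BIO convention.

```python
def encode_answer(tokens, ans_start, ans_end):
    """Append B to the first answer token, I to the rest of the answer span,
    and O to every other token, using the "|" delimiter.
    ans_start and ans_end are inclusive token indices of the pivotal answer."""
    tagged = []
    for i, tok in enumerate(tokens):
        if i == ans_start:
            tagged.append(tok + "|B")
        elif ans_start < i <= ans_end:
            tagged.append(tok + "|I")
        else:
            tagged.append(tok + "|O")
    return " ".join(tagged)

# encode_answer("it was founded in 1986 through the donations of joan b. kroc".split(), 9, 11)
# -> "it|O was|O founded|O in|O 1986|O through|O the|O donations|O of|O joan|B b.|I kroc|I"
```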
We used this model to analyze the effect of answer encoding on question generation, independent of features and named entity alignment.", "We would like to point out that any direct comparison of a generated question with the question in the ground truth using any machine translation-like metric (such as the BLEU metric discussed in Section \"Results and Analysis \" ) makes sense only when both the questions are associated with the same pivotal answer. This specific experimental setup and the ones that follow are therefore more amenable for evaluation using standard metrics used in machine translation.", "Features + sequence pointer network predicted answer encoding (QG+F+AES): In this setting, we encoded the pivotal answer in the sentence as predicted by the sequence pointer network (see Section \"Implementation Details\" ) to augment the linguistic feature based QG+F model. In this and in the following setting, we expect the prediction of the pivotal answer in the sentence to closely approximate the ground truth answer.", "Features + boundary pointer network predicted answer encoding (QG+F+AEB): In this setting, we encoded the pivotal answer in the sentence as predicted by the boundary pointer network (see Section \"Implementation Details\" ) to augment the linguistic feature based QG+F model.", "Features + ground truth answer encoding (QG+F+GAE): In this experimental setup, building upon the previous model (QG+F), we encoded ground truth answers to augment the QG model." ], [ "We compare the performance of the 7 systems QG, QG+F, QG+F+NE, QG+GAE, QG+F+AES, QG+F+AEB and QG+F+GAE described in the previous sections on (the train-val-test splits of) ${\\mathcal {S}}$ and report results using both human and automated evaluation metrics. We first describe experimental results using human evaluation followed by evaluation on other metrics.", "Human Evaluation: We randomly selected 100 sentences from the test set ( ${\\mathcal {S}}^{te}$ ) and generated one question using each of the 7 systems for each of these 100 sentences and asked three human experts for feedback on the quality of questions generated. Our human evaluators are professional English language experts. They were asked to provide feedback about a randomly sampled sentence along with the corresponding questions from each competing system, presented in an anonymised random order. This was to avoid creating any bias in the evaluator towards any particular system. They were not at all primed about the different models and the hypothesis.", "We asked the following binary (yes/no) questions to each of the experts: a) is this question syntactically correct?, b) is this question semantically correct?, and c) is this question relevant to this sentence?. Responses from all three experts were collected and averaged. For example, suppose the cumulative scores of the 100 binary judgements for syntactic correctness by the 3 evaluators were $(80, 79, 73)$ . Then the average response would be 77.33. In Table 1 we present these results on the test set ${\\mathcal {S}}^{te}$ .", "Evaluation on other metrics: We also evaluated our system on other standard metrics to enable comparison with other systems. However, as explained earlier, the standard metrics used in machine translation such as BLEU BIBREF17 , METEOR BIBREF18 , and ROUGE-L BIBREF19 , might not be appropriate measures to evaluate the task of question generation. 
To appreciate this, consider the candidate question “who was the widow of mcdonald 's owner ?” against the ground truth “to whom was john b. kroc married ?” for the sentence “it was founded in 1986 through the donations of joan b. kroc , the widow of mcdonald 's owner ray kroc.”. It is easy to see that the candidate is a valid question and makes perfect sense. However, its BLEU-4 score is almost zero. Thus, the human-generated question against which we evaluate a system-generated question may be completely different in structure and semantics, yet still be perfectly valid, as seen previously. While we find human evaluation to be more appropriate, for the sake of completeness, we also report the BLEU, METEOR and ROUGE-L scores in each setting. In Table 2 , we observe that our models, QG+F+AEB, QG+F+AES and QG+F+GAE, outperform the state-of-the-art question generation system QG BIBREF4 significantly on all standard metrics.", "Our model QG+F+GAE, which encodes ground truth answers and uses a rich set of linguistic features, performs the best as per every metric. And in Table 1 , we observe that adding the rich set of linguistic features to the baseline model (QG) further improves performance. Specifically, the addition of features increases the syntactic correctness of questions by 2%, semantic correctness by 9% and relevance of questions with respect to the sentence by 12.3% in comparison with the baseline model QG BIBREF4 .", "In Figure 3 we present some sample answers predicted and the corresponding questions generated by our model QG+F+AEB. Though not better, the performance of models QG+F+AES and QG+F+AEB is comparable to the best model (that is, QG+F+GAE, which additionally uses ground truth answers). This is because the ground truth answer might not be the best and most relevant pivotal answer for question generation, particularly since each question in the SQUAD dataset was generated by looking at an entire paragraph and not any single sentence. Consider the sentence “manhattan was on track to have an estimated 90,000 hotel rooms at the end of 2014 , a 10 % increase from 2013 .”. On encoding the ground truth answer, “90,000”, the question generated using model QG+GAE is “what was manhattan estimated hotel rooms in 2014 ?” and, additionally, with linguistic features (QG+F+GAE), we get “how many hotel rooms did manhattan have at the end of 2014 ?”. This is indicative of how a rich set of linguistic features helps in shaping the correct question type as well as generating syntactically and semantically correct questions. Further, when we do not encode any answer (either the pivotal answer predicted by the sequence/boundary pointer network or the ground truth answer) and just augment the linguistic features (QG+F), the question generated is “what was manhattan 's hotel increase in 2013 ?”, which is clearly a poor-quality question. Thus, both answer encoding and augmenting a rich set of linguistic features are important for generating high-quality (syntactically correct, semantically correct and relevant) questions. When we select the pivotal answer from amongst the set of named entities present in the sentence (i.e., model QG+F+NE), the question generated on encoding the named entity “manhattan” is “what was the 10 of hotel 's city rooms ?”, which is clearly a poor-quality question. The poor performance of QG+F+NE can be attributed to the fact that only 50% of the answers in the SQUAD dataset are named entities." ], [ "We introduce a novel two-stage process to generate question-answer pairs from text.
We combine and enhance a number of techniques including sequence to sequence models, Pointer Networks, named entity alignment, as well as rich linguistic features to identify potential answers from text, handle rare words, and generate questions most relevant to the answer. To the best of our knowledge this is the first attempt in generating question-answer pairs. Our comprehensive evaluation shows that our approach significantly outperforms current state-of-the-art question generation techniques on both human evaluation and evaluation on common metrics such as BLEU, METEOR, and ROUGE-L." ] ] }
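To illustrate the BLEU limitation discussed in the results section above, here is a small, self-contained check (not from the paper's evaluation scripts) that scores the quoted candidate question against the quoted ground-truth question using NLTK's sentence-level BLEU.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["to whom was john b. kroc married ?".split()]
candidate = "who was the widow of mcdonald 's owner ?".split()

# Default weights give BLEU-4; smoothing avoids a hard zero when no 4-grams match.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 4))  # near zero, even though the candidate is a valid question
```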
{ "question": [ "Which datasets are used to train this model?" ], "question_id": [ "2b78052314cb730824836ea69bc968df7964b4e4" ], "nlp_background": [ "five" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "question" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "SQUAD" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We evaluate performance of our models on the SQUAD BIBREF16 dataset (denoted $\\mathcal {S}$ ). We use the same split as that of BIBREF4 , where a random subset of 70,484 instances from $\\mathcal {S}\\ $ are used for training ( ${\\mathcal {S}}^{tr}$ ), 10,570 instances for validation ( ${\\mathcal {S}}^{val}$ ), and 11,877 instances for testing ( ${\\mathcal {S}}^{te}$ )." ], "highlighted_evidence": [ "We evaluate performance of our models on the SQUAD BIBREF16 dataset (denoted $\\mathcal {S}$ ). We use the same split as that of BIBREF4 , where a random subset of 70,484 instances from $\\mathcal {S}\\ $ are used for training ( ${\\mathcal {S}}^{tr}$ ), 10,570 instances for validation ( ${\\mathcal {S}}^{val}$ ), and 11,877 instances for testing ( ${\\mathcal {S}}^{te}$ )." ] } ], "annotation_id": [ "0a1745dd21b3ae4d8aaa262b2d318481ca0bba5a" ], "worker_id": [ "efdb8f7f2fe9c47e34dfe1fb7c491d0638ec2d86" ] } ] }
{ "caption": [ "Table 1: Human evaluation results on Ste. Parameters are, p1: percentage of syntactically correct questions, p2: percentage of semantically correct questions, p3: percentage of relevant questions.", "Table 2: Automatic evaluation results on Ste. BLEU, METEOR and ROUGE-L scores vary between 0 and 100, with the upper bound of 100 attainable on the ground truth. QG[7]:Result obtained using latest version of Torch." ], "file": [ "10-Table1-1.png", "10-Table2-1.png" ] }
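The results above credit a large part of the improvement to a rich set of linguistic features. As an illustration only (the exact feature set and its encoding are defined in the paper itself), the following hedged sketch, assuming spaCy with the en_core_web_sm model is available, extracts token-level POS and entity-type features of the kind that could be embedded and concatenated with word vectors:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed to be installed; any tagger/NER would do

def token_features(sentence: str):
    """Return per-token linguistic features (coarse POS, fine-grained tag, entity type).

    Each categorical feature would typically be mapped to a small trainable embedding
    and concatenated with the word embedding of the corresponding token.
    """
    doc = nlp(sentence)
    return [
        {
            "token": tok.text,
            "pos": tok.pos_,                    # coarse POS, e.g. PROPN, VERB
            "tag": tok.tag_,                    # fine-grained tag, e.g. NNP, VBD
            "ent_type": tok.ent_type_ or "O",   # named entity label, or "O" if none
        }
        for tok in doc
    ]

if __name__ == "__main__":
    sent = ("It was founded in 1986 through the donations of Joan B. Kroc, "
            "the widow of McDonald's owner Ray Kroc.")
    for feats in token_features(sent):
        print(feats)
```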
1910.11949
Automatic Reminiscence Therapy for Dementia.
With people living longer than ever, the number of dementia cases, such as Alzheimer's disease, increases steadily. Dementia affects more than 46 million people worldwide, and it is estimated that in 2050 more than 100 million will be affected. While there are no effective treatments for these terminal diseases, therapies such as reminiscence, which stimulate memories from the past, are recommended. Currently, reminiscence therapy takes place in care homes and is guided by a therapist or a carer. In this work, we present an AI-based solution to automate reminiscence therapy, which consists of a dialogue system that uses photos as input to generate questions. We run a usability case study with patients diagnosed with mild cognitive impairment, which shows that they found the system very entertaining and challenging. Overall, this paper presents how reminiscence therapy can be automated using machine learning and deployed to smartphones and laptops, making the therapy more accessible to every person affected by dementia.
{ "section_name": [ "Introduction", "Related Work", "Methodology", "Methodology ::: VQG model", "Methodology ::: Chatbot network", "Datasets", "Datasets ::: MS-COCO, Bing and Flickr datasets", "Datasets ::: Persona-chat and Cornell-movie corpus", "Validation", "Validation ::: Implementation", "Validation ::: Quantitative evaluation", "Validation ::: Qualitative results", "Usability study", "Usability study ::: User interface", "Feedback from patients", "Conclusions", "Acknowledgements" ], "paragraphs": [ [ "Increases in life expectancy in the last century have resulted in a large number of people living to old age, and will result in a doubling of dementia cases by the middle of the century BIBREF0, BIBREF1. The most common form of dementia is Alzheimer's disease, which accounts for 60–70% of cases BIBREF2. Research focused on identifying treatments to slow down the evolution of Alzheimer's disease is a very active pursuit, but it has only been successful in developing therapies that ease the symptoms without addressing the cause BIBREF3, BIBREF4. Besides, people with dementia may face barriers to accessing these therapies, such as cost, availability, and travel to the care home or hospital where the therapy takes place. We believe that Artificial Intelligence (AI) can contribute innovative systems that improve accessibility and offer new solutions to patients' needs, as well as help relatives and caregivers understand the illness of their family member or patient and monitor the progress of the dementia.", "Therapies such as reminiscence, which stimulate memories of the patient's past, have well-documented benefits on social, mental and emotional well-being BIBREF5, BIBREF6, making them a very desirable practice, especially for older adults. Reminiscence therapy in particular involves the discussion of events and past experiences using tangible prompts such as pictures or music to evoke memories and stimulate conversation BIBREF7. With this aim, we explore multi-modal deep learning architectures to develop an intuitive, easy-to-use, and robust dialogue system that automates reminiscence therapy for people affected by mild cognitive impairment or at early stages of Alzheimer's disease.", "We propose a conversational agent that simulates a reminiscence therapist by asking questions about the patient's experiences. Questions are generated from pictures provided by the patient, which contain significant moments or important people in the user's life. Moreover, to engage the user in the conversation, we propose a second model that generates comments on the user's answers: a chatbot model trained on a dataset of simple conversations between different people. The activity is intended to be challenging for the patient, as the questions may require the user to exercise their memory. Our contributions include:", "Automation of reminiscence therapy using a multi-modal approach that generates questions from pictures, without using a reminiscence therapy dataset.", "An end-to-end deep learning approach that does not require hand-crafted rules and is ready to be used by mild cognitive impairment patients. The system is designed to be intuitive and easy to use, and can be accessed from any smartphone with an internet connection." ], [ "The origin of chatbots goes back to 1966 with the creation of ELIZA BIBREF8 by Joseph Weizenbaum at MIT. Its implementation consisted of pattern matching and a substitution methodology. 
Recently, data-driven approaches have drawn significant attention. Existing work along this line includes retrieval-based methods BIBREF9, BIBREF10 and generation-based methods BIBREF11, BIBREF12. In this work we focus on generative models, where the sequence-to-sequence algorithm, which uses RNNs to encode and decode inputs into responses, is the current best practice.", "Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, a task also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by the Neural Conversational Model from BIBREF14, where the authors present an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and the lack of long-term memory, which produce meaningless responses in these models, are still unresolved problems.", "Some works have proposed conversational agents for older adults with a variety of uses, such as stimulating conversation BIBREF17, palliative care BIBREF18 or daily assistance. Examples are ‘Billie’, reported in BIBREF19, a virtual agent that uses facial expressions for a more natural behavior and focuses on managing the user's calendar, and ‘Mary’ BIBREF20, which assists users by organizing their tasks, offering reminders and guidance with household activities. Both works perform well on their specific tasks, but report difficulties in maintaining a casual conversation. Other works focus on the content used in reminiscence therapy, such as BIBREF21, where the authors propose a system that recommends multimedia content to be used in therapy, or Visual Dialog BIBREF22, where the conversational agent is the one that has to answer questions about the image." ], [ "In this section we explain the two main components of our model, as well as how the interaction with the model works. We named it Elisabot, and its goal is to maintain a dialogue with the patient about the user's life experiences.", "Before starting the conversation, the user must provide photos that contain significant moments for him/her. The system randomly chooses one of these pictures and analyses its content. Then, Elisabot shows the selected picture and starts the conversation by asking a question about the picture. The user should give an answer, even if they do not know it, and Elisabot makes a relevant comment on it. The cycle starts again by asking another relevant question about the image, and the flow is repeated 4 to 6 times until the picture is changed. Figure FIGREF3 summarizes the workflow of our system.", "Elisabot is composed of two models: the model in charge of asking questions about the image, which we will refer to as the VQG model, and the chatbot model, which tries to make the dialogue more engaging by giving feedback on the user's answers." ], [ "The algorithm behind VQG consists of an Encoder-Decoder architecture with attention. The Encoder takes as input one of the given photos $I$ from the user and learns its information using a CNN. CNNs have been widely studied for computer vision tasks. The CNN provides the image's learned features to the Decoder, which generates the question $y$ word by word using an attention mechanism with a Long Short-Term Memory (LSTM). 
The model is trained to maximize the likelihood $p(y|I)$ of producing a target sequence of words $y = \{y_1, \dots, y_C\},\ y_i \in \mathbb{R}^{K}$,", "where $K$ is the size of the vocabulary and $C$ is the length of the caption.", "Since there are already Convolutional Neural Networks (CNNs) trained on large datasets to represent images with outstanding performance, we make use of transfer learning to integrate a pre-trained model into our algorithm. In particular, we use a ResNet-101 BIBREF23 model trained on ImageNet. We discard the last 2 layers, since these layers classify the image into categories and we only need to extract its features." ], [ "The core of our chatbot model is a sequence-to-sequence model BIBREF24. This architecture uses a Recurrent Neural Network (RNN) to encode a variable-length sequence into a large fixed-dimensional vector representation and another RNN to decode the vector into a variable-length sequence.", "The encoder iterates through the input sentence one word at each time step, producing an output vector and a hidden state vector. The hidden state vector is passed to the next time step, while the output vector is stored. We use a bidirectional Gated Recurrent Unit (GRU), meaning we use two GRUs: one fed in sequential order and another fed in reverse order. The outputs of both networks are summed at each time step, so we encode past and future context.", "The final hidden state $h_t^{enc}$ is fed into the decoder as the initial state $h_0^{dec}$. By using an attention mechanism, the decoder uses the encoder's context vectors and its internal hidden states to generate the next word in the sequence. It continues generating words until it outputs an $<$end$>$ token, representing the end of the sentence. We use an attention layer that multiplies attention weights with the encoder's outputs to focus on the relevant information when decoding the sequence. This approach has shown better performance in sequence-to-sequence models BIBREF25." ], [ "One of the first requirements to develop an architecture using a machine learning approach is a training dataset. The lack of open-source datasets containing dialogues from reminiscence therapy led us to use datasets with content similar to that used in the therapy. In particular, we use two types of datasets to train our models: a dataset that maps pictures to questions, and an open-domain conversation dataset. The details of the two datasets are as follows." ], [ "We use the MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images asked with the purpose of knowing more about the picture. As can be seen in Figure FIGREF8, these questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding up to a total of 15,000 images with 75,000 questions. The COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. The Bing dataset contains more event-related questions and has a wider range of question lengths (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual." ], [ "We use two datasets to train our chatbot model. The first one is Persona-chat BIBREF15, which contains dialogues between two people with different profiles who are trying to get to know each other. 
It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier for machines to learn, and it has a total of 162,064 utterances over 10,907 dialogues, while the Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters." ], [ "An important aspect of dialogue response generation systems is how to evaluate the quality of the generated response. This section presents the training procedure and the quantitative evaluation of the model, together with some qualitative results." ], [ "Both models are trained using Stochastic Gradient Descent with ADAM optimization BIBREF28 and a learning rate of 1e-4. Besides, we use dropout regularization BIBREF29, which prevents over-fitting by dropping some units of the network.", "The VQG encoder is composed of 2048 neuron cells, while the VQG decoder has an attention layer of 512 followed by an embedding layer of 512 and an LSTM of the same size. We use a dropout of 50% and a beam search of 7 for decoding, which lets us obtain up to 5 output questions. The vocabulary we use consists of all words seen 3 or more times in the training set, which amounts to 11,214 unique tokens. Unknown words are mapped to an $<$unk$>$ token during training, but we do not allow the decoder to produce this token at test time. We also set a maximum sequence length of 6 words, as we want simple questions that are easy to understand and easy for the model to learn.", "In the chatbot model we use a hidden size of 500 and dropout regularization of 25%. For decoding we use greedy search, which consists of making the optimal token choice at each step. We first train it with Persona-chat and then fine-tune it with the Cornell dataset. The vocabulary we use consists of all words seen 3 or more times in the Persona-chat dataset, and we set a maximum sequence length of 12 words. For the hyperparameter setting, we use a batch size of 64." ], [ "We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similarity between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.", "Our chatbot model, instead, only has one ground truth reference when generating a sequence of words during training. We considered that BLEU was not a good metric to apply, as on some occasions responses have the same meaning but do not share any words in common. Thus, we save several models with different hyperparameters and at different numbers of training iterations and compare them using human evaluation, to choose the model that performs best in a conversation." ], [ "Our first goal was to generate meaningful questions from the provided pictures. Table TABREF16 contains some questions generated by our VQG model for each of the proposed images. These images were not in the training set, as they were taken from photo albums, so it was the first time the model had these images as input. By looking at the questions, we can easily see that the model is not only able to generate grammatically correct questions, but also to understand the content of the image and ask about it. 
Furthermore, we wanted to generate not just one question per image, but up to five questions for a deeper conversation.", "We used different kinds of pictures, as we wanted a robust model able to work with all kinds of images. In Table TABREF16 we can see both old grayscale and colorful photos, in both vertical and horizontal orientations. Most of the pictures we selected for testing show people, because these are the type of pictures used in reminiscence therapy, as they evoke memories of the relationships with those persons. We also tested pictures showing pets, cars and motorbikes, as some people are very attached to them, triggering strong emotions.", "Most of the generated questions could be asked by a person. Nonetheless, we found some questions, such as How did the car get damaged?, that do not correspond to the image's content, or that are not well formulated, like What body of dog is that?, but they can be easily understood.", "As for the chatbot module, we test it manually by having a conversation with it. In the following lines, we can see two samples of a conversation, where each sentence starting with "$>$" is an input to our seq2seq model. In the first sample, the chatbot model is clearly overfitted, as it repeats the same sentence twice, while in the second sample we can see that the model is able to follow the conversation.", "Overfitted chatbot model:", "$>$ hi", "Elisabot: hello ! how are you ?", "$>$ fine and you?", "Elisabot : i am good do you have any hobbies ?", "$>$ yes i like to ski", "Elisabot: i love to play sports . it is my favorite", "$>$ I have a sister", "Elisabot : i have a dog . i love them .", "$>$ I have a dog too", "Elisabot: i have a dog . i love them", "$>$ ok", "Elisabot : i am not really a good person", "$>$ why?", "Elisabot : i love to play sports . it is my favorite", "Final chatbot model:", "$>$ hello", "Elisabot: hello how are you ?", "$>$ fine and you?", "Elisabot : i m great !", "$>$ I have a sister, do you have any brother or sister?", "Elisabot : i have a iphone but i have no idea .", "$>$ ok", "Elisabot : i am a pro fan of thrones", "$>$ me too", "Elisabot : i like to eat healthy", "$>$ have you seen the last chapter?", "Elisabot : no i have not .", "$>$ where do you live?", "Elisabot : i live in the mountains" ], [ "As most of the metrics correlate very weakly with human judgements in the non-technical domain BIBREF31, we decided to evaluate our system with a simple user study with two patients. We present the user interface we built and the feedback obtained from the patients." ], [ "We developed a user interface for Elisabot with Telegram, an instant messaging application available for smartphones and computers. We selected it because it is easy to use and offers an API for developers to connect bots to the Telegram system. It enables the creation of special accounts for bots, which do not require a phone number to set up.", "Telegram is only the interface for the code running on the server. The bot is executed via HTTP requests to the API. Users can start a conversation with Elisabot by typing @TherapistElisabot in the search bar and executing the command /start, as can be seen in Figure FIGREF31. Messages, commands and requests sent by users are passed to the software running on the server. We add /change, /yes and /exit commands to enable more functionalities. 
The /change command gives the user the opportunity to change the image in case they do not want to talk about it, /yes accepts the image that is going to be talked about, and /exit finishes the dialogue with Elisabot. The commands can be executed either by tapping on the linked text or by typing them." ], [ "We designed a usability study where users with and without mild cognitive impairment interacted with the system with the help of a doctor and one of the authors. The purpose was to study the acceptability and feasibility of the system with patients with mild cognitive impairment. The users were all older than 60. The sessions lasted 30 minutes and were carried out using a laptop computer connected to Telegram. As Elisabot's language is English, we translated the questions for the users and their answers for Elisabot.", "Figure FIGREF38 is a sample of the session we carried out with mild cognitive impairment patients (institution and location anonymized). The picture provided by the patient (Figure FIGREF37) is blurred for the user's privacy. In this experiment, all the generated questions were appropriate to the image content, but the feedback was wrong for some of the answers. We can see that it was the last picture of the session: when Elisabot asks whether the user wants to continue or leave and he decides to continue, Elisabot finishes the session, as there are no more pictures remaining to talk about.", "At the end of the session, we administered a survey to ask participants the following questions about their assessment of Elisabot:", "Did you like it?", "Did you find it engaging?", "How difficult have you found it?", "Responses were given on a five-point scale ranging from strongly disagree (1) to strongly agree (5) and very easy (1) to very difficult (5). The results were 4.6 for amusing and engaging and 2.6 for difficulty. Healthy users found it very easy to use (1/5) and even a bit silly, because of some of the generated questions and comments. Nevertheless, users with mild cognitive impairment found it engaging (5/5) and challenging (4/5), because of the effort they had to make to remember the answers to some of the generated questions. All the users had in common that they enjoyed doing the therapy with Elisabot." ], [ "We presented a dialogue system for handling 30-minute sessions of reminiscence therapy. Elisabot, our conversational agent, leads the therapy by showing a picture and generating some questions. The goal of the system is to improve users' mood and stimulate their memory and communication skills. Two models were proposed to build the dialogue system for reminiscence therapy: a visual question generator composed of a CNN and an LSTM with attention, and a sequence-to-sequence model to generate feedback on the user's answers. We observed that fine-tuning our chatbot model with another dataset improved the generated dialogue.", "The manual evaluation shows that our model can generate grammatically well-formed questions and feedback, although on some occasions they are not appropriate in content. As expected, it has a tendency to produce non-specific answers and to lose consistency in its comments with respect to what it has said before. However, the overall usability evaluation of the system by users with mild cognitive impairment shows that they found the session very entertaining and challenging. They had to make an effort to remember the answers to some of the questions, but they were very satisfied when they achieved it. 
However, we see that for the therapy to work properly, it is essential that a person supports the user and helps them remember the experiences being asked about.", "This project has many possible future lines. In our future work, we suggest training the model with the addition of the Reddit dataset, which could improve the chatbot model, as it contains many open-domain conversations. Moreover, we would like to include speech recognition and generation, as well as real-time text translation, to make Elisabot more autonomous and open to older adults with reading and writing difficulties. Furthermore, the lack of consistency in the dialogue might be avoided by improving the architecture to include information about the past conversation in the model. We also think it would be a good idea to recognize feelings from the user's answers and give feedback according to them." ], [ "Marioan Caros was funded with a scholarship from the Fundacion Vodafona Spain. Petia Radeva was partially funded by TIN2018-095232-B-C21, 2017 SGR 1742, Nestore, Validithi, and CERCA Programme/Generalitat de Catalunya. We acknowledge the support of NVIDIA Corporation with the donation of Titan Xp GPUs." ] ] }
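As a concrete illustration of the VQG architecture described above (a ResNet-101 encoder with its last two layers discarded, and an LSTM decoder with soft attention over image regions), here is a minimal, hypothetical PyTorch sketch. The 2048/512 dimensions, the vocabulary size of 11,214 and the maximum question length of 6 follow the implementation details, but the exact attention formulation and everything else are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ImageEncoder(nn.Module):
    """ResNet-101 with its last two layers (average pooling and classifier) discarded."""
    def __init__(self):
        super().__init__()
        resnet = models.resnet101()  # load ImageNet weights here in practice
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])

    def forward(self, images):                        # images: (B, 3, H, W)
        feats = self.backbone(images)                  # (B, 2048, h, w)
        b, c, h, w = feats.shape
        return feats.view(b, c, h * w).permute(0, 2, 1)  # (B, h*w, 2048) image regions

class AttentionDecoder(nn.Module):
    """LSTM decoder with soft attention over image regions, generating the question word by word."""
    def __init__(self, vocab_size, emb_dim=512, hid_dim=512, feat_dim=2048, attn_dim=512):
        super().__init__()
        self.hid_dim = hid_dim
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn_feat = nn.Linear(feat_dim, attn_dim)
        self.attn_hid = nn.Linear(hid_dim, attn_dim)
        self.attn_score = nn.Linear(attn_dim, 1)
        self.lstm = nn.LSTMCell(emb_dim + feat_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, regions, tokens):               # regions: (B, R, feat_dim), tokens: (B, T)
        b, t_max = tokens.shape
        h = regions.new_zeros(b, self.hid_dim)
        c = regions.new_zeros(b, self.hid_dim)
        emb = self.embed(tokens)                       # (B, T, emb_dim)
        logits = []
        for t in range(t_max):
            # soft attention: score each region against the previous hidden state
            scores = self.attn_score(torch.tanh(
                self.attn_feat(regions) + self.attn_hid(h).unsqueeze(1)))  # (B, R, 1)
            alpha = torch.softmax(scores, dim=1)
            context = (alpha * regions).sum(dim=1)     # (B, feat_dim) attended image context
            h, c = self.lstm(torch.cat([emb[:, t], context], dim=1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)              # (B, T, vocab_size)

if __name__ == "__main__":
    encoder, decoder = ImageEncoder(), AttentionDecoder(vocab_size=11214)
    regions = encoder(torch.randn(2, 3, 224, 224))
    tokens = torch.randint(0, 11214, (2, 6))           # max question length of 6 words
    print(decoder(regions, tokens).shape)              # torch.Size([2, 6, 11214])
```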
{ "question": [ "How is performance of this system measured?", "How many questions per image on average are available in dataset?", "Is machine learning system underneath similar to image caption ML systems?", "How big dataset is used for training this system?" ], "question_id": [ "11d2f0d913d6e5f5695f8febe2b03c6c125b667c", "1c85a25ec9d0c4f6622539f48346e23ff666cd5f", "37d829cd42db9ae3d56ab30953a7cf9eda050841", "4b41f399b193d259fd6e24f3c6e95dc5cae926dd" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "using the BLEU score as a quantitative metric and human evaluation for quality", "evidence": [ "We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.", "Our chatbot model instead, only have one reference ground truth in training when generating a sequence of words. We considered that it was not a good metric to apply as in some occasions responses have the same meaning, but do not share any words in common. Thus, we save several models with different hyperparameters and at different number of training iterations and compare them using human evaluation, to chose the model that performs better in a conversation." ], "highlighted_evidence": [ "We use the BLEU BIBREF30 metric on the validation set for the VQG model training. BLEU is a measure of similitude between generated and target sequences of words, widely used in natural language processing. It assumes that valid generated responses have significant word overlap with the ground truth responses. We use it because in this case we have five different references for each of the generated questions. We obtain a BLEU score of 2.07.\n\nOur chatbot model instead, only have one reference ground truth in training when generating a sequence of words. We considered that it was not a good metric to apply as in some occasions responses have the same meaning, but do not share any words in common. Thus, we save several models with different hyperparameters and at different number of training iterations and compare them using human evaluation, to chose the model that performs better in a conversation." ] } ], "annotation_id": [ "395868f357819b6de3a616992a33977f125f92d9" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "5 questions per image" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. 
Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual." ], "highlighted_evidence": [ "We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions." ] } ], "annotation_id": [ "0a2bc42cf256a183dae47c2a043832d669e89831" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning. In our case, we focus on generating questions from pictures. Our second architecture is inspired by Neural Conversational Model from BIBREF14 where the author presents an end-to-end approach to generate simple conversations. Building an open-domain conversational agent is a challenging problem. As addressed in BIBREF15 and BIBREF16, the lack of a consistent personality and lack of long-term memory which produces some meaningless responses in these models are still unresolved problems." ], "highlighted_evidence": [ "Our conversational agent uses two architectures to simulate a specialized reminiscence therapist. The block in charge of generating questions is based on the work Show, Attend and Tell BIBREF13. This work generates descriptions from pictures, also known as image captioning." ] } ], "annotation_id": [ "eda46fe815453f31e8ee4092686f9581bb42d7d0" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "For the question generation model 15,000 images with 75,000 questions. For the chatbot model, around 460k utterances over 230k dialogues.", "evidence": [ "We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. Bing dataset contains more event related questions and has a wider range of questions longitudes (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual.", "We use two datasets to train our chatbot model. 
The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters." ], "highlighted_evidence": [ "We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions.", "We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters." ] } ], "annotation_id": [ "a488e4b08f2b52306f8f0add5978e19db2db5b4f" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
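The chatbot encoder described above (two GRUs run in opposite directions whose outputs are summed at each time step) can be sketched as follows. This is an illustrative, hypothetical PyTorch snippet: the hidden size of 500 and the maximum length of 12 come from the implementation details, while the vocabulary size and embedding size are assumptions.

```python
import torch
import torch.nn as nn

class ChatbotEncoder(nn.Module):
    """Bidirectional GRU encoder; forward and backward outputs are summed per time step."""
    def __init__(self, vocab_size, emb_dim=500, hid_dim=500):
        super().__init__()
        self.hid_dim = hid_dim
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, tokens):                         # tokens: (B, T) word indices
        emb = self.embed(tokens)                        # (B, T, emb_dim)
        outputs, hidden = self.gru(emb)                 # outputs: (B, T, 2*hid_dim)
        # Sum the two directions so past and future context share one vector per step.
        outputs = outputs[:, :, :self.hid_dim] + outputs[:, :, self.hid_dim:]
        return outputs, hidden                          # outputs feed the attention layer

if __name__ == "__main__":
    enc = ChatbotEncoder(vocab_size=8000)
    out, h = enc(torch.randint(0, 8000, (2, 12)))       # max sequence length of 12 words
    print(out.shape, h.shape)                           # (2, 12, 500) (2, 2, 500)
```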
{ "caption": [ "Figure 1: Scheme of the interaction with Elisabot", "Figure 2: Samples from Bing 2a), Coco 2b) and Flickr 2c) datasets", "Table 1: Generated questions", "Figure 3: Elisabot running on Telegram application", "Figure 5: Sample of the session study with mild cognitive impairment patient" ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "6-Table1-1.png", "7-Figure3-1.png", "8-Figure5-1.png" ] }
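The command-based Telegram interaction described in the usability-study section could be wired up roughly as follows. This is a hypothetical sketch assuming the python-telegram-bot library (v13-style API), with a placeholder token and canned replies standing in for the actual VQG and chatbot models.

```python
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters

WELCOME = ("Hi! I am Elisabot. Use /yes to accept the shown picture, "
           "/change for another one, or /exit to finish the session.")

def start(update, context):
    update.message.reply_text(WELCOME)

def change(update, context):
    # The server-side code would pick another user photo and send a new question here.
    update.message.reply_text("Okay, let's talk about a different picture.")

def handle_answer(update, context):
    # The user's answer would be passed to the seq2seq feedback model and the VQG model.
    update.message.reply_text(f"Thanks for sharing! (you said: {update.message.text})")

def main():
    updater = Updater(token="TELEGRAM_BOT_TOKEN")  # placeholder token, not a real one
    dp = updater.dispatcher
    dp.add_handler(CommandHandler("start", start))
    dp.add_handler(CommandHandler("change", change))
    dp.add_handler(CommandHandler("yes", lambda u, c: u.message.reply_text("Great, let's talk about this picture.")))
    dp.add_handler(CommandHandler("exit", lambda u, c: u.message.reply_text("Thank you for the session, see you soon!")))
    dp.add_handler(MessageHandler(Filters.text & ~Filters.command, handle_answer))
    updater.start_polling()
    updater.idle()

if __name__ == "__main__":
    main()
```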
1902.09087
Lattice CNNs for Matching Based Chinese Question Answering
Short text matching often faces the challenge of great word mismatch and expression diversity between the two texts, which is further aggravated in languages like Chinese, where there are no natural spaces to segment words explicitly. In this paper, we propose a novel lattice based CNN model (LCNs) to utilize the multi-granularity information inherent in the word lattice while maintaining a strong ability to deal with the introduced noisy information, for matching based question answering in Chinese. We conduct extensive experiments on both document based question answering and knowledge based question answering tasks, and experimental results show that the LCNs models can significantly outperform the state-of-the-art matching models and strong baselines by taking advantage of a better ability to distill rich but discriminative information from the word lattice input.
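To make the word-lattice idea concrete before the detailed model description below, here is a small, hypothetical Python sketch of how a lattice could be built from a Chinese character sequence and a lookup vocabulary: every in-vocabulary substring (single characters included) becomes a node, and directed edges connect adjacent spans. The toy vocabulary and construction details are illustrative assumptions, not the paper's exact procedure.

```python
def build_word_lattice(chars, vocab):
    """Build a word lattice over a character sequence.

    Nodes are (start, end) spans whose surface string is in `vocab`;
    a directed edge links a span ending at position j to any span starting at j.
    """
    n = len(chars)
    # Collect every in-vocabulary substring as a node, single characters included.
    nodes = [(i, j) for i in range(n) for j in range(i + 1, n + 1)
             if "".join(chars[i:j]) in vocab]
    # Connect adjacent spans: the next word starts where the previous one ends.
    edges = [((i, j), (p, k)) for (i, j) in nodes for (p, k) in nodes if p == j]
    return nodes, edges

if __name__ == "__main__":
    # Toy vocabulary standing in for the BaiduBaike-based lookup table.
    vocab = {"中", "国", "人", "民", "中国", "国人", "中国人", "人民"}
    chars = list("中国人民")  # "ZhongGuoRenMin" (Chinese people)
    nodes, edges = build_word_lattice(chars, vocab)
    print([("".join(chars[i:j]), (i, j)) for i, j in nodes])
    print(edges)
```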
{ "section_name": [ "Introduction", "Lattice CNNs", "Siamese Architecture", "Word Lattice", "Lattice based CNN Layer", "Experiments", "Datasets", "Evaluation Metrics", "Implementation Details", "Baselines", "Results", "Analysis and Discussions", "Case Study", "Related Work", "Conclusions", "Acknowledgments" ], "paragraphs": [ [ "Short text matching plays a critical role in many natural language processing tasks, such as question answering, information retrieval, and so on. However, matching text sequences for Chinese or similar languages often suffers from word segmentation, where there are often no perfect Chinese word segmentation tools that suit every scenario. Text matching usually requires to capture the relatedness between two sequences in multiple granularities. For example, in Figure FIGREF4 , the example phrase is generally tokenized as “China – citizen – life – quality – high”, but when we plan to match it with “Chinese – live – well”, it would be more helpful to have the example segmented into “Chinese – livelihood – live” than its common segmentation. ", "Existing efforts use neural network models to improve the matching based on the fact that distributed representations can generalize discrete word features in traditional bag-of-words methods. And there are also works fusing word level and character level information, which, to some extent, could relieve the mismatch between different segmentations, but these solutions still suffer from the original word sequential structures. They usually depend on an existing word tokenization, which has to make segmentation choices at one time, e.g., “ZhongGuo”(China) and “ZhongGuoRen”(Chinese) when processing “ZhongGuoRenMin”(Chinese people). And the blending just conducts at one position in their frameworks.", "Specific tasks such as question answering (QA) could pose further challenges for short text matching. In document based question answering (DBQA), the matching degree is expected to reflect how likely a sentence can answer a given question, where questions and candidate answer sentences usually come from different sources, and may exhibit significantly different styles or syntactic structures, e.g. queries in web search and sentences in web pages. This could further aggravate the mismatch problems. In knowledge based question answering (KBQA), one of the key tasks is to match relational expressions in questions with knowledge base (KB) predicate phrases, such as “ZhuCeDi”(place of incorporation). Here the diversity between the two kinds of expressions is even more significant, where there may be dozens of different verbal expressions in natural language questions corresponding to only one KB predicate phrase. Those expression problems make KBQA a further tough task. Previous works BIBREF0 , BIBREF1 adopt letter-trigrams for the diverse expressions, which is similar to character level of Chinese. And the lattices are combinations of words and characters, so with lattices, we can utilize words information at the same time.", "Recent advances have put efforts in modeling multi-granularity information for matching. BIBREF2 , BIBREF3 blend words and characters to a simple sequence (in word level), and BIBREF4 utilize multiple convoluational kernel sizes to capture different n-grams. But most characters in Chinese can be seen as words on their own, so combining characters with corresponding words directly may lose the meanings that those characters can express alone. 
Because of the sequential inputs, they will either lose word level information when operating on character sequences or have to make segmentation choices.", "In this paper, we propose a multi-granularity method for short text matching in Chinese question answering which utilizes lattice based CNNs to extract sentence level features over a word lattice. Specifically, instead of relying on character or word level sequences, LCNs take word lattices as input, where every possible word and character is treated equally and has its own context, so that they can interact at every layer. For each word in each layer, LCNs can capture different context words in different granularities via pooling methods. To the best of our knowledge, we are the first to introduce word lattices into text matching tasks. Because of their similar I/O structure to original CNNs and their high efficiency, LCNs can be easily adapted to more scenarios where flexible sentence representation modeling is required.", "We evaluate our LCNs models on two question answering tasks, document based question answering and knowledge based question answering, both in Chinese. Experimental results show that LCNs significantly outperform the state-of-the-art matching methods and other competitive CNN baselines in both scenarios. We also find that LCNs can better capture the multi-granularity information from plain sentences and, meanwhile, maintain better de-noising capability than vanilla graph convolutional neural networks thanks to their dynamic convolutional kernels and gated pooling mechanism." ], [ "Our Lattice CNNs framework is built upon the siamese architecture BIBREF5, one of the most successful frameworks in text matching, which takes the word lattice format of a pair of sentences as input, and outputs the matching score." ], [ "The siamese architecture and its variants have been widely adopted in sentence matching BIBREF6, BIBREF3 and matching based question answering BIBREF7, BIBREF0, BIBREF8. They have a symmetrical component to extract high level features from different input channels, which shares parameters and maps inputs to the same vector space. Then, the sentence representations are merged and compared to output the similarities.", "For our models, we use multi-layer CNNs for sentence representation. Residual connections BIBREF9 are used between convolutional layers to enrich features and make the network easier to train. Then, max-pooling summarizes the global features to get the sentence level representations, which are merged via element-wise multiplication. The matching score is produced by a multi-layer perceptron (MLP) with one hidden layer based on the merged vector. The fusing and matching procedure is formulated as follows: $s = \sigma\big(\mathbf{W}_2\,\mathrm{ReLU}(\mathbf{W}_1(\mathbf{v}_q \odot \mathbf{v}_c) + \mathbf{b}_1) + b_2\big)$", "where $\mathbf{v}_q$ and $\mathbf{v}_c$ are feature vectors of the question and the candidate (sentence or predicate) separately encoded by the CNNs, $\sigma$ is the sigmoid function, $\mathbf{W}_1$, $\mathbf{W}_2$, $\mathbf{b}_1$ and $b_2$ are parameters, and $\odot$ is element-wise multiplication. The training objective is to minimize the binary cross-entropy loss, defined as: $\mathcal{L} = -\sum_{i}\big[y_i \log s_i + (1-y_i)\log(1-s_i)\big]$", "where $y_i$ is the {0,1} label for the $i$-th training pair.", "Note that the CNNs in the sentence representation component can be either original CNNs with sequence input or lattice based CNNs with lattice input. 
Intuitively, in an original CNN layer, several kernels scan every n-gram in a sequence and result in one feature vector, which can be seen as the representation for the center word and will be fed into the following layers. However, each word may have different context words in different granularities in a lattice and may be treated as the center in various kernel spans with same length. Therefore, different from the original CNNs, there could be several feature vectors produced for a given word, which is the key challenge to apply the standard CNNs directly to a lattice input.", "For the example shown in Figure FIGREF6 , the word “citizen” is the center word of four text spans with length 3: “China - citizen - life”, “China - citizen - alive”, “country - citizen - life”, “country - citizen - alive”, so four feature vectors will be produced for width-3 convolutional kernels for “citizen”." ], [ "As shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents a edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 .", "Here, one of the key issues is how we decide a sequence of characters can be considered as a word. We approach this through an existing lookup vocabulary, which contains frequent words in BaiduBaike. Note that most Chinese characters can be considered as words on their own, thus are included in this vocabulary when they have been used as words on their own in this corpus.", "However, doing so will inevitably introduce noisy words (e.g., “middle” in Figure FIGREF4 ) into word lattices, which will be smoothed by pooling procedures in our model. And the constructed graphs could be disconnected because of a few out-of-vocabulary characters. Thus, we append INLINEFORM0 labels to replace those characters to connect the graph.", "Obviously, word lattices are collections of characters and all possible words. Therefore, it is not necessary to make explicit decisions regarding specific word segmentations, but just embed all possible information into the lattice and take them to the next CNN layers. The inherent graph structure of a word lattice allows all possible words represented explicitly, no matter the overlapping and nesting cases, and all of them can contribute directly to the sentence representations." ], [ "As we mentioned in previous section, we can not directly apply standard CNNs to take word lattice as input, since there could be multiple feature vectors produced for a given word. Inspired by previous lattice LSTM models BIBREF10 , BIBREF11 , here we propose a lattice based CNN layers to allow standard CNNs to work over word lattice input. 
Specifically, we utilize pooling mechanisms to merge the feature vectors produced by multiple CNN kernels over different context compositions.", "Formally, the output feature vector of a lattice CNN layer with kernel size INLINEFORM0 at word INLINEFORM1 in a word lattice INLINEFORM2 can be formulated as Eq EQREF12 : DISPLAYFORM0 ", "where INLINEFORM0 is the activation function, INLINEFORM1 is the input vector corresponding to word INLINEFORM2 in this layer, ( INLINEFORM3 means the concatenation of these vectors, and INLINEFORM4 are parameters with size INLINEFORM5 , and INLINEFORM6 , respectively. INLINEFORM7 is the input dim and INLINEFORM8 is the output dim. INLINEFORM9 is one of the following pooling functions: max-pooling, ave-pooling, or gated-pooling, which execute the element-wise maximum, element-wise average, and the gated operation, respectively. The gated operation can be formulated as: DISPLAYFORM0 ", "where INLINEFORM0 are parameters, and INLINEFORM1 are gated weights normalized by a softmax function. Intuitively, the gates represent the importance of the n-gram contexts, and the weighted sum can control the transmission of noisy context words. We perform padding when necessary.", "For example, in Figure FIGREF6 , when we consider “citizen” as the center word, and the kernel size is 3, there will be five words and four context compositions involved, as mentioned in the previous section, each marked in different colors. Then, 3 kernels scan on all compositions and produce four 3-dim feature vectors. The gated weights are computed based on those vectors via a dense layer, which can reflect the importance of each context compositions. The output vector of the center word is their weighted sum, where noisy contexts are expected to have lower weights to be smoothed. This pooling over different contexts allows LCNs to work over word lattice input.", "Word lattice can be seen as directed graphs and modeled by Directed Graph Convolutional networks (DGCs) BIBREF12 , which use poolings on neighboring vertexes that ignore the semantic structure of n-grams. But to some situations, their formulations can be very similar to ours (See Appendix for derivation). For example, if we set the kernel size in LCNs to 3, use linear activations and suppose the pooling mode is average in both LCNs and DGCs, at each word in each layer, the DGCs compute the average of the first order neighbors together with the center word, while the LCNs compute the average of the pre and post words separately and add them to the center word. Empirical results are exhibited in Experiments section.", "Finally, given a sentence that has been constructed into a word-lattice form, for each node in the lattice, an LCN layer will produce one feature vector similar to original CNNs, which makes it easier to stack multiple LCN layers to obtain more abstract feature representations." ], [ "Our experiments are designed to answer: (1) whether multi-granularity information in word lattice helps in matching based QA tasks, (2) whether LCNs capture the multi-granularity information through lattice well, and (3) how to balance the noisy and informative words introduced by word lattice." ], [ "We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .", "DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. 
In average, each question has 20.6 candidate sentences and 1.04 golden answers. The average length for questions is 15.9 characters, and each candidate sentence has averagely 38.4 characters. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages, and are often much longer than the questions, with many irrelevant clauses.", "KBRE is a knowledge based relation extraction dataset. We follow the same preprocess as BIBREF14 to clean the dataset and replace entity mentions in questions to a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. Each question averagely has 18.1 candidate predicates and 8.1 characters in length, while a KB predicate is only 3.4 characters long on average. Note that a KB predicate is usually a concise phrase, with quite different word choices compared to the natural language questions, which poses different challenges to solve.", "The vocabulary we use to construct word lattices contains 156k words, including 9.1k single character words. In average, each DBQA question contains 22.3 tokens (words or characters) in its lattice, each DBQA candidate sentence has 55.8 tokens, each KBQA question has 10.7 tokens and each KBQA predicate contains 5.1 tokens." ], [ "For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used." ], [ "The word embeddings are trained on the Baidu Baike webpages with Google's word2vector, which are 300-dim and fine tuned during training. In DBQA, we also follow previous works BIBREF15 , BIBREF16 to concatenate additional 1d-indicators with word vectors which denote whether the words are concurrent in both questions and candidate sentences. In each CNN layer, there are 256, 512, and 256 kernels with width 1, 2, and 3, respectively. The size of the hidden layer for MLP is 1024. All activation are ReLU, the dropout rate is 0.5, with a batch size of 64. We optimize with adadelta BIBREF17 with learning rate INLINEFORM0 and decay factor INLINEFORM1 . We only tune the number of convolutional layers from [1, 2, 3] and fix other hyper-parameters. We sample at most 10 negative sentences per question in DBQA and 5 in KBRE. We implement our models in Keras with Tensorflow backend." ], [ "Our first set of baselines uses original CNNs with character (CNN-char) or word inputs. For each sentence, two Chinese word segmenters are used to obtain three different word sequences: jieba (CNN-jieba), and Stanford Chinese word segmenter in CTB (CNN-CTB) and PKU (CNN-PKU) mode.", "Our second set of baselines combines different word segmentations. Specifically, we concatenate the sentence embeddings from different segment results, which gives four different word+word models: jieba+PKU, PKU+CTB, CTB+jieba, and PKU+CTB+jieba.", "Inspired by previous works BIBREF2 , BIBREF3 , we also concatenate word and character embeddings at the input level. Specially, when the basic sequence is in word level, each word may be constructed by multiple characters through a pooling operation (Word+Char). 
Our pilot experiments show that average-pooling is the best for DBQA while max-pooling after a dense layer is the best for KBQA. When the basic sequence is in character level, we simply concatenate the character embedding with its corresponding word embedding (Char+Word), since each character belongs to one word only. Again, when the basic sequence is in character level, we can also concatenate the character embedding with a pooled representation of all words that contain this character in the word lattice (Char+Lattice), where we use max pooling as suggested by our pilot experiments.", "DGCs BIBREF12 , BIBREF18 are strong baselines that perform CNNs over directed graphs to produce high level representation for each vertex in the graph, which can be used to build a sentence representation via certain pooling operation. We therefore choose to compare with DGC-max (with maximum pooling), DGC-ave (with average pooling), and DGC-gated (with gated pooling), where the gate value is computed using the concatenation of the vertex vector and the center vertex vector through a dense layer. We also implement several state-of-the-art matching models using the open-source project MatchZoo BIBREF19 , where we tune hyper-parameters using grid search, e.g., whether using word or character inputs. Arc1, Arc2, CDSSM are traditional CNNs based matching models proposed by BIBREF20 , BIBREF21 . Arc1 and CDSSM compute the similarity via sentence representations and Arc2 uses the word pair similarities. MV-LSTM BIBREF22 computes the matching score by examining the interaction between the representations from two sentences obtained by a shared BiLSTM encoder. MatchPyramid(MP) BIBREF23 utilizes 2D convolutions and pooling strategies over word pair similarity matrices to compute the matching scores.", "We also compare with the state-of-the-art models in DBQA BIBREF15 , BIBREF16 ." ], [ "Here, we mainly describe the main results on the DBQA dataset, while we find very similar trends on the KBRE dataset. Table TABREF26 summarizes the main results on the two datasets. We can see that the simple MatchZoo models perform the worst. Although Arc1 and CDSSM are also constructed in the siamese architecture with CNN layers, they do not employ multiple kernel sizes and residual connections, and fail to capture the relatedness in a multi-granularity fashion.", " BIBREF15 is similar to our word level models (CNN-jieba/PKU/CTB), but outperforms our models by around 3%, since it benefits from an extra interaction layer with fine tuned hyper-parameters. BIBREF16 further incorporates human designed features including POS-tag interaction and TF-IDF scores, achieving state-of-the-art performance in the literature of this DBQA dataset. However, both of them perform worse than our simple CNN-char model, which is a strong baseline because characters, that describe the text in a fine granularity, can relieve word mismatch problem to some extent. And our best LCNs model further outperforms BIBREF16 by .0134 in MRR.", "For single granularity CNNs, CNN-char performs better than all word level models, because they heavily suffer from word mismatching given one fixed word segmentation result. And the models that utilize different word segmentations can relieve this problem and gain better performance, which can be further improved by the combination of words and characters. 
The DGCs and LCNs, being able to work on lattice input, outperform all previous models that have sequential inputs, indicating that the word lattice is a more promising form than a single word sequence, and should be better captured by taking the inherent graph structure into account. Although they take the same input, LCNs still perform better than the best DGCs by a margin, showing the advantages of the CNN kernels over multiple n-grams in the lattice structures and the gated pooling strategy.", "To fairly compare with previous KBQA works, we combine our LCN-ave settings with the entity linking results of the state-of-the-art KBQA model BIBREF14 . The P@1 for question answering of single LCN-ave is 86.31%, which outperforms both the best single model (84.55%) and the best ensembled model (85.40%) in literature." ], [ "As shown in Table TABREF26 , the combined word level models (e.g. CTB+jieba or PKU+CTB) perform better than any word level CNNs with single word segmentation result (e.g. CNN-CTB or CNN-PKU). The main reason is that there are often no perfect Chinese word segmenters and a single improper segmentation decision may harm the matching performance, since that could further make the word mismatching issue worse, while the combination of different word segmentation results can somehow relieve this situation.", "Furthermore, the models combining words and characters all perform better than PKU+CTB+jieba, because they could be complementary in different granularities. Specifically, Word+Char is still worse than CNN-char, because Chinese characters have rich meanings and compressing several characters to a single word vector will inevitably lose information. Furthermore, the combined sequence of Word+Char still exploits in a word level, which still suffers from the single segmentation decision. On the other side, the Char+Word model is also slightly worse than CNN-char. We think one reason is that the reduplicated word embeddings concatenated with each character vector confuse the CNNs, and perhaps lead to overfitting. But, we can still see that Char+Word performs better than Word+Char, because the former exploits in a character level and the fine-granularity information actually helps to relieve word mismatch. Note that Char+Lattice outperforms Char+Word, and even slightly better than CNN-char. This illustrates that multiple word segmentations are still helpful to further improve the character level strong baseline CNN-char, which may still benefit from word level information in a multi-granularity fashion.", "In conclusion, the combination between different sequences and information of different granularities can help improve text matching, showing that it is necessary to consider the fashion which considers both characters and more possible words, which perhaps the word lattice can provide.", "For DGCs with different kinds of pooling operations, average pooling (DGC-ave) performs the best, which delivers similar performance with LCN-ave. While DGC-max performs a little worse, because it ignores the importance of different edges and the maximum operation is more sensitive to noise than the average operation. The DGC-gated performs the worst. Compared with LCN-gated that learns the gate value adaptively from multiple n-gram context, it is harder for DGC to learn the importance of each edge via the node and the center node in the word lattice. 
It is therefore not surprising that LCN-gated performs much better than DGC-gated, indicating again that n-grams in the word lattice play an important role in context modeling, whereas DGCs are designed for general directed graphs and may not be ideally suited to word lattices.", "Among the LCN pooling variants, LCN-max and LCN-ave lead to similar performance and do better on KBRE, while LCN-gated is better on DBQA. This may be because sentences in DBQA are relatively long and contain more irrelevant information, so noisy context needs to be filtered out, whereas on KBRE, with its much shorter predicate phrases, LCN-gated may slightly overfit due to its more complex structure. Overall, LCNs perform better than DGCs, thanks to their better ability to capture multiple n-gram contexts in the word lattice.", "To investigate more intuitively how LCNs exploit multiple granularities, we analyze the MRR score against the granularity of the overlaps between questions and answers in the DBQA dataset, as shown in Figure FIGREF32. CNN-char performs markedly better than CNN-CTB in the first few groups, where most overlaps are single characters and word mismatch is severe. As the overlaps grow longer, CNN-CTB catches up and finally overtakes CNN-char, even though its overall performance is much lower. These results show that word information is, to some extent, complementary to characters. LCN-gated is close to CNN-char in the first few groups and outperforms both the character-level and word-level models in the later groups, where word-level information becomes more useful. This demonstrates that LCNs can effectively take advantage of different granularities, and that the combination is not harmful even when the matching clues lie at one extreme.", "How to Create Word Lattice: In the previous experiments we construct the word lattice from an existing lookup vocabulary, which inevitably introduces some noisy words. Here we instead construct lattices from various word segmentations, using different strategies, in order to investigate the balance between the noise and the additional information that a lattice introduces. We only use the DBQA dataset, because its word lattices are more complex and the construction strategy therefore has more influence. Pilot experiments show that word lattices built on top of the character sequence perform better, so the strategies in Table TABREF33 are based on CNN-char.", "Table TABREF33 shows that all kinds of lattice outperform CNN-char, which again supports the usefulness of word information. Among the LCN models, a more complex lattice in principle produces better performance, indicating that LCNs handle noisy words well and that their influence does not cancel the positive information brought by richer lattices. It is also noticeable that LCN-gated is better than LCN-C+20 by a considerable margin, which shows that words outside the usual tokenizations (e.g., “livelihood” in Fig FIGREF4) are potentially useful.", "Apart from the enlarged vocabulary, LCNs introduce only a negligible number of extra parameters, in the gated pooling, and are therefore not a heavy burden. The training speed is about 2.8 batches per second, 5 times slower than the original CNNs, and the whole training of a 2-layer LCN-gated on the DBQA dataset takes only about 37.5 minutes. Efficiency could be improved further if the network structure were built dynamically with suitable frameworks. This speed and the small parameter increment make LCNs promising for a wider range of NLP tasks.
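As a companion to the construction strategies compared above, the snippet below sketches the vocabulary-based lattice construction: every substring of the character sequence found in a lookup vocabulary becomes a vertex, and adjacent words are linked by directed edges. The toy vocabulary, the maximum word length, and the single-character fallback are illustrative assumptions, not the authors' preprocessing code.

```python
from collections import namedtuple

Word = namedtuple("Word", ["start", "end", "text"])  # `end` is exclusive

def build_word_lattice(chars, vocab, max_word_len=4):
    """Vertices: all substrings of `chars` found in `vocab`, plus the single
    characters themselves (so the lattice always contains the character
    sequence).  Edges: word u -> word v whenever v starts exactly where u ends."""
    vertices = []
    n = len(chars)
    for i in range(n):
        vertices.append(Word(i, i + 1, chars[i]))           # single characters
        for j in range(i + 2, min(i + max_word_len, n) + 1):
            cand = "".join(chars[i:j])
            if cand in vocab:
                vertices.append(Word(i, j, cand))           # dictionary words
    edges = [(u, v) for u in vertices for v in vertices if v.start == u.end]
    return vertices, edges

# Toy example with placeholder "characters" and a toy vocabulary.
chars = list("ABCD")
vocab = {"AB", "BCD", "CD"}
vertices, edges = build_word_lattice(chars, vocab)
for u, v in edges:
    print(f"{u.text} -> {v.text}")
```

In practice the vocabulary would be the lookup dictionary mentioned above, and the resulting directed acyclic graph is what the lattice convolutions and poolings operate on.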
" ], [ "Figure FIGREF37 shows a case study comparing models at different input levels. The word-level model makes relatively coarse use of the available information and selects the sentence with the longest overlap (5 words, 12 characters). However, it does not realize that the question asks for a number of people, and that “DaoHang” (navigate) is a verb in the question but a noun in the sentence. The character-level model finds a long sentence covering most of the characters in the question, which shows the power of fine-grained matching; but without the help of words it can hardly distinguish the “Ren” (people) in “DuoShaoRen” (how many people) from the one in “ChuangShiRen” (founder), and so it misses the most important information. In the lattice model, although the overlaps are limited, “WangZhan” (website; “Wang” web, “Zhan” station) can match “WangZhi” (Internet address; “Wang” web, “Zhi” address) and also relates to “DaoHang” (navigate), from which the model can infer that “WangZhan” refers to “tao606 seller website navigation” (a website name). Moreover, “YongHu” (user) can match “Ren” (people). By letting characters and words cooperate, the lattice model catches the key points of the question, eliminates the other two candidates, and finds the correct answer." ], [ "Deep learning models have been widely adopted for natural language sentence matching. Representation-based models BIBREF21, BIBREF7, BIBREF0, BIBREF8 encode and compare the two matching branches in a hidden space. Interaction-based models BIBREF23, BIBREF22, BIBREF3 incorporate interaction features between all word pairs and use 2D convolutions to extract matching features. Our models are built on the representation-based architecture, which is better suited to short text matching.", "In recent years, many researchers have sought to exploit external or multi-granularity information in matching tasks. BIBREF24 use hidden units at different depths to model interactions between substrings of different lengths, BIBREF3 combine multiple pooling methods when merging sentence-level features, and BIBREF4 exploit interactions between text spans of different lengths. Closer to our work, BIBREF3 also incorporate characters, which are fed into LSTMs whose outputs are concatenated with word embeddings, and BIBREF8 use words together with predicate-level tokens in the KBRE task. However, none of them exploit the multi-granularity information contained in word lattices for languages like Chinese, which have no spaces to delimit words naturally. Furthermore, our model is compatible with most of these approaches (all except BIBREF3) and could be combined with them for further improvement.", "GCNs BIBREF25, BIBREF26 and graph-RNNs BIBREF27, BIBREF28 extend CNNs and RNNs to graph-structured inputs, and DGCs generalize GCNs to directed graphs in semantic role labeling BIBREF12, document dating BIBREF18, and SQL query embedding BIBREF29. However, DGCs control the information flowing from neighbouring vertices via edge types, whereas we focus on capturing the different contexts of each word in the lattice via convolutional kernels and pooling.", "Previous work has fed Chinese lattices into RNNs for Chinese-English translation BIBREF10, Chinese named entity recognition BIBREF11, and Chinese word segmentation BIBREF30. To the best of our knowledge, we are the first to run CNNs over word lattices, and the first to use word lattices in matching tasks.
Moreover, while those works mainly address the error propagated from segmenters, our motivation is to exploit the multi-granularity information in word lattices in order to relieve word mismatch and the diversity of expressions in Chinese question answering." ], [ "In this paper, we propose a novel lattice-based neural matching method (LCNs) for matching-based question answering in Chinese. Rather than relying on a single word sequence, our model takes a word lattice as input. By performing convolutions over multiple n-gram contexts to exploit multi-granularity information, LCNs relieve the word mismatch challenge. Thorough experiments show that our model explores the word lattice effectively through convolutional operations and rich context-aware pooling, and thus outperforms the state-of-the-art models and competitive baselines by a large margin. Further analyses show that the lattice input combines the advantages of word-level and character-level information, and that the vocabulary-based lattice constructor outperforms strategies that merely combine characters with different word segmentations." ], [ "This work is supported by the Natural Science Foundation of China (Grant No. 61672057, 61672058, 61872294); the UK Engineering and Physical Sciences Research Council under grants EP/M01567X/1 (SANDeRs) and EP/M015793/1 (DIVIDEND); and the Royal Society International Collaboration Grant (IE161012). For any correspondence, please contact Yansong Feng." ] ] }
{ "question": [ "How do they obtain word lattices from words?", "Which metrics do they use to evaluate matching?", "Which dataset(s) do they evaluate on?" ], "question_id": [ "76377e5bb7d0a374b0aefc54697ac9cd89d2eba8", "85aa125b3a15bbb6f99f91656ca2763e8fbdb0ff", "4b128f9e94d242a8e926bdcb240ece279d725729" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "By considering words as vertices and generating directed edges between neighboring words within a sentence", "evidence": [ "Word Lattice", "As shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents a edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 ." ], "highlighted_evidence": [ "Word Lattice\nAs shown in Figure FIGREF4 , a word lattice is a directed graph INLINEFORM0 , where INLINEFORM1 represents a node set and INLINEFORM2 represents a edge set. For a sentence in Chinese, which is a sequence of Chinese characters INLINEFORM3 , all of its possible substrings that can be considered as words are treated as vertexes, i.e. INLINEFORM4 . Then, all neighbor words are connected by directed edges according to their positions in the original sentence, i.e. INLINEFORM5 ." ] } ], "annotation_id": [ "16a08b11f033b08e392175ed187aebd84970919c" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Precision@1", "Mean Average Precision", "Mean Reciprocal Rank" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used." ], "highlighted_evidence": [ "For both datasets, we follow the evaluation metrics used in the original evaluation tasks BIBREF13 . For DBQA, P@1 (Precision@1), MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) are adopted. For KBRE, since only one golden candidate is labeled for each question, only P@1 and MRR are used." ] } ], "annotation_id": [ "0a87b02811796b7a34c65018823bc2bf7b874e4a" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "DBQA", "KBRE" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Datasets", "We conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .", "DBQA is a document based question answering dataset. There are 8.8k questions with 182k question-sentence pairs for training and 6k questions with 123k question-sentence pairs in the test set. In average, each question has 20.6 candidate sentences and 1.04 golden answers. 
The average length for questions is 15.9 characters, and each candidate sentence has averagely 38.4 characters. Both questions and sentences are natural language sentences, possibly sharing more similar word choices and expressions compared to the KBQA case. But the candidate sentences are extracted from web pages, and are often much longer than the questions, with many irrelevant clauses.", "KBRE is a knowledge based relation extraction dataset. We follow the same preprocess as BIBREF14 to clean the dataset and replace entity mentions in questions to a special token. There are 14.3k questions with 273k question-predicate pairs in the training set and 9.4k questions with 156k question-predicate pairs for testing. Each question contains only one golden predicate. Each question averagely has 18.1 candidate predicates and 8.1 characters in length, while a KB predicate is only 3.4 characters long on average. Note that a KB predicate is usually a concise phrase, with quite different word choices compared to the natural language questions, which poses different challenges to solve." ], "highlighted_evidence": [ "Datasets\nWe conduct experiments on two Chinese question answering datasets from NLPCC-2016 evaluation task BIBREF13 .", "DBQA is a document based question answering dataset. ", "KBRE is a knowledge based relation extraction dataset." ] } ], "annotation_id": [ "4e2de011ee880e520268d7144efde72ef499a962" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 1: A word lattice for the phrase “Chinese people have high quality of life.”", "Figure 2: An illustration of our LCN-gated, when “人民” (people) is being considered as the center of convolutional spans.", "Table 1: The performance of all models on the two datasets. The best results in each group are bolded. * is the best published DBQA result.", "Figure 3: MRR score against granularities of overlaps between questions and answers, which is the average length of longest common substrings. About 2.3% questions are ignored for they have no overlaps and the rests are separated in 12 groups orderly and equally. Group 1 has the least average overlap length while group 12 has the largest.", "Table 2: Comparisons of various ways to construct word lattice. l.qu and l.sen are the average token number in questions and sentences respectively. The 4 models in the middle construct lattices by adding words to CNN-char. +2& considers the intersection of words of CTB and PKU mode while +2 considers the union. +20 uses the top 10 results of the two segmentors.", "Table 3: Example, questions (in word) and 3 sentences selected by 3 systems. Bold mean sequence exactly match between question and answer." ], "file": [ "1-Figure1-1.png", "3-Figure2-1.png", "5-Table1-1.png", "6-Figure3-1.png", "6-Table2-1.png", "7-Table3-1.png" ] }
2003.04748
On the coexistence of competing languages
We investigate the evolution of competing languages, a subject where much previous literature suggests that the outcome is always the domination of one language over all the others. Since coexistence of languages is observed in reality, we here revisit the question of language competition, with an emphasis on uncovering the ways in which coexistence might emerge. We find that this emergence is related to symmetry breaking, and explore two particular scenarios -- the first relating to an imbalance in the population dynamics of language speakers in a single geographical area, and the second to do with spatial heterogeneity, where language preferences are specific to different geographical regions. For each of these, the investigation of paradigmatic situations leads us to a quantitative understanding of the conditions leading to language coexistence. We also obtain predictions of the number of surviving languages as a function of various model parameters.
{ "section_name": [ "Introduction", "Breaking internal symmetry: language coexistence by imbalanced population dynamics", "Breaking internal symmetry: language coexistence by imbalanced population dynamics ::: Two competing languages", "Breaking internal symmetry: language coexistence by imbalanced population dynamics ::: @!START@$N$@!END@ competing languages", "Breaking internal symmetry: language coexistence by imbalanced population dynamics ::: The case of equally spaced attractivenesses", "Breaking internal symmetry: language coexistence by imbalanced population dynamics ::: The general case", "Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses", "Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses ::: Two geographic areas", "Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses ::: @!START@$M$@!END@ geographical areas", "Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses ::: Ordered attractiveness profiles", "Breaking spatial symmetry: language coexistence by inhomogeneous attractivenesses ::: Random attractiveness profiles", "Discussion", "Asymptotic analysis for a large number of competing languages in a single area", "Stability matrices and their spectra ::: Generalities", "Stability matrices and their spectra ::: Array models", "Stability matrices and their spectra ::: Array models ::: Random arrays", "Stability matrices and their spectra ::: Array models ::: Ordered arrays" ], "paragraphs": [ [ "The dynamics of language evolution is one of many interdisciplinary fields to which methods and insights from statistical physics have been successfully applied (see BIBREF0 for an overview, and BIBREF1 for a specific comprehensive review).", "In this work we revisit the question of language coexistence. It is known that a sizeable fraction of the more than 6000 languages that are currently spoken, is in danger of becoming extinct BIBREF2, BIBREF3, BIBREF4. In pioneering work by Abrams and Strogatz BIBREF5, theoretical predictions were made to the effect that less attractive or otherwise unfavoured languages are generally doomed to extinction, when contacts between speakers of different languages become sufficiently frequent. Various subsequent investigations have corroborated this finding, emphasising that the simultaneous coexistence of competing languages is only possible in specific circumstances BIBREF6, BIBREF7, all of which share the common feature that they involve some symmetry breaking mechanism BIBREF1. A first scenario can be referred to as spatial symmetry breaking. Different competing languages may coexist in different geographical areas, because they are more or less favoured locally, despite the homogenising effects of migration and language shift BIBREF8, BIBREF9, BIBREF10. A second scenario corresponds to a more abstract internal symmetry breaking. Two or more competing languages may coexist at a given place if the populations of speakers of these languages have imbalanced dynamics BIBREF11, BIBREF12, BIBREF13. Moreover, it has been shown that a stable population of bilinguals or multilinguals also favours the coexistence of several languages BIBREF14, BIBREF15, BIBREF16.", "The aim of the present study is to provide a quantitative understanding of the conditions which ensure the coexistence of two or more competing languages within each of the symmetry breaking scenarios outlined above. 
Throughout this paper, in line with many earlier studies on the dynamics of languages BIBREF5, BIBREF7, BIBREF8, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, and with an investigation of grammar acquisition BIBREF17, we describe the dynamics of the numbers of speakers of various languages by means of coupled rate equations. This approach is sometimes referred to as ecological modelling, because of its similarity with models used in theoretical ecology (see e.g. BIBREF18). From a broader perspective, systems of coupled differential equations, and especially Lotka-Volterra equations and replicator equations, are ubiquitous in game theory and in a broad range of areas in mathematical biology (see e.g. BIBREF19, BIBREF20, BIBREF21).", "The plan of this paper is as follows. For greater clarity, we first consider in Section SECREF2 the situation of several competing languages in a single geographic area where the population is well mixed. We address the situation where internal symmetry is broken by imbalanced population dynamics. The relevant concepts are reviewed in detail in the case of two competing languages in Section SECREF1, and the full phase diagram of the model is derived. The case of an arbitrary number $N$ of competing languages is then considered in Section SECREF11 in full generality. The special situation where the attractivenesses of the languages are equally spaced is studied in Section SECREF22, whereas Section SECREF34 is devoted to the case where attractivenesses are modelled as random variables. Section SECREF3 is devoted to the situation where coexistence is due to spatial symmetry breaking. We focus our attention onto the simple case of two languages in competition on a linear array of $M$ distinct geographic areas. Language attractivenesses vary arbitrarily along the array, whereas migrations take place only between neighbouring areas at a uniform rate $\\gamma $. A uniform consensus is reached at high migration rate, where the same language survives everywhere. This general result is demonstrated in detail for two geographic areas (Section SECREF57), and generalised to an arbitrary number $M$ of areas (Section SECREF67). The cases of ordered and random attractiveness profiles are investigated in Sections SECREF71 and SECREF84. In Section SECREF4 we present a non-technical discussion of our findings and their implications. Two appendices contain technical details about the regime of a large number of competing languages in a single geographic area (Appendix SECREF5) and about stability matrices and their spectra (Appendix SECREF6)." ], [ "This section is devoted to the dynamics of languages in a single geographic area. As mentioned above, it has been shown that two or more competing languages may coexist only if the populations of speakers of these languages have imbalanced dynamics BIBREF11, BIBREF12, BIBREF13. Our goal is to make these conditions more explicit and to provide a quantitative understanding of them." ], [ "We begin with the case of two competing languages. We assume that language 1 is more favoured than language 2. Throughout this work we neglect the effect of bilingualism, so that at any given time $t$ each individual speaks a single well-defined language. 
Let $X_1(t)$ and $X_2(t)$ denote the numbers of speakers of each language at time $t$, so that $X(t)=X_1(t)+X_2(t)$ is the total population of the area under consideration.", "The dynamics of the model is defined by the coupled rate equations", "The above equations are an example of Lotka-Volterra equations (see e.g. BIBREF18, BIBREF19). The terms underlined by braces describe the intrinsic dynamics of the numbers of speakers of each language. For the sake of simplicity we have chosen the well-known linear-minus-bilinear or `logistic' form which dates back to Lotka BIBREF22 and is still commonly used in population dynamics. The linear term describes population growth, whereas the quadratic terms represent a saturation mechanism.", "The main novelty of our approach is the introduction of the parameter $q$ in the saturation terms. This imbalance parameter is responsible for the internal symmetry breaking leading to language coexistence. It allows for the interpolation between two situations: when the saturation mechanism only involves the total population, i.e., $q=1$, and when the saturation mechanism acts separately on the populations of speakers of each language, $q=0$, which is the situation considered by Pinasco and Romanelli BIBREF11. Generic values of $q$ correspond to tunably imbalanced dynamics.", "The last term in each of equations (DISPLAY_FORM2), () describes the language shift consisting of the conversions of single individuals from the less favoured language 2 to the more favoured language 1. In line with earlier studies BIBREF7, BIBREF11, BIBREF12, BIBREF13, conversions are triggered by binary interactions between individuals, so that the frequency of conversions is proportional to the product $X_1(t)X_2(t)$. The reduced conversion rate $C$ measures the difference of attractivenesses between the two languages.", "For generic values of the parameters $q$ and $C$, the rate equations (DISPLAY_FORM2), () admit a unique stable fixed point. The dynamics converges exponentially fast to the corresponding stationary state, irrespective of initial conditions. There are two possible kinds of stationary states:", "I. Consensus.", "The solution", "describes a consensus state where the unfavoured language 2 is extinct. The inverse relaxation times describing convergence toward the latter state are the opposites of the eigenvalues of the stability matrix associated with equations (DISPLAY_FORM2), (). The reader is referred to Appendix SECREF131 for details. These inverse relaxation times read", "The above stationary solution is thus stable whenever $q+C>1$.", "II. Coexistence.", "The solution", "describes a coexistence state where both languages survive forever. This stationary solution exists whenever $q+C<1$. It is always stable, as the inverse relaxation times read", "Figure FIGREF9 shows the phase diagram of the model in the $q$–$C$ plane. There is a possibility of language coexistence only for $q<1$. The vertical axis ($q=0$) corresponds to the model considered by Pinasco and Romanelli BIBREF11, where the coexistence phase is maximal and extends up to $C=1$. As the parameter $q$ is increased, the coexistence phase shrinks until it disappears at the point $q=1$, corresponding to the balanced dynamics where the saturation mechanism involves the total population.", "The model exhibits a continuous transition along the phase boundary between both phases ($q+C=1$). 
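This phase structure is easy to check numerically. The sketch below integrates a pair of rate equations of the form described above; since the displayed equations are only referenced here, the explicit right-hand sides written in the code are an assumption, chosen to be consistent with the stated fixed points and with the phase boundary $q+C=1$.

```python
import numpy as np

def rhs(x, q, C):
    """Two-language rate equations: logistic growth with the imbalance
    parameter q in the saturation terms, plus a conversion term C*X1*X2
    from language 2 to language 1.  This explicit form is an assumption
    consistent with the consensus/coexistence fixed points quoted in the text."""
    X1, X2 = x
    return np.array([X1 * (1.0 - X1 - q * X2) + C * X1 * X2,
                     X2 * (1.0 - X2 - q * X1) - C * X1 * X2])

def integrate(q, C, x0=(0.5, 0.5), dt=1e-3, steps=200_000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):          # forward Euler is sufficient here
        x = x + dt * rhs(x, q, C)
    return x

for q, C in [(0.2, 0.3), (0.8, 0.5)]:   # q+C<1: coexistence, q+C>1: consensus
    X1, X2 = integrate(q, C)
    print(f"q={q}, C={C}: X1={X1:.4f}, X2={X2:.4f}")
# Expected behaviour: both components stay positive in the first case
# (coexistence), while (X1, X2) -> (1, 0) in the second case (consensus).
```

Sweeping $C$ at fixed $q$ across the line $q+C=1$ in this sketch shows the second component vanishing continuously, in line with the transition just described.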
The number $X_2$ of speakers of the unfavoured language vanishes linearly as the phase boundary is approached from the coexistence phase (see (DISPLAY_FORM7)), whereas the relaxation time $1/\\omega _2$ diverges linearly as the phase boundary is approached from both sides (see (DISPLAY_FORM5) and (DISPLAY_FORM8)).", "For parameters along the phase boundary ($q+C=1$), the less attractive language still becomes extinct, albeit very slowly. Equations (DISPLAY_FORM2), () here yield the power-law relaxation laws", "irrespective of initial conditions." ], [ "The above setting can be extended to the case of an arbitrary number $N$ of competing languages in a given area. Languages, numbered $i=1,\\dots ,N$, are more or less favoured, depending on their attractivenesses $A_i$. The latter quantities are assumed to be quenched, i.e., fixed once for all. This non-trivial static profile of attractivenesses is responsible for conversions of single individuals from less attractive to more attractive languages.", "Let $X(t)$ be the total population of the area under consideration at time $t$, and $X_i(t)$ be the number of speakers of language number $i=1,\\dots ,N$. The dynamics of the model are defined by the rate equations", "The terms underlined by braces describe the intrinsic dynamics of the numbers of speakers of each language. The novel feature here is again the presence of the parameter $q$, which is responsible for imbalanced dynamics, allowing thus the possibility of language coexistence. The last term in (DISPLAY_FORM12) describes the conversions of single individuals. If language $i$ is more attractive than language $j$, there is a net positive conversion rate $C_{ji}=-C_{ij}$ from language $j$ to language $i$. For the sake of simplicity, we assume that these conversion rates depend linearly on the differences of attractivenesses between departure and target languages, i.e.,", "in some consistent units.", "Throughout this work we shall not pay any attention to the evolution of the whole population $X(t)$. We therefore reformulate the model in terms of the fractions", "of speakers of the various languages, which sum up to unity:", "The reduction to be derived below is quite natural in the present setting. It provides an example of the reduction of Lotka-Volterra equations to replicator equations, proposed in BIBREF23 (see also BIBREF19, BIBREF20, BIBREF21). In the present situation, for $q<1$, which is precisely the range of $q$ where there is a possibility of language coexistence, the dynamics of the fractions $x_i(t)$ obeys the following reduced rate equations, which can be derived from (DISPLAY_FORM12):", "with", "and where attractivenesses and conversion rates have been rescaled according to", "In the following, we focus our attention onto the stationary states of the model, rather than on its dynamics. It is therefore legitimate to redefine time according to", "so that equations (DISPLAY_FORM16) simplify to", "The rate equations (DISPLAY_FORM20) for the fractions of speakers of the $N$ competing languages will be the starting point of further developments. The quantity $Z(t)$ can be alternatively viewed as a dynamical Lagrange multiplier ensuring that the dynamics conserves the sum rule (DISPLAY_FORM15). The above equations belong to the class of replicator equations (see e.g. BIBREF19, BIBREF20, BIBREF21). 
Extensive studies of the dynamics of this class of equations have been made in mathematical biology, where the main focus has been on systematic classifications of fixed points and bifurcations in low-dimensional cases BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28.", "From now on, we focus on the stationary state of the model for arbitrarily high values of the number $N$ of competing languages. The analysis of this goes as follows. The stationary values $x_i$ of the fractions of speakers are such that the right-hand sides of (DISPLAY_FORM20) vanish. For each language number $i$, there are two possibilities: either $x_i=0$, i.e., language $i$ gets extinct, or $x_i>0$, i.e., language $i$ survives forever. The non-zero fractions $x_i$ of speakers of surviving languages obey the coupled linear equations", "where the parameter $Z$ is determined by expressing that the sum rule (DISPLAY_FORM15) holds in the stationary state. For generic values of model parameters, there is a unique stationary state, and the system relaxes exponentially fast to the latter, irrespective of its initial conditions. The uniqueness of the attractor is characteristic of the specific form of the rate equations (DISPLAY_FORM20), (DISPLAY_FORM21), with skew-symmetric conversion rates $c_{ij}$ (see (DISPLAY_FORM18)). This has been demonstrated explicitly in the case of two competing languages, studied in detail in Section SECREF1. The problem is however more subtle than it seems at first sight, as the number $K$ of surviving languages depends on model parameters in a non-trivial way." ], [ "It is useful to consider first the simple case where the (reduced) attractivenesses $a_i$ of the $N$ competing languages are equally spaced between 0 and some maximal value that we denote by $2g$. Numbering languages in order of decreasing attractivenesses, so that language 1 is the most attractive and language $N$ the least attractive, this reads", "We have", "The parameter $g$ is therefore the mean attractiveness.", "The (reduced) conversion rates read", "so that the fixed-point equations (DISPLAY_FORM21) take the form", "Already in this simple situation the number $K$ of surviving languages depends on the mean attractiveness $g$ in a non-trivial way.", "Consider first the situation where all languages survive ($K=N$). This is certainly true for $g=0$, where there are no conversions, so that the solution is simply $x_i=1/N$. There, all languages are indeed equally popular, as nothing distinguishes them. More generally, as long as all languages survive, the stationary solution obeying (DISPLAY_FORM26) reads", "for $i=1,\\dots ,N$. The above solution ceases to hold when the fraction of speakers of the least attractive language vanishes, i.e., $x_N=0$. This first extinction takes place for the threshold value", "of the mean attractiveness $g$.", "Consider now the general case where only $K$ among the $N$ languages survive. These are necessarily the $K$ most attractive ones, shown as red symbols in Figure FIGREF29.", "In this situation, (DISPLAY_FORM26) yields", "for $i=1,\\dots ,K$. The linear relationship between the attractiveness $a_i$ of language $i$ and the stationary fraction $x_i$ of speakers of that language, observed in (DISPLAY_FORM27) and (DISPLAY_FORM30), is a general feature of the model (see Section SECREF34). 
The fraction $x_K$ of speakers of the least attractive of the surviving languages vanishes at the following threshold mean attractiveness:", "for $K=2,\\dots ,N$.", "The following picture therefore emerges for the stationary state of $N$ competing languages with equally spaced attractivenesses. The number $K$ of surviving languages decreases as a function of the mean attractiveness $g$, from $K=N$ (all languages survive) near $g=0$ to $K=1$ (consensus) as very large $g$. Less attractive languages become extinct one by one as every single one of the thresholds (DISPLAY_FORM31) is traversed, so that", "Figure FIGREF33 illustrates this picture for 5 competing languages. In each of the sectors defined in (DISPLAY_FORM32), the stationary fractions $x_i$ of speakers of the surviving languages are given by (DISPLAY_FORM30). They depend continuously on the mean attractiveness $g$, even though they are given by different expressions in different sectors. In particular, $x_i$ is flat, i.e., independent of $g$, in the sector where $K=2i-1$. The fraction $x_1$ of speakers of the most attractive language grows monotonically as a function of $g$, whereas all the other fractions of speakers eventually go to zero.", "When the number of languages $N$ is large, the range of values of $g$ where the successive transitions take place is very broad. The threshold at which a consensus is reached, $g_{N,2}=N/2$, is indeed much larger than the threshold at which the least attractive language disappears, $g_{N,N}=1/(N-1)$. The ratio between these two extreme thresholds reads $N(N-1)/2$." ], [ "We now turn to the general case of $N$ competing languages with arbitrary reduced attractivenesses $a_i$. Throughout the following, languages are numbered in order of decreasing attractivenesses, i.e.,", "We shall be interested mostly in the stationary state of the model. As already mentioned above, the number $K$ of surviving languages depends on model parameters in a non-trivial way. The $K$ surviving languages are always the most attractive ones (see Figure FIGREF29). The fractions $x_i$ of speakers of those languages, obeying the fixed-point equations (DISPLAY_FORM21), can be written in full generality as", "for $i=1,\\dots ,K$, with", "The existence of an explicit expression (DISPLAY_FORM36) for the solution of the fixed-point equations (DISPLAY_FORM21) in full generality is a consequence of their simple linear-minus-bilinear form, which also ensures the uniqueness of the attractor.", "The number $K$ of surviving languages is the largest such that the solution (DISPLAY_FORM36) obeys $x_i>0$ for $i=1,\\dots ,K$. Equivalently, $K$ is the largest integer in $1,\\dots ,N$ such that", "Every single one of the differences involved in the sum is positive, so that:", "From now on, we model attractivenesses as independent random variables. More precisely, we set", "where $w$ is the mean attractiveness, and the rescaled attractivenesses $\\xi _i$ are positive random variables drawn from some continuous distribution $f(\\xi )$ such that $\\left\\langle \\xi \\right\\rangle =1$. For any given instance of the model, i.e., any draw of the $N$ random variables $\\lbrace \\xi _i\\rbrace $, languages are renumbered in order of decreasing attractivenesses (see (DISPLAY_FORM35)).", "For concreteness we assume that $f(0)$ is non-vanishing and that $f(\\xi )$ falls off more rapidly than $1/\\xi ^3$ at large $\\xi $. 
These hypotheses respectively imply that small values of $\\xi $ are allowed with non-negligible probability and ensure the convergence of the second moment $\\left\\langle \\xi ^2\\right\\rangle =1+\\sigma ^2$, where $\\sigma ^2$ is the variance of $\\xi $.", "Some quantities of interest can be expressed in closed form for all language numbers $N$. One example is the consensus probability ${\\cal P}$, defined as the probability of reaching consensus, i.e., of having $K=1$ (see (DISPLAY_FORM39)). This reads", "We have", "for all $N\\ge 2$, where", "is the cumulative distribution of $\\xi $.", "In forthcoming numerical and analytical investigations we use the following distributions:", "We begin our exploration of the model by looking at the dynamics of a typical instance of the model with $N=10$ languages and a uniform distribution of attractivenesses with $w=0.3$. Figure FIGREF45 shows the time-dependent fractions of speakers of all languages, obtained by solving the rate equations (DISPLAY_FORM20) numerically, with the uniform initial condition $x_i(0)=1/10$ for all $i$. In this example there are $K=6$ surviving languages. The plotted quantities are observed to converge to their stationary values given by (DISPLAY_FORM36) for $i=1,\\dots ,6$, and to zero for $i=7,\\dots ,10$. They are ordered as the corresponding attractivenesses at all positive times, i.e., $x_1(t)>x_2(t)>\\dots >x_N(t)$. Some of the fractions however exhibit a non-monotonic evolution. This is the case for $i=5$ in the present example.", "Figure FIGREF48 shows the distribution $p_K$ of the number $K$ of surviving languages, for $N=10$ (top) and $N=40$ (bottom), and a uniform distribution of attractivenesses for four values of the product", "This choice is motivated by the analysis of Appendix SECREF5. Each dataset is the outcome of $10^7$ draws of the attractiveness profile. The widths of the distributions $p_K$ are observed to shrink as $N$ is increased, in agreement with the expected $1/\\sqrt{N}$ behavior stemming from the law of large numbers. The corresponding mean fractions $\\left\\langle K\\right\\rangle /N$ of surviving languages are shown in Table TABREF49 to converge smoothly to the asymptotic prediction (DISPLAY_FORM126), i.e.,", "with $1/N$ corrections.", "An overall picture of the dependence of the statistics of surviving languages on the mean attractiveness $w$ is provided by Figure FIGREF50, showing the mean number $\\left\\langle K\\right\\rangle $ of surviving languages against $w$, for $N=10$ and uniform and exponential attractiveness distributions. The plotted quantity decreases monotonically, starting from the value $\\left\\langle K\\right\\rangle =N$ in the absence of conversions ($w=0$), and converging to its asymptotic value $\\left\\langle K\\right\\rangle =1$ in the $w\\rightarrow \\infty $ limit, where consensus is reached with certainty. Its dependence on $w$ is observed to be steeper for the exponential distribution. These observations are corroborated by the asymptotic analysis of Appendix SECREF5. For the uniform distribution, (DISPLAY_FORM126) yields the scaling law $\\left\\langle K\\right\\rangle \\approx (N/w)^{1/2}$. Concomitantly, the consensus probability becomes sizeable for $w\\sim N$ (see (DISPLAY_FORM124)). For the exponential distribution, (DISPLAY_FORM130) yields the decay law $\\left\\langle K\\right\\rangle \\approx 1/w$, irrespective of $N$, and the consensus probability is strictly independent of $N$ (see (DISPLAY_FORM128))." 
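Before turning to spatial effects, we note that the statistics above are straightforward to reproduce numerically. The sketch below draws random attractivenesses (taking $a_i=w\xi _i$ with $\xi $ uniform on $(0,2)$, our reading of the parametrisation described above), computes the stationary state from the properties stated in the text (the survivors are the most attractive languages, their fractions depend linearly on their attractivenesses, and $K$ is the largest number of top languages keeping all these fractions positive), and averages the number of survivors over many draws. The closed-form expressions in the code are a reconstruction from this description, since the displayed equations themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def stationary_fractions(a):
    """Stationary fractions for reduced attractivenesses a (any order).
    Assumption, consistent with the text: surviving fractions are linear
    in a_i, x_i = a_i - Z, with Z fixed by the sum rule sum_i x_i = 1,
    and K is the largest number of top languages with all fractions > 0."""
    a = np.sort(np.asarray(a, dtype=float))[::-1]   # a_1 >= a_2 >= ...
    N = len(a)
    for K in range(N, 0, -1):
        Z = (a[:K].sum() - 1.0) / K
        x_top = a[:K] - Z
        if np.all(x_top > 0.0):
            x = np.zeros(N)
            x[:K] = x_top
            return x, K
    raise RuntimeError("no admissible stationary state found")

# Mean number of surviving languages for N languages of mean attractiveness w.
N, w, samples = 10, 0.3, 20_000
K_values = [stationary_fractions(w * rng.uniform(0.0, 2.0, size=N))[1]
            for _ in range(samples)]
print("mean number of survivors:", np.mean(K_values))
```

Estimates of this kind can be compared directly with the trends reported in Table TABREF49 and Figure FIGREF50.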
], [ "As mentioned in the Introduction, different competing languages may coexist in distinct geographical areas, because they are more or less favoured locally, despite the homogenising effects of migration and language shift BIBREF8, BIBREF9, BIBREF10. The aim of this section is to provide a quantitative understanding of this scenario. We continue to use the approach and the formalism of Section SECREF2. We however take the liberty of adopting slightly different notations, as both sections are entirely independent.", "We consider the dynamics of two competing languages in a structured territory comprising several distinct geographic areas. For definiteness, we assume that the population of each area is homogeneous. We restrict ourselves to the geometry of an array of $M$ areas, where individuals can only migrate along the links joining neighbouring areas, as shown in Figure FIGREF51. We assume for simplicity that the migration rates $\\gamma $ between neighbouring areas are uniform, so that in the very long run single individuals eventually perform random walks across the territory. The relative attractivenesses of both competing languages are distributed inhomogeneously among the various areas, so that the net conversion rate $C_m$ from language 2 to language 1 depends on the area number $m$. Finally, in order to emphasise the effects of spatial inhomogeneity on their own, we simplify the model by neglecting imbalance and thus set $q=1$.", "Let $X_m(t)$ and $Y_m(t)$ denote the respective numbers of speakers of language 1 and of language 2 in area number $m=1,\\dots ,M$ at time $t$. The dynamics of the model is defined by the coupled rate equations", "The extremal sites $m=1$ and $m=M$ have only one neighbour. The corresponding equations have to be modified accordingly. The resulting boundary conditions can be advantageously recast as", "and similarly for other quantities. These are known as Neumann boundary conditions.", "The total populations $P_m(t)=X_m(t)+Y_m(t)$ of the various areas obey", "irrespective of the conversion rates $C_m$. As a consequence, in the stationary state all areas have the same population, which reads $P_m=1$ in our reduced units. The corresponding stability matrix is given in (DISPLAY_FORM137). The population profile $P_m(t)$ therefore converges exponentially fast to its uniform stationary value, with unit relaxation time ($\\omega =1$).", "From now on we assume, for simplicity, that the total population of each area is unity in the initial state. This property is preserved by the dynamics, i.e., we have $P_m(t)=1$ for all $m$ and $t$, so that the rate equations (DISPLAY_FORM52) simplify to", "The rate equations (DISPLAY_FORM55) for the fractions $X_m(t)$ of speakers of language 1 in the various areas provide another example of the broad class of replicator equations (see e.g. BIBREF19, BIBREF20, BIBREF21). The above equations are the starting point of the subsequent analysis. In the situation where language 1 is uniformly favoured or disfavoured, so that the conversion rates are constant ($C_m=C$), the above rate equations boil down to the discrete Fisher-Kolmogorov-Petrovsky-Piscounov (FKPP) equation BIBREF29, BIBREF30, which is known to exhibit traveling fronts, just as the well-known FKPP equation in the continuum BIBREF31, BIBREF32. In the present context, the focus will however be on stationary solutions on finite arrays, obeying" ], [ "We begin with the case of two geographic areas connected by a single link. 
The problem is simple enough to allow for an explicit exposition of its full solution. The rate equations (DISPLAY_FORM55) become", "Because of the migration fluxes, for any non-zero $\\gamma $ it is impossible for any of the languages to become extinct in one area and survive in the other one. The only possibility is that of a uniform consensus, where one and the same language survives in all areas. The consensus state where language 1 survives is described by the stationary solution $X_1=X_2=1$. The corresponding stability matrix is", "where $\\mathop {{\\rm diag}}(\\dots )$ denotes a diagonal matrix (whose entries are listed), whereas ${\\Delta }_2$ is defined in (DISPLAY_FORM135). The stability condition amounts to", "Similarly, the consensus state where language 2 survives is described by the stationary solution $X_1=X_2=0$. The corresponding stability matrix is", "The conditions for the latter to be stable read", "Figure FIGREF66 shows the phase diagram of the model in the $C_1$–$C_2$ plane for $\\gamma =1$. Region I1 is the consensus phase where language 1 survives. It is larger than the quadrant where this language is everywhere favoured (i.e., $C_1$ and $C_2$ are positive), as its boundary (red curve) reads $C_1C_2+\\gamma (C_1+C_2)=0$. Similarly, region I2 is the consensus phase where language 2 survives. It is larger than the quadrant where this language is everywhere favoured (i.e., $C_1$ and $C_2$ are negative), as its boundary (blue curve) reads $C_1C_2-\\gamma (C_1+C_2)=0$. The regions marked IIA and IIB are coexistence phases. These phases are located symmetrically around the line $C_1+C_2=0$ (black dashed line) where none of the languages is globally favoured. There, the fractions $X_1$ and $X_2$ of speakers of language 1 in both areas vary continuously between zero on the blue curve and unity on the red one, according to", "with", "We have therefore", "all over the coexistence phases IIA and IIB. The right-hand-side equals 0 on the blue curve, 1 on the black dashed line, and 2 on the red curve." ], [ "From now on we consider the general situation of $M$ geographic areas, as shown in Figure FIGREF51. The basic properties of the model can be inferred from the case of two areas, studied in section SECREF57. In full generality, because of migration fluxes, it is impossible for any of the languages to become extinct in some areas and survive in some other ones. The only possibility is that of a uniform consensus, where one and the same language survives in all areas.", "The consensus state where language 1 survives is described by the uniform stationary solution where $X_m=1$ for all $m=1,\\dots ,M$. The corresponding stability matrix is", "Similarly, the consensus state where language 2 survives corresponds to the stationary solution where $X_m=0$ for all $m=1,\\dots ,M$. The corresponding stability matrix is", "These expressions respectively generalise (DISPLAY_FORM59) and (DISPLAY_FORM61).", "If all the conversion rates $C_m$ vanish, both the above matrices read $-\\gamma {\\Delta }_M$, whose spectrum comprises one vanishing eigenvalue (see (DISPLAY_FORM136)). 
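These spectra are easy to examine numerically. In the sketch below the stability matrices are assembled as ${S}^{(1)}=-\gamma \Delta _M-\mathop {{\rm diag}}(C_m)$ and ${S}^{(0)}=-\gamma \Delta _M+\mathop {{\rm diag}}(C_m)$, with $\Delta _M$ the Laplacian of the open chain with Neumann boundary conditions; this explicit form is an assumption, reconstructed from the two-area case and from the fact that both matrices reduce to $-\gamma \Delta _M$ when all $C_m$ vanish. The sign of the largest eigenvalue then decides whether a given consensus is linearly stable, and a simple bisection locates the threshold migration rate for a given profile.

```python
import numpy as np

def neumann_laplacian(M):
    """Laplacian Delta_M of the open chain with Neumann boundary conditions."""
    D = 2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
    D[0, 0] = D[-1, -1] = 1.0
    return D

def top_eigenvalue_consensus1(C, gamma):
    """Largest eigenvalue of S^(1) = -gamma*Delta_M - diag(C_m); the consensus
    where language 1 survives is stable when this is negative (assumed form)."""
    S1 = -gamma * neumann_laplacian(len(C)) - np.diag(C)
    return np.linalg.eigvalsh(S1).max()

# Step profile: language 1 favoured on K sites, language 2 on L sites.
K, L = 12, 8
C = np.array([1.0] * K + [-1.0] * L)

# Bisection on gamma for marginal stability of the consensus on language 1.
lo, hi = 1.0, 1000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if top_eigenvalue_consensus1(C, mid) > 0 else (lo, mid)
print("threshold migration rate gamma_c ≈", 0.5 * (lo + hi))
```

For the step profile used here, the bisection output can be compared with the exact threshold quoted below for $M=20$, $K=12$ and $L=8$.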
In the regime where all the conversion rates $C_m$ are small with respect to $\\gamma $, perturbation theory tells us that the largest eigenvalues of ${S}_M^{(0)}$ and ${S}_M^{(1)}$ respectively read $\\overline{C}$ and $-\\overline{C}$, to leading order, where", "We therefore predict that the average conversion rate $\\overline{C}$ determines the fate of the system in the regime where conversion rates are small with respect to $\\gamma $. If language 1 is globally favoured, i.e., $\\overline{C}>0$, the system reaches the consensus where language 1 survives, and vice versa.", "In the generic situation where the conversion rates $C_m$ are comparable to $\\gamma $, their dispersion around their spatial average $\\overline{C}$ broadens the spectra of the matrices ${S}_M^{(1)}$ and ${S}_M^{(0)}$. As a consequence, the condition $\\overline{C}>0$ (resp. $\\overline{C}<0$) is necessary, albeit not sufficient, for the consensus where language 1 (resp. language 2) survives to be stable.", "In the following we shall successively consider ordered attractiveness profiles in Section SECREF71 and random ones in Section SECREF84." ], [ "This section is devoted to a simple situation where the attractiveness profiles of both languages are ordered spatially. More specifically, we consider the case where language 1 is favoured in the $K$ first (i.e., leftmost) areas, whereas language 2 is favoured in the $L$ last (i.e., rightmost) areas, with $K\\ge L$ and $K+L=M$. For the sake of simplicity, we choose to describe this situation by conversion rates that have unit magnitude, as shown in Figure FIGREF73:", "The symmetric situation where $M$ is even and $K=L=M/2$, so that $\\overline{C}=0$, can be viewed as a generalisation of the case of two geographic areas, studied in Section SECREF57, for $C_1+C_2=0$, i.e., along the black dashed line of Figure FIGREF66. Both languages play symmetric roles, so that no language is globally preferred, and no consensus can be reached. As a consequence, both languages survive everywhere, albeit with non-trivial spatial profiles, which can be thought of as avatars of the FKPP traveling fronts mentioned above, rendered stationary by being pinned by boundary conditions. The upper panel of Figure FIGREF76 shows the stationary fraction $X_m$ of speakers of language 1 against area number, for $M=20$ (i.e., $K=L=10$) and several $\\gamma $. The abscissa $m-1/2$ is chosen in order to have a symmetric plot. As one might expect, each language is preferred in the areas where it is favoured, i.e., we have $X_m>1/2$ for $m=1,\\dots ,K$, whereas $X_m<1/2$ for $m=K+1,\\dots ,M$. Profiles get smoother as the migration rate $\\gamma $ is increased. The width $\\xi $ of the transition region is indeed expected to grow as", "This scaling law is nothing but the large $\\gamma $ behaviour of the exact dispersion relation", "(see (DISPLAY_FORM148)) between $\\gamma $ and the decay rate $\\mu $ such that either $X_m$ or $1-X_m$ falls off as ${\\rm e}^{\\pm m\\mu }$, with the natural identification $\\xi =1/\\mu $.", "The asymmetric situation where $K>L$, so that $\\overline{C}=(K-L)/M>0$, implying that language 1 is globally favoured, is entirely different. The system indeed reaches a consensus state where the favoured language survives, whenever the migration rate $\\gamma $ exceeds some threshold $\\gamma _c$. This threshold, corresponding to the consensus state becoming marginally stable, only depends on the integers $K$ and $L$. 
It is derived in Appendix SECREF6 and given by the largest solution of (DISPLAY_FORM153).", "This is illustrated in the lower panel of Figure FIGREF76, showing $X_m$ against $m-1/2$ for $K=12$ and $L=8$, and the same values of $\\gamma $ as on the upper panel. The corresponding threshold reads $\\gamma _c=157.265$. The whole profile shifts upwards while it broadens as $\\gamma $ is increased. It tends uniformly to unity as $\\gamma $ tends to $\\gamma _c$, demonstrating the continuous nature of the transition where consensus is formed.", "The threshold migration rate $\\gamma _c$ assumes a scaling form in the regime where $K$ and $L$ are large and comparable. Setting", "so that the excess fraction $f$ identifies with the average conversion rate $\\overline{C}$, the threshold rate $\\gamma _c$ grows quadratically with the system size $M$, according to", "where $g(f)$ is the smallest positive solution of the implicit equation", "which is a rescaled form of (DISPLAY_FORM153).", "The quadratic growth law (DISPLAY_FORM78) is a consequence of the diffusive nature of migrations. The following limiting cases deserve special mention.", "For $f\\rightarrow 0$, i.e., $K$ and $L$ relatively close to each other ($K-L\\ll M$), we have", "yielding to leading order", "For $f\\rightarrow 1$, i.e., $L\\ll K$, we have $g(f)\\approx \\pi /(4(1-f))$, up to exponentially small corrections, so that", "The situation considered in the lower panel of Figure FIGREF76, i.e., $M=20$, $K=12$ and $L=8$, corresponds to $f=1/5$, hence $g=0.799622814\\dots $, so that", "This scaling result predicts $\\gamma _c\\approx 156.397$ for $M=20$, a good approximation to the exact value $\\gamma _c=157.265$." ], [ "We now consider the situation of randomly disordered attractiveness profiles. The conversion rates $C_m$ are modelled as independent random variables drawn from some symmetric distribution $f(C)$, such that $\\left\\langle C_m\\right\\rangle =0$ and $\\left\\langle C_m^2\\right\\rangle =w^2$.", "The first quantity we will focus on is the consensus probability ${\\cal P}$. It is clear from a dimensional analysis of the rate equations (DISPLAY_FORM56) that ${\\cal P}$ depends on the ratio $\\gamma /w$, the system size $M$, and the distribution $f(C)$. Furthermore, ${\\cal P}$ is expected to increase with $\\gamma /w$. It can be estimated as follows in the limiting situations where $\\gamma /w$ is either very small or very large.", "In the regime where $\\gamma \\ll w$ (e.g. far from the center in Figure FIGREF66), conversion effects dominate migration effects. There, a consensus where language 1 (resp. language 2) survives can only be reached if all conversion rates $C_m$ are positive (resp. negative). The total consensus probability thus scales as", "Consensus is therefore highly improbable in this regime. In other words, coexistence of both languages is overwhelmingly the rule.", "In the opposite regime where $\\gamma \\gg w$ (e.g. in the vicinity of the center in Figure FIGREF66), migration effects dominate conversion effects. There, we have seen in Section SECREF67 that the average conversion rate defined in (DISPLAY_FORM70) essentially determines the fate of the system. If language 1 is globally favoured, i.e., $\\overline{C}>0$, then the system reaches the uniform consensus where language 1 survives, and vice versa. Coexistence is therefore rare in this regime, as it requires $\\overline{C}$ to be atypically small. 
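The consensus probability itself can be estimated by direct simulation: draw a random profile of conversion rates, integrate the rate equations for the array, and record whether one language takes over everywhere. A minimal sketch follows; the right-hand side $\gamma (X_{m+1}-2X_m+X_{m-1})+C_mX_m(1-X_m)$ with Neumann boundary conditions is our reading of the rate equations for the fractions $X_m$ (they are stated to reduce to the discrete FKPP equation for uniform $C_m$), the symmetric initial condition $X_m=1/2$ is an arbitrary but natural choice, and the consensus criterion used here (all $X_m$ within a small tolerance of 0 or 1 at long times) is a practical proxy rather than the precise criterion of the Appendix.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve(C, gamma, T=200.0):
    """Integrate dX_m/dt = gamma*(X_{m+1} - 2 X_m + X_{m-1}) + C_m X_m (1 - X_m)
    on an open chain with Neumann boundaries, starting from X_m = 1/2."""
    M = len(C)
    dt = min(0.01, 0.4 / gamma)          # keep the explicit scheme stable
    X = np.full(M, 0.5)
    for _ in range(int(T / dt)):
        lap = np.empty(M)
        lap[1:-1] = X[2:] - 2.0 * X[1:-1] + X[:-2]
        lap[0] = X[1] - X[0]             # Neumann: X_0 = X_1
        lap[-1] = X[-2] - X[-1]          # Neumann: X_{M+1} = X_M
        X = np.clip(X + dt * (gamma * lap + C * X * (1.0 - X)), 0.0, 1.0)
    return X

def consensus_probability(M, gamma, w=1.0, samples=100, tol=1e-3):
    hits = 0
    for _ in range(samples):
        C = rng.normal(0.0, w, size=M)   # random profile of conversion rates
        if np.all(evolve(C, gamma) > 1.0 - tol) or np.all(evolve(C, gamma) < tol):
            hits += 1
    return hits / samples

M = 20
for gamma in [2.0, 20.0, 60.0]:
    print(f"gamma = {gamma:5.1f}:  estimated P(consensus) ≈ "
          f"{consensus_probability(M, gamma):.2f}")
```

Scanning $\gamma $ at fixed $M$ and $w$ in this way gives a direct numerical handle on the consensus probability discussed below.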
The probability ${\\cal Q}$ for this to occur, to be identified with $1-{\\cal P}$, has been given a precise definition in Appendix SECREF6 by means of the expansion (DISPLAY_FORM143) of $D_M=\\det {S}_M^{(1)}$ as a power series in the $C_m$, and estimated within a simplified Gaussian setting. In spite of the heuristic character of its derivation, the resulting estimate (DISPLAY_FORM147) demonstrates that the consensus probability scales as", "all over the regime where the ratio $\\gamma /w$ and the system size $M$ are both large. Furthermore, taking (DISPLAY_FORM147) literally, we obtain the following heuristic prediction for the finite-size scaling function:", "The scaling result (DISPLAY_FORM86) shows that the scale of the migration rate $\\gamma $ which is relevant to describe the consensus probability for a typical disordered profile of attractivenesses reads", "This estimate grows less rapidly with $M$ than the corresponding threshold for ordered profiles, which obeys a quadratic growth law (see (DISPLAY_FORM78)). The exponent $3/2$ of the scaling law (DISPLAY_FORM88) can be put in perspective with the anomalous scaling of the localisation length in one-dimensional Anderson localisation near band edges. There is indeed a formal analogy between the stability matrices of the present problem and the Hamiltonian of a tight-binding electron in a disordered potential, with the random conversion rates $C_m$ replacing the disordered on-site energies. For the tight-binding problem, the localisation length is known to diverge as $\\xi \\sim 1/w^2$ in the bulk of the spectrum, albeit only as $\\xi \\sim 1/w^{2/3}$ in the vicinity of band edges BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37. Replacing $\\xi $ by the system size $M$ and remembering that $w$ stands for $w/\\gamma $, we recover (DISPLAY_FORM88). The exponent $3/2$ is therefore nothing but the inverse of the exponent $2/3$ of anomalous band-edge localisation.", "Figure FIGREF89 shows a finite-size scaling plot of the consensus probability ${\\cal P}$ against $x=\\gamma /M^{3/2}$. Data correspond to arrays of length $M=20$ with uniform and Gaussian distributions of conversion rates with $w=1$. Each data point is the outcome of $10^6$ independent realisations. The thin black curve is a guide to the eye, suggesting that the finite-size scaling function $\\Phi $ is universal, i.e., independent of details of the conversion rate distribution. It has indeed been checked that the weak residual dependence of data points on the latter distribution becomes even smaller as $M$ is further increased. The full green curve shows the heuristic prediction (DISPLAY_FORM87), providing a semi-quantitative picture of the finite-size scaling function. For instance, consensus is reached with probability ${\\cal P}=1/2$ and ${\\cal P}=2/3$ respectively for $x\\approx 0.18$ and $x\\approx 0.33$, according to actual data, whereas (DISPLAY_FORM87) respectively predicts $x=1/\\sqrt{12}=0.288675\\dots $ and $x=1/2$.", "Besides the value of the consensus probability ${\\cal P}$, the next question is what determines whether or not the system reaches consensus. In Section SECREF67 it has been demonstrated that the average conversion rate $\\overline{C}$ defined in (DISPLAY_FORM70) essentially determines the fate of the system in the regime where migration effects dominate conversion effects. 
It has also been shown that the consensus denoted by I1, where language 1 survives, can only be stable for $\\overline{C}>0$, whereas the consensus denoted by I2, where language 2 survives, can only be stable for $\\overline{C}<0$. The above statements are made quantitative in Figure FIGREF90, showing the probability distribution of the average conversion rate $\\overline{C}$, for a Gaussian distribution of conversion rates with $w=1$. The total (i.e., unconditioned) distribution (black curves) is Gaussian. Red and blue curves show the distributions conditioned on consensus. They are indeed observed to live entirely on $\\overline{C}>0$ for I1 and on $\\overline{C}<0$ for I2. Finally, the distributions conditioned on coexistence (green curves, denoted by II) exhibit narrow symmetric shapes around the origin. Values of the migration rate $\\gamma $ are chosen so as to have three partial histograms with equal weights, i.e., a consensus probability ${\\cal P}=2/3$. This fixes $\\gamma \\approx 0.351$ for $M=2$ (top) and $\\gamma \\approx 10.22$ for $M=10$ (bottom)." ], [ "An area of interest that is common to both physicists and linguists concerns the evolution of competing languages. It was long assumed that such competition would result in the dominance of one language above all its competitors, until some recent work hinted that coexistence might be possible under specific circumstances. We argue here that coexistence of two or more competing languages can result from two symmetry-breaking mechanisms – due respectively to imbalanced internal dynamics and spatial heterogeneity – and engage in a quantitative exploration of the circumstances which lead to this coexistence. In this work, both symmetry-breaking scenarios are dealt with on an equal footing.", "In the first case of competing languages in a single geographical area, our introduction of an interpolation parameter $q$, which measures the amount of imbalance in the internal dynamics, turns out to be crucial for the investigation of language coexistence. It is conceptually somewhat subtle, since it appears only in the saturation terms in the coupled logistic equations used here to describe language competition; in contrast to the conversion terms (describing language shift from a less to a more favoured language), its appearance is symmetric with respect to both languages. For multiply many competing languages, the ensuing rate equations for the fractions of speakers are seen to bear a strong resemblance to a broad range of models used in theoretical ecology, including Lotka-Volterra or predator-prey systems.", "We first consider the case where the $N$ languages in competition in a single area have equally spaced attractivenesses. This simple situation allows for an exact characterisation of the stationary state. The range of attractivenesses is measured by the mean attractiveness $g$. As this parameter is increased, the number $K$ of surviving languages decreases progressively, as the least favoured languages successively become extinct at threshold values of $g$. Importantly, the range of values of $g$ between the start of the disappearances and the appearance of consensus grows proportionally to $N^2$. 
There is therefore a substantial amount of coexistence between languages that are significantly attractive.", "In the general situation, where the attractivenesses of the competing languages are modelled as random variables with an arbitrary distribution, the outcomes of numerical studies at finite $N$ are corroborated by a detailed asymptotic analysis in the regime of large $N$. One of the key results is that the quantity $W=Nw$ (the product of the number of languages $N$ with the mean attractiveness $w$) determines many quantities of interest, including the mean fraction $R=\\left\\langle K\\right\\rangle /N$ of surviving languages. The relation between $W$ and $R$ is however non-universal, as it depends on the full attractiveness distribution. This non-universality is most prominent in the regime where the mean attractiveness is large, so that only the few most favoured languages survive in the stationary state. The number of such survivors is found to obey a scaling law, whose non-universal critical exponent is dictated by the specific form of the attractiveness distribution near its upper edge.", "As far as symmetry breaking via spatial heterogeneity is concerned, we consider the paradigmatic case of two competing languages in a linear array of $M$ geographic areas, whose neighbours are linked via a uniform migration rate $\\gamma $. In the simplest situation of two areas, we determine the full phase diagram of the model as a function of $\\gamma $ as well as the conversion rates ruling language shift in each area. This allows us to associate different regions of phase space with either consensus or coexistence. Our analysis is then generalised to longer arrays of $M$ linked geographical regions. We first consider ordered attractiveness profiles, where language 1 is favoured in the $K$ leftmost areas, while language 2 is favoured in the $L$ rightmost ones. If the two blocks are of equal size so that no language is globally preferred, coexistence always results; however, the spatial profiles of the language speakers themselves are rather non-trivial. For blocks of unequal size, there is a transition from a situation of coexistence at low migration rates to a situation of uniform consensus at high migration rates, where the language favoured in the larger block is the only survivor in all areas. The critical migration rate at this transition grows as $M^2$. We next investigate disordered attractiveness profiles, where conversion rates are modelled as random variables. There, the probability of observing a uniform consensus is given by a universal scaling function of $x=\\gamma /(M^{3/2}w)$, where $w$ is the width of the symmetric distribution of conversion rates.", "The ratio between migration and conversion rates beyond which there is consensus – either with certainty or with a sizeable probability – grows with the number of geographic areas as $M^2$ for ordered profiles of attractivenesses, and as $M^{3/2}$ for disordered ones. The first exponent is a consequence of the diffusive nature of migrations, whereas the second one has been derived in Appendix SECREF134 and related to anomalous band-edge scaling in one-dimensional Anderson localisation. 
If geographical areas were arranged according to a more complex geometric structure, these exponents would respectively read $2d/d_s$ and $(4-d_s)/(2d_s)$, with $d$ and $d_s$ being the fractal and spectral dimensions of the underlying structure (see BIBREF38, BIBREF39, and BIBREF40, BIBREF41 for reviews).", "Finally, we remark on another striking formal analogy – that between the rate equations (DISPLAY_FORM20) presented here, and those of a spatially extended model of competitive dynamics BIBREF42, itself inspired by a model of interacting black holes BIBREF43. In the latter, the non-trivial patterns of survivors on various networks and other geometrical structures were a particular focus of investigation, and led to the unearthing of universal behaviour. We believe that a network model of competing languages which combines both the symmetry-breaking scenarios discussed in this paper, so that every node corresponds to a geographical area with its own imbalanced internal dynamics, might lead to the discovery of similar universalities.", "AM warmly thanks the Leverhulme Trust for the Visiting Professorship that funded this research, as well as the Faculty of Linguistics, Philology and Phonetics at the University of Oxford, for their hospitality.", "Both authors contributed equally to the present work, were equally involved in the preparation of the manuscript, and have read and approved the final manuscript." ], [ "This Appendix is devoted to an analytical investigation of the statistics of surviving languages in a single geographic area, in the regime where the numbers $N$ of competing languages is large.", "The properties of the attractiveness distribution of the languages are key to determining whether coexistence or consensus will prevail. In particular the transition to consensus depends critically, and non-universally, on the way in which the attractiveness distribution decays, as will be shown below.", "Statistical fluctuations between various instances of the model become negligible for large $N$, so that sharp (i.e., self-averaging) expressions can be obtained for many quantities of interest.", "Let us begin with the simplest situation where all languages survive. When the number $N$ of competing languages is large, the condition for this to occur assumes a simple form. Consider the expression (DISPLAY_FORM36) for $x_N$. The law of large numbers ensures that the sum $S$ converges to", "whereas $a_N$ is relatively negligible. The condition that all the $N$ competing languages survive therefore takes the form of a sharp inequality at large $N$, i.e.,", "All over this regime, the expression for $x_N$ simplifies to", "The above analysis can be extended to the general situation where the numbers $N$ of competing languages and $K$ of surviving ones are large and comparable, with the fraction of surviving languages,", "taking any value in the range $0<R<1$.", "The rescaled attractiveness of the least favoured surviving language, namely", "turns out to play a key role in the subsequent analysis. Let us introduce for further reference the truncated moments ($k=0,1,2$)", "First of all, the relationship between $R$ and $\\eta $ becomes sharp in the large-$N$ regime. We have indeed", "The limits of all quantities of interest can be similarly expressed in terms of $\\eta $. We have for instance", "for the sum introduced in (DISPLAY_FORM37). 
The marginal stability condition, namely that language number $K$ is at the verge of becoming extinct, translates to", "The asymptotic dependence of the fraction $R$ of surviving languages on the rescaled mean attractiveness $W$ is therefore given in parametric form by (DISPLAY_FORM97) and (DISPLAY_FORM99). The identity", "demonstrates that $R$ is a decreasing function of $W$, as it should be.", "When the parameter $W$ reaches unity from above, the model exhibits a continuous transition from the situation where all languages survive. The parameter $\\eta $ vanishes linearly as", "with unit prefactor, irrespective of the attractiveness distribution. The fraction of surviving languages departs linearly from unity, according to", "In the regime where $W\\gg 1$, the fraction $R$ of surviving languages is expected to fall off to zero. As a consequence of (DISPLAY_FORM97), $R\\ll 1$ corresponds to the parameter $\\eta $ being close to the upper edge of the attractiveness distribution $f(\\xi )$. This is to be expected, as the last surviving languages are the most attractive ones. As a consequence, the form of the relationship between $W$ and $R$ for $W\\gg 1$ is highly non-universal, as it depends on the behavior of the distribution $f(\\xi )$ near its upper edge. It turns out that the following two main classes of attractiveness distributions have to be considered.", "Class 1: Power law at finite distance.", "Consider the situation where the distribution $f(\\xi )$ has a finite upper edge $\\xi _0$, and either vanishes or diverges as a power law near this edge, i.e.,", "The exponent $\\alpha $ is positive. The density $f(\\xi )$ diverges near its upper edge $\\xi _0$ for $0<\\alpha <1$, whereas it vanishes near $\\xi _0$ for $\\alpha >1$, and takes a constant value $f(\\xi _0)=A$ for $\\alpha =1$.", "In the relevant regime where $\\eta $ is close to $\\xi _0$, the expressions (DISPLAY_FORM97) and (DISPLAY_FORM99) simplify to", "Eliminating $\\eta $ between both above estimates, we obtain the following power-law relationship between $W$ and $R$:", "In terms of the original quantities $K$ and $w$, the above result reads", "Setting $K=1$ in this estimate, we predict that the consensus probability ${\\cal P}$ becomes appreciable when", "Class 2: Power law at infinity.", "Consider now the situation where the distribution extends up to infinity, and falls off as a power law, i.e.,", "The exponent $\\beta $ is larger than 2, in order for the first two moments of $\\xi $ to be convergent.", "In the relevant regime where $\\eta $ is large, the expressions (DISPLAY_FORM97) and (DISPLAY_FORM99) simplify to", "Eliminating $\\eta $ between both above estimates, we obtain the following power-law relationship between $W$ and $R$:", "In terms of the original quantities $K$ and $w$, the above result reads", "Setting $K=1$ in this estimate, we predict that the consensus probability ${\\cal P}$ becomes appreciable when", "We now summarise the above discussion. In the regime where $W\\gg 1$, the fraction $R$ of surviving languages falls off as a power law of the form", "where the positive exponent $\\lambda $ varies continuously, according to whether the distribution of attractivenesses extends up to a finite distance or infinity (see (DISPLAY_FORM106), (DISPLAY_FORM112)):", "In the marginal situation between both classes mentioned above, comprising e.g. 
the exponential distribution, the decay exponent sticks to its borderline value", "The decay law $R\\sim 1/W$ might however be affected by logarithmic corrections.", "Another view of the above scaling laws goes as follows. When the number of languages $N$ is large, the number of surviving languages decreases from $K=N$ to $K=1$ over a very broad range of mean attractivenesses. The condition for all languages to survive (see (DISPLAY_FORM92)) sets the beginning of this range as", "The occurrence of a sizeable consensus probability ${\\cal P}$ sets the end of this range as", "where the exponent $\\mu >-1/2$ varies continuously, according to (see (DISPLAY_FORM108), (DISPLAY_FORM114)):", "In the marginal situation between both classes, the above exponent sticks to its borderline value", "The extension of the dynamical range, defined as the ratio between both scales defined above, diverges as", "We predict in particular a linear divergence for the exponential distribution ($\\mu =0$) and a quadratic divergence for the uniform distribution ($\\mu =1$). This explains the qualitative difference observed in Figure FIGREF50. The slowest growth of the dynamical range is the square-root law observed for distributions falling off as a power-law with $\\beta \\rightarrow 2$, so that $\\mu =-1/2$.", "To close, let us underline that most of the quantities met above assume simple forms for the uniform and exponential distributions (see (DISPLAY_FORM44)).", "Uniform distribution.", "The consensus probability (see (DISPLAY_FORM42)) reads", "For large $N$, this becomes ${\\cal P}\\approx \\exp (-N/(2w))$, namely a function of the ratio $w/N$, in agreement with (DISPLAY_FORM119) and (DISPLAY_FORM120), with exponent $\\mu =1$, since $\\alpha =1$.", "The truncated moments read", "We thus obtain", "with exponent $\\lambda =1/2$, in agreement with (DISPLAY_FORM106) and (DISPLAY_FORM116) for $\\alpha =1$.", "Exponential distribution.", "The consensus probability reads", "irrespective of $N$, in agreement with (DISPLAY_FORM119), with exponent $\\mu =0$ (see (DISPLAY_FORM121)).", "The truncated moments read", "We thus obtain", "with exponent $\\lambda =1$, in agreement with (DISPLAY_FORM117)." ], [ "This Appendix is devoted to stability matrices and their spectra. Let us begin by reviewing some general background (see e.g. BIBREF44 for a comprehensive overview). Consider an autonomous dynamical system defined by a vector field ${E}({x})$ in $N$ dimensions, i.e., by $N$ coupled first-order equations of the form", "with $m,n=1,\\dots ,N$, where the right-hand sides depend on the dynamical variables $\\lbrace x_n(t)\\rbrace $ themselves, but not explicitly on time.", "Assume the above dynamical system has a fixed point $\\lbrace x_m\\rbrace $, such that $E_m\\lbrace x_n\\rbrace =0$ for all $m$. Small deviations $\\lbrace \\delta x_m(t)\\rbrace $ around the fixed point $\\lbrace x_m\\rbrace $ obey the linearised dynamics given by the stability matrix ${S}$, i.e., the $N\\times N$ matrix defined by", "where right-hand sides are evaluated at the fixed point. The fixed point is stable, in the strong sense that small deviations fall off exponentially fast to zero, if all eigenvalues $\\lambda _a$ of ${S}$ have negative real parts. In this case, if all the $\\lambda _a$ are real, their opposites $\\omega _a=-\\lambda _a>0$ are the inverse relaxation times of the linearised dynamics. 
In particular, the opposite of the smallest eigenvalue, simply denoted by $\\omega $, characterises exponential convergence to the fixed point for a generic initial state. If some of the $\\lambda _a$ have non-zero imaginary parts, convergence is oscillatory.", "The analysis of fixed points and bifurcations in low-dimensional Lotka-Volterra and replicator equations has been the subject of extensive investigations BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28 (see also BIBREF19, BIBREF20, BIBREF21)." ], [ "The remainder of this Appendix is devoted to the stability matrices involved in the array models considered in Section SECREF3, for an arbitrarily large number $M$ of geographical areas. All those stability matrices are related to the symmetric $M\\times M$ matrix", "representing (minus) the Laplacian operator on a linear array of $M$ sites, with Neumann boundary conditions. References BIBREF45, BIBREF46 provide reviews on the Laplacian and related operators on graphs.", "The eigenvalues $\\lambda _a$ of ${\\Delta }_M$ and the corresponding normalised eigenvectors ${\\phi }_a$, such that ${\\Delta }_M{\\phi }_a=\\lambda _a{\\phi }_a$ and ${\\phi }_a\\cdot {\\phi }_b=\\delta _{ab}$, read", "($a=0,\\dots ,M-1$). The vanishing eigenvalue $\\lambda _0=0$ corresponds to the uniform eigenvector $\\phi _{0,m}=1/\\sqrt{M}$.", "Let us begin by briefly considering the simple example of the stability matrix", "of the rate equations (DISPLAY_FORM54) for the total populations $P_m(t)$. Its eigenvalues are $-1-\\gamma \\lambda _a$. The smallest of them is $-1$, so that the inverse relaxation time is given by $\\omega =1$, as announced below (DISPLAY_FORM54).", "Let us now consider the stability matrices", "respectively defined in (DISPLAY_FORM68) and (DISPLAY_FORM69), and corresponding to both uniform consensus states for an arbitrary profile of conversion rates $C_m$. The ensuing stability conditions have been written down explicitly in (DISPLAY_FORM60) and (DISPLAY_FORM62) for $M=2$. It will soon become clear that it is virtually impossible to write them down for an arbitrary size $M$. Some information can however be gained from the calculation of the determinants of the above matrices. They only differ by a global sign change of all the conversion rates $C_m$, so that it is sufficient to consider ${S}_M^{(1)}$. It is a simple matter to realise that its determinant reads", "where $u_m$ is a generalised eigenvector solving the following Cauchy problem:", "with initial conditions $u_0=u_1=1$. We thus obtain recursively", "and so on. The expression (DISPLAY_FORM141) for $D_2$ agrees with the second of the conditions (DISPLAY_FORM60) and with the equation of the red curve in Figure FIGREF66, as should be. The expression () for $D_3$ demonstrates that the complexity of the stability conditions grows rapidly with the system size $M$." ], [ "In the case of random arrays, considered in Section SECREF84, the conversion rates $C_m$ are independent random variables such that $\\left\\langle C_m\\right\\rangle =0$ and $\\left\\langle C_m^2\\right\\rangle =w^2$.", "The regime of most interest is where the conversion rates $C_n$ are small with respect to $\\gamma $. In this regime, the determinant $D_M$ can be expanded as a power series in the conversion rates. The $u_m$ solving the Cauchy problem (DISPLAY_FORM140) are close to unity. 
Setting", "where the $u_m^{(1)}$ are linear and the $u_m^{(2)}$ quadratic in the $C_n$, we obtain after some algebra", "where", "are respectively linear and quadratic in the $C_n$. We have", "In Section SECREF84 we need an estimate of the probability ${\\cal Q}$ that $\\overline{C}=X/M$ is atypically small. Within the present setting, it is natural to define the latter event as $\\left|X\\right|<\\left|Y\\right|$. The corresponding probability can be worked out provided we make the ad hoc simplifying assumptions – that definitely do not hold in the real world – that $X$ and $Y$ are Gaussian and independent. Within this framework, the complex Gaussian random variable", "has an isotropic density in the complex plane. We thus obtain" ], [ "The aim of this last section is to investigate the spectrum of the stability matrix ${S}_M^{(1)}$ associated with the ordered profile of conversion rates given by (DISPLAY_FORM72).", "In this case, the generalised eigenvector $u_m$ solving the Cauchy problem (DISPLAY_FORM140) can be worked out explicitly. We have $C_m=1$ for $m=1,\\dots ,K$, and therefore $u_m=a{\\rm e}^{m\\mu }+b{\\rm e}^{-m\\mu }$, where $\\mu >0$ obeys the dispersion relation", "The initial conditions $u_0=u_1=1$ fix $a$ and $b$, and so", "Similarly, we have $C_m=-1$ for $m=K+\\ell $, with $\\ell =1,\\dots ,L$, and therefore $u_m=\\alpha {\\rm e}^{{\\rm i}\\ell q}+\\beta {\\rm e}^{-{\\rm i}\\ell q}$, where $0<q<\\pi $ obeys the dispersion relation", "Matching both solutions for $m=K$ and $K+1$ fixes $\\alpha $ and $\\beta $, and so", "Inserting the latter result into (DISPLAY_FORM139), we obtain the following expression for the determinant of ${S}_M^{(1)}$, with $M=K+L$:", "The vanishing of the above expression, i.e.,", "signals that one eigenvalue of the stability matrix ${S}^{(1)}$ vanishes. In particular, the consensus state where language 1 survives becomes marginally stable at the threshold migration rate $\\gamma _c$, where the largest eigenvalue of ${S}^{(1)}$ vanishes. Equation (DISPLAY_FORM153) amounts to a polynomial equation of the form $P_{K,L}(\\gamma )=0$, where the polynomial $P_{K,L}$ has degree $K+L-1=M-1$. All its zeros are real, and $\\gamma _c$ is the largest of them. The first of these polynomials read" ] ] }
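The appendix text above builds its stability matrices on the discrete Laplacian of a linear array of $M$ areas with Neumann boundary conditions, whose displayed spectrum is not reproduced in this dump. As a quick numerical check of that standard ingredient, the sketch below constructs the path-graph Laplacian with free ends and compares its eigenvalues to the textbook formula $4\sin^2(a\pi/(2M))$, $a=0,\dots,M-1$; the sign and normalisation conventions here are mine and need not coincide with the paper's $\Delta_M$.

```python
import numpy as np

M = 20
# Path-graph Laplacian with Neumann (free) ends: degree matrix minus adjacency.
lap = 2.0 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)
lap[0, 0] = lap[-1, -1] = 1.0

numeric = np.sort(np.linalg.eigvalsh(lap))
analytic = np.sort(4.0 * np.sin(np.arange(M) * np.pi / (2 * M)) ** 2)
print(np.allclose(numeric, analytic))   # True
print(numeric[0])                       # ~0: the zero mode is the uniform vector
```

The zero eigenvalue with a uniform eigenvector is the mode referred to as $\lambda_0=0$ with $\phi_{0,m}=1/\sqrt{M}$ in the text above.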
{ "question": [ "What languages do they look at?" ], "question_id": [ "f8f13576115992b0abb897ced185a4f9d35c5de9" ], "nlp_background": [ "two" ], "topic_background": [ "unfamiliar" ], "paper_read": [ "no" ], "search_query": [ "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0aba2d18bea1ab4b8d99a362d91795c483f2fc08" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1. Phase diagram of the model in the q–C plane. I: consensus phase. II: coexistence phase.", "Fig. 3. Steady state for 5 competing languages with equally spaced attractivenesses. The fractions xi of speakers of surviving languages are plotted against the mean attractiveness g in each sector labelled by the number K = 1, . . . , 5 of surviving languages. The threshold values g5,2 = 5/2, g5,3 = 5/6, g5,4 = 5/12 and g5,5 = 1/4 are abbreviated as g2 to g5.", "Fig. 4. An instance of the model with N = 10, a uniform distribution of attractivenesses with w = 0.3, and K = 6. Full curves: time-dependent fractions of speakers of all languages, obtained by solving the rate equations (17) numerically. Dashed lines: stationary fractions given by (29) for i = 1, . . . , 6.", "Fig. 5. Distribution pK of the number K of surviving languages, for N = 10 (top) and N = 40 (bottom) and a uniform distribution of attractivenesses for four values of W (see legends).", "Fig. 6. Mean number 〈K〉 of surviving languages against mean attractiveness w, for N = 10 and uniform and exponential attractiveness distributions (see legend).", "Fig. 9. The ordered profile of conversion rates defined in (59).", "Fig. 8. Phase diagram in the C1–C2 plane of the model defined on two geographic areas for γ = 1. I1: consensus phase where language 1 survives. I2: consensus phase where language 2 survives. IIA and IIB: coexistence of both languages in both areas. Black dashed line: C1 +C2 = 0 (none of the languages is globally favoured).", "Fig. 10. Stationary fraction Xm of speakers of language 1 against m− 1/2 in two cases of ordered attractiveness profiles on an array of M = 20 areas, for several migration rates γ (see legends). Top: symmetric situation where K = L = 10. Bottom: asymmetric situation where K = 12 and L = 8.", "Fig. 11. Finite-size scaling plot of the consensus probability P against x = γ/M3/2. Symbols: data for M = 20 and uniform (UNI) and Gaussian (GAU) conversion rate distributions with w = 1. Thin black curve: guide to the eye pointing toward the universality of the finite-size scaling function Φ entering (70). Full green curve: heuristic (HEU) prediction (71).", "Fig. 12. Probability distribution of the average conversion rate C for a Gaussian distribution of conversion rates with w = 1. Black curves: total (i.e., unconditioned) distribution. Red curves: distribution conditioned on consensus I1. Blue curves: distribution conditioned on consensus I2. Green curves: distribution conditioned on coexistence (II). Top: M = 2 and γ = 0.351. Bottom: M = 10 and γ = 10.22." ], "file": [ "3-Figure1-1.png", "5-Figure3-1.png", "6-Figure4-1.png", "6-Figure5-1.png", "7-Figure6-1.png", "9-Figure9-1.png", "9-Figure8-1.png", "10-Figure10-1.png", "11-Figure11-1.png", "12-Figure12-1.png" ] }
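The conclusion paragraphs in the record above describe the single-area dynamics as coupled logistic equations with conversion terms, but the actual rate equations (the DISPLAY_FORM references) are not included in this dump. The toy system below is therefore only an illustration of the consensus-versus-coexistence distinction labelled I and II in the Fig. 1 caption; the functional form and the conversion strength `C` are my assumptions, not the paper's model, which additionally involves the interpolation parameter $q$, the attractivenesses, and migration between areas.

```python
from scipy.integrate import solve_ivp

def rhs(t, x, C):
    """x1, x2: fractions speaking languages 1 and 2; C > 0 converts 2 -> 1."""
    x1, x2 = x
    sat = 1.0 - (x1 + x2)            # shared logistic saturation
    return [x1 * sat + C * x1 * x2,
            x2 * sat - C * x1 * x2]

for C in (0.0, 0.5):
    sol = solve_ivp(rhs, (0.0, 200.0), [0.4, 0.6], args=(C,), rtol=1e-8)
    print(f"C={C}: x1={sol.y[0, -1]:.3f}, x2={sol.y[1, -1]:.3f}")
# C=0.0 leaves the initial split on the neutral line x1 + x2 = 1 (coexistence),
# while any C > 0 drives the system to consensus on language 1.
```

Note that the coexistence at `C = 0` here is only neutrally stable; the mechanism for genuinely stable coexistence discussed in the record relies on the imbalance parameter $q$, which this sketch does not model.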
1907.01413
Speaker-independent classification of phonetic segments from raw ultrasound in child speech
Ultrasound tongue imaging (UTI) provides a convenient way to visualize the vocal tract during speech production. UTI is increasingly being used for speech therapy, making it important to develop automatic methods to assist various time-consuming manual tasks currently performed by speech therapists. A key challenge is to generalize the automatic processing of ultrasound tongue images to previously unseen speakers. In this work, we investigate the classification of phonetic segments (tongue shapes) from raw ultrasound recordings under several training scenarios: speaker-dependent, multi-speaker, speaker-independent, and speaker-adapted. We observe that models underperform when applied to data from speakers not seen at training time. However, when provided with minimal additional speaker information, such as the mean ultrasound frame, the models generalize better to unseen speakers.
{ "section_name": [ "Introduction", "Ultrasound Tongue Imaging", "Related Work", "Ultrasound Data", "Data Selection", "Preprocessing and Model Architectures", "Training Scenarios and Speaker Means", "Results and Discussion", "Future Work", "Conclusion" ], "paragraphs": [ [ "Ultrasound tongue imaging (UTI) uses standard medical ultrasound to visualize the tongue surface during speech production. It provides a non-invasive, clinically safe, and increasingly inexpensive method to visualize the vocal tract. Articulatory visual biofeedback of the speech production process, using UTI, can be valuable for speech therapy BIBREF0 , BIBREF1 , BIBREF2 or language learning BIBREF3 , BIBREF4 . Ultrasound visual biofeedback combines auditory information with visual information of the tongue position, allowing users, for example, to correct inaccurate articulations in real-time during therapy or learning. In the context of speech therapy, automatic processing of ultrasound images was used for tongue contour extraction BIBREF5 and the animation of a tongue model BIBREF6 . More broadly, speech recognition and synthesis from articulatory signals BIBREF7 captured using UTI can be used with silent speech interfaces in order to help restore spoken communication for users with speech or motor impairments, or to allow silent spoken communication in situations where audible speech is undesirable BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Similarly, ultrasound images of the tongue have been used for direct estimation of acoustic parameters for speech synthesis BIBREF13 , BIBREF14 , BIBREF15 .", "Speech and language therapists (SLTs) have found UTI to be very useful in speech therapy. In this work we explore the automatic processing of ultrasound tongue images in order to assist SLTs, who currently largely rely on manual processing when using articulatory imaging in speech therapy. One task that could assist SLTs is the automatic classification of tongue shapes from raw ultrasound. This can facilitate the diagnosis and treatment of speech sound disorders, by allowing SLTs to automatically identify incorrect articulations, or by quantifying patient progress in therapy. In addition to being directly useful for speech therapy, the classification of tongue shapes enables further understanding of phonetic variability in ultrasound tongue images. Much of the previous work in this area has focused on speaker-dependent models. In this work we investigate how automatic processing of ultrasound tongue imaging is affected by speaker variation, and how severe degradations in performance can be avoided when applying systems to data from previously unseen speakers through the use of speaker adaptation and speaker normalization approaches.", "Below, we present the main challenges associated with the automatic processing of ultrasound data, together with a review of speaker-independent models applied to UTI. Following this, we present the experiments that we have performed (Section SECREF2 ), and discuss the results obtained (Section SECREF3 ). Finally we propose some future work and conclude the paper (Sections SECREF4 and SECREF5 )." ], [ "There are several challenges associated with the automatic processing of ultrasound tongue images.", "Image quality and limitations. UTI output tends to be noisy, with unrelated high-contrast edges, speckle noise, or interruptions of the tongue surface BIBREF16 , BIBREF17 . 
Additionally, the oral cavity is not entirely visible from the image, missing the lips, the palate, or the pharyngeal wall.", "Inter-speaker variation. Age and physiology may affect the output, with children imaging better than adults due to more moisture in the mouth and less tissue fat BIBREF16 . However, dry mouths lead to poor imaging, which might occur in speech therapy if a child is nervous during a session. Similarly, the vocal tracts of children across different ages may be more variable than those of adults.", "Probe placement. Articulators that are orthogonal to the ultrasound beam direction image well, while those at an angle tend to image poorly. Incorrect or variable probe placement during recordings may lead to high variability between otherwise similar tongue shapes. This may be controlled using helmets BIBREF18 , although it is unreasonable to expect the speaker to remain still throughout the recording session, especially if working with children. Therefore, probe displacement should be expected to be a factor in image quality and consistency.", "Limited data. Although ultrasound imaging is becoming less expensive to acquire, there is still a lack of large publicly available databases to evaluate automatic processing methods. The UltraSuite Repository BIBREF19 , which we use in this work, helps alleviate this issue, but it still does not compare to standard speech recognition or image classification databases, which contain hundreds of hours of speech or millions of images." ], [ "Earlier work concerned with speech recognition from ultrasound data has mostly been focused on speaker-dependent systems BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 . An exception is the work of Xu et al. BIBREF24 , which investigates the classification of tongue gestures from ultrasound data using convolutional neural networks. Some results are presented for a speaker-independent system, although the investigation is limited to two speakers generalizing to a third. Fabre et al BIBREF5 present a method for automatic tongue contour extraction from ultrasound data. The system is evaluated in a speaker-independent way by training on data from eight speakers and evaluating on a single held out speaker. In both of these studies, a large drop in accuracy was observed when using speaker-independent systems in comparison to speaker-dependent systems. Our investigation differs from previous work in that we focus on child speech while using a larger number of speakers (58 children). Additionally, we use cross-validation to evaluate the performance of speaker-independent systems across all speakers, rather than using a small held out subset." ], [ "We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone-level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsaggital view of the tongue. The data was recorded using an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at INLINEFORM0 121fps with a 135 field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). 
For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances." ], [ "For this investigation, we define a simplified phonetic segment classification task. We determine four classes corresponding to distinct places of articulation. The first consists of bilabial and labiodental phones (e.g. /p, b, v, f, .../). The second class includes dental, alveolar, and postalveolar phones (e.g. /th, d, t, z, s, sh, .../). The third class consists of velar phones (e.g. /k, g, .../). Finally, the fourth class consists of alveolar approximant /r/. Figure FIGREF1 shows examples of the four classes for two speakers.", "For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a data imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, to at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of INLINEFORM0 10700 training examples with roughly 2000 to 3000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples." ], [ "For each system, we normalize the training data to zero mean and unit variance. Due to the high dimensionality of the data (63x412 samples per frame), we have opted to investigate two preprocessing techniques: principal components analysis (PCA, often called eigentongues in this context) and a 2-dimensional discrete cosine transform (DCT). In this paper, Raw input denotes the mean-variance normalized raw ultrasound frame. PCA applies principal components analysis to the normalized training data and preserves the top 1000 components. DCT applies the 2D DCT to the normalized raw ultrasound frame and the upper left 40x40 submatrix (1600 coefficients) is flattened and used as input.", "The first type of classifier we evaluate in this work are feedforward neural networks (DNNs) consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs) with a softmax activation function. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. 
Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept." ], [ "We train speaker-dependent systems separately for each speaker, using all of their training data (an average of 185 examples per speaker). These systems use less data overall than the remaining systems, although we still expect them to perform well, as the data matches in terms of speaker characteristics. Realistically, such systems would not be viable, as it would be unreasonable to collect large amounts of data for every child who is undergoing speech therapy. We further evaluate all trained systems in a multi-speaker scenario. In this configuration, the speaker sets for training, development, and testing are equal. That is, we evaluate on speakers that we have seen at training time, although on different utterances. A more realistic configuration is a speaker-independent scenario, which assumes that the speaker set available for training and development is disjoint from the speaker set used at test time. This scenario is implemented by leave-one-out cross-validation. Finally, we investigate a speaker adaptation scenario, where training data for the target speaker becomes available. This scenario is realistic, for example, if after a session, the therapist were to annotate a small number of training examples. In this work, we use the held-out training data to finetune a pretrained speaker-independent system for an additional 6 epochs in the DNN systems and 20 epochs for the CNN systems. We use all available training data across all training scenarios, and we investigate the effect of the number of samples on one of the top performing systems.", "This work is primarily concerned with generalizing to unseen speakers. Therefore, we investigate a method to provide models with speaker-specific inputs. A simple approach is to use the speaker mean, which is the pixel-wise mean of all raw frames associated with a given speaker, illustrated in Figure FIGREF8 . The mean frame might capture an overall area of tongue activity, average out noise, and compensate for probe placement differences across speakers. Speaker means are computed after mean variance normalization. For PCA-based systems, matrix decomposition is applied on the matrix of speaker means for the training data with 50 components being kept, while the 2D DCT is applied normally to each mean frame. In the DNN systems, the speaker mean is appended to the input vector. In the CNN system, the raw speaker mean is given to the network as a second channel. All model configurations are similar to those described earlier, except for the DNN using Raw input. Earlier experiments have shown that a larger number of parameters are needed for good generalization with a large number of inputs, so we use layers of 1024 nodes rather than 512." ], [ "Results for all systems are presented in Table TABREF10 . When comparing preprocessing methods, we observe that PCA underperforms when compared with the 2 dimensional DCT or with the raw input. DCT-based systems achieve good results when compared with similar model architectures, especially when using smaller amounts of data as in the speaker-dependent scenario. When compared with raw input DNNs, the DCT-based systems likely benefit from the reduced dimensionality. 
In this case, lower dimensional inputs allow the model to generalize better and the truncation of the DCT matrix helps remove noise from the images. Compared with PCA-based systems, it is hypothesized the observed improvements are likely due to the DCT's ability to encode the 2-D structure of the image, which is ignored by PCA. However, the DNN-DCT system does not outperform a CNN with raw input, ranking last across adapted systems.", "When comparing training scenarios, as expected, speaker-independent systems underperform, which illustrates the difficulty involved in the generalization to unseen speakers. Multi-speaker systems outperform the corresponding speaker-dependent systems, which shows the usefulness of learning from a larger database, even if variable across speakers. Adapted systems improve over the dependent systems, except when using DCT. It is unclear why DCT-based systems underperform when adapting pre-trained models. Figure FIGREF11 shows the effect of the size of the adaptation data when finetuning a pre-trained speaker-independent system. As expected, the more data is available, the better that system performs. It is observed that, for the CNN system, with roughly 50 samples, the model outperforms a similar speaker-dependent system with roughly three times more examples.", "Speaker means improve results across all scenarios. It is particularly useful for speaker-independent systems. The ability to generalize to unseen speakers is clear in the CNN system. Using the mean as a second channel in the convolutional network has the advantage of relating each pixel to its corresponding speaker mean value, allowing the model to better generalize to unseen speakers.", "Figure FIGREF12 shows pair-wise scatterplots for the CNN system. Training scenarios are compared in terms of the effect on individual speakers. It is observed, for example, that the performance of a speaker-adapted system is similar to a multi-speaker system, with most speakers clustered around the identity line (bottom left subplot). Figure FIGREF12 also illustrates the variability across speakers for each of the training scenarios. The classification task is easier for some speakers than others. In an attempt to understand this variability, we can look at correlation between accuracy scores and various speaker details. For the CNN systems, we have found some correlation (Pearson's product-moment correlation) between accuracy and age for the dependent ( INLINEFORM0 ), multi-speaker ( INLINEFORM1 ), and adapted ( INLINEFORM2 ) systems. A very small correlation ( INLINEFORM3 ) was found for the independent system. Similarly, some correlation was found between accuracy and sampling score ( INLINEFORM4 ) for the dependent system, but not for the remaining scenarios. No correlation was found between accuracy and gender (point biserial correlation)." ], [ "There are various possible extensions for this work. For example, using all frames assigned to a phone, rather than using only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to more a fine-grained place of articulation, possibly based on phonological processes. 
Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling the therapist that the data being collected is sub-optimal." ], [ "In this paper, we have investigated speaker-independent models for the classification of phonetic segments from raw ultrasound data. We have shown that the performance of the models heavily degrades when evaluated on data from unseen speakers. This is a result of the variability in ultrasound images, mostly due to differences across speakers, but also due to shifts in probe placement. Using the mean of all ultrasound frames for a new speaker improves the generalization of the models to unseen data, especially when using convolutional neural networks. We have also shown that adapting a pre-trained speaker-independent system using as few as 50 ultrasound frames can outperform a corresponding speaker-dependent system." ] ] }
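The preprocessing paragraphs in the full text above describe the DCT front end: each mean-variance normalised 63 x 412 frame is transformed with a 2-D DCT and the upper-left 40 x 40 block of coefficients (1600 values) is flattened into the feature vector. A minimal sketch of that step follows; the DCT type and normalisation are not stated in the record, so type-II with orthonormal scaling is an assumption, and the random array stands in for a real ultrasound frame.

```python
import numpy as np
from scipy.fftpack import dct

def dct_features(frame, k=40):
    """2-D DCT of one normalised frame, keeping the upper-left k x k block, flattened."""
    coeffs = dct(dct(frame, type=2, norm='ortho', axis=0), type=2, norm='ortho', axis=1)
    return coeffs[:k, :k].reshape(-1)

frame = np.random.randn(63, 412)    # stand-in for one mean-variance normalised frame
print(dct_features(frame).shape)    # (1600,)
```

Truncating the coefficient matrix in this way is what gives the DCT systems the lower-dimensional, partially denoised input credited in the results discussion above.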
{ "question": [ "Do they report results only on English data?", "Do they propose any further additions that could be made to improve generalisation to unseen speakers?", "What are the characteristics of the dataset?", "What type of models are used for classification?", "Do they compare to previous work?", "How many instances does their dataset have?", "What model do they use to classify phonetic segments? ", "How many speakers do they have in the dataset?" ], "question_id": [ "1fdcc650c65c11908f6bde67d5052087245f3dde", "abad9beb7295d809d7e5e1407cbf673c9ffffd19", "265c9b733e4dfffb76acfbade4c0c9b14d3ccde1", "0f928732f226185c76ad5960402e9342c0619310", "11c5b12e675cfd8d1113724f019d8476275bd700", "d24acc567ebaec1efee52826b7eaadddc0a89e8b", "2d62a75af409835e4c123a615b06235a352a67fe", "fffbd6cafef96eeeee2f9fa5d8ab2b325ec528e6" ], "nlp_background": [ "five", "five", "five", "five", "five", "five", "five", "five" ], "topic_background": [ "", "", "", "", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "", "", "", "", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7", "5053f146237e8fc8859ed3984b5d3f02f39266b7" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "ac59e957670efafab9eb6665e8577277f2bc2818" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "There are various possible extensions for this work. For example, using all frames assigned to a phone, rather than using only the middle frame. Recurrent architectures are natural candidates for such systems. Additionally, if using these techniques for speech therapy, the audio signal will be available. An extension of these analyses should not be limited to the ultrasound signal, but instead evaluate whether audio and ultrasound can be complementary. Further work should aim to extend the four classes to more a fine-grained place of articulation, possibly based on phonological processes. Similarly, investigating which classes lead to classification errors might help explain some of the observed results. Although we have looked at variables such as age, gender, or amount of data to explain speaker variation, there may be additional factors involved, such as the general quality of the ultrasound image. Image quality could be affected by probe placement, dry mouths, or other factors. Automatically identifying or measuring such cases could be beneficial for speech therapy, for example, by signalling the therapist that the data being collected is sub-optimal." ], "highlighted_evidence": [ "There are various possible extensions for this work. For example, using all frames assigned to a phone, rather than using only the middle frame." 
] } ], "annotation_id": [ "0ac64b2c1e4d9c93bd909789874e24fe03ea46e6" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male)", "data was aligned at the phone-level", "121fps with a 135 field of view", "single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone-level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsaggital view of the tongue. The data was recorded using an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at INLINEFORM0 121fps with a 135 field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances." ], "highlighted_evidence": [ "We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone-level, according to the methods described in BIBREF19 , BIBREF25 .", "The data was recorded using an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at INLINEFORM0 121fps with a 135 field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames)." ] } ], "annotation_id": [ "154104a7b62c31b65c1fe20259e158275f2c394d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "feedforward neural networks (DNNs)", "convolutional neural networks (CNNs)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The first type of classifier we evaluate in this work are feedforward neural networks (DNNs) consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs) with a softmax activation function. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. 
Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept." ], "highlighted_evidence": [ "The first type of classifier we evaluate in this work are feedforward neural networks (DNNs) consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs) with a softmax activation function.", "As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes." ] } ], "annotation_id": [ "78a72c2a12eb4a9351cf263d361b10edbf08d1a7" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "8ec1decab8ba8463e6115b68193038dfe0b37853" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "10700" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For each speaker, we divide all available utterances into disjoint train, development, and test sets. Using the force-aligned phone boundaries, we extract the mid-phone frame for each example across the four classes, which leads to a data imbalance. Therefore, for all utterances in the training set, we randomly sample additional examples within a window of 5 frames around the center phone, to at least 50 training examples per class per speaker. It is not always possible to reach the target of 50 examples, however, if no more data is available to sample from. This process gives a total of INLINEFORM0 10700 training examples with roughly 2000 to 3000 examples per class, with each speaker having an average of 185 examples. Because the amount of data varies per speaker, we compute a sampling score, which denotes the proportion of sampled examples to the speaker's total training examples. We expect speakers with high sampling scores (less unique data overall) to underperform when compared with speakers with more varied training examples." ], "highlighted_evidence": [ "This process gives a total of INLINEFORM0 10700 training examples with roughly 2000 to 3000 examples per class, with each speaker having an average of 185 examples." ] } ], "annotation_id": [ "8bb31e8aea9495752eb829337259bc29016be6b8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "feedforward neural networks", "convolutional neural networks" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The first type of classifier we evaluate in this work are feedforward neural networks (DNNs) consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs) with a softmax activation function. The networks are optimized for 40 epochs with a mini-batch of 32 samples using stochastic gradient descent. Based on preliminary experiments on the validation set, hyperparameters such learning rate, decay rate, and L2 weight vary depending on the input format (Raw, PCA, or DCT). Generally, Raw inputs work better with smaller learning rates and heavier regularization to prevent overfitting to the high-dimensional data. 
As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. The convolutional layers use 16 filters, 8x8 and 4x4 kernels respectively, and rectified units. The fully-connected layers use dropout with a drop probability of 0.2. Because CNN systems take longer to converge, they are optimized over 200 epochs. For all systems, at the end of every epoch, the model is evaluated on the development set, and the best model across all epochs is kept." ], "highlighted_evidence": [ "The first type of classifier we evaluate in this work are feedforward neural networks (DNNs) consisting of 3 hidden layers, each with 512 rectified linear units (ReLUs) with a softmax activation function. ", "As a second classifier to evaluate, we use convolutional neural networks (CNNs) with 2 convolutional and max pooling layers, followed by 2 fully-connected ReLU layers with 512 nodes. " ] } ], "annotation_id": [ "35b395686eaf19263de92304622647d784dbde0e" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "58" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). The data was aligned at the phone-level, according to the methods described in BIBREF19 , BIBREF25 . For this work, we discarded the acoustic data and focused only on the B-Mode ultrasound images capturing a midsaggital view of the tongue. The data was recorded using an Ultrasonix SonixRP machine using Articulate Assistant Advanced (AAA) software at INLINEFORM0 121fps with a 135 field of view. A single ultrasound frame consists of 412 echo returns from each of the 63 scan lines (63x412 raw frames). For this work, we only use UXTD type A (semantically unrelated words, such as pack, tap, peak, tea, oak, toe) and type B (non-words designed to elicit the articulation of target phones, such as apa, eepee, opo) utterances." ], "highlighted_evidence": [ "We use the Ultrax Typically Developing dataset (UXTD) from the publicly available UltraSuite repository BIBREF19 . This dataset contains synchronized acoustic and ultrasound data from 58 typically developing children, aged 5-12 years old (31 female, 27 male). " ] } ], "annotation_id": [ "2cc03c335d671b820b4f04b30bbdd522332ff8ec" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
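The answers above quote the CNN configuration: two convolution plus max-pooling stages with 16 filters and 8x8 / 4x4 kernels, two 512-unit ReLU layers with dropout 0.2, a softmax over the four classes, and the speaker mean supplied as a second input channel. The sketch below is one plausible PyTorch rendering of that description; pooling size, padding, and the training setup are not specified in the record, so those choices (and the class name) are mine.

```python
import torch
import torch.nn as nn

class UltrasoundCNN(nn.Module):
    """Input: raw frame stacked with the speaker mean frame, shape (batch, 2, 63, 412)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=8), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 16, kernel_size=4), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 12 * 99, 512), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(512, n_classes),     # logits for the 4 places of articulation
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = UltrasoundCNN()
print(model(torch.zeros(8, 2, 63, 412)).shape)   # torch.Size([8, 4])
```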
{ "caption": [ "Fig. 1. Ultrasound samples for the four output classes based on place of articulation. The top row contains samples from speaker 12 (male, aged six), and the bottom row from speaker 13 (female, aged eleven). All samples show a midsagittal view of the oral cavity with the tip of the tongue facing right. Each sample is the mid-point frame of a phone uttered in an aCa context (e.g. apa, ata, ara, aka). See the UltraSuite repository for details on interpreting ultrasound tongue images.", "Fig. 2. Ultrasound mean image for speaker 12 (top row) and speaker 13 (bottom row). Means on the left column are taken over the training data, while means on the right are taken over the test data.", "Table 1. Phonetic segment accuracy for the four training scenarios.", "Fig. 3. Accuracy scores for adapted CNN Raw, varying amount of adaptation examples. We separately restrict training and development data to either n or all examples, whichever is smallest.", "Fig. 4. Pair-wise scatterplots for the CNN system without speaker mean. Each sample is a speaker with axes representing accuracy under a training scenario. Percentages in top left and bottom right corners indicate amount of speakers above or below the dashed identity line, respectively. Speaker accuracies are compared after being rounded to two decimal places." ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "4-Table1-1.png", "5-Figure3-1.png", "5-Figure4-1.png" ] }
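The evaluation paragraphs of this record implement the speaker-independent scenario with leave-one-out cross-validation over the 58 speakers. One standard way to set that up is scikit-learn's LeaveOneGroupOut, sketched below; the arrays are placeholders whose shapes merely mimic the description (4 classes, 58 speaker ids), not real UltraSuite data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 1600))       # placeholder features (e.g. flattened DCT)
y = rng.integers(0, 4, size=1000)           # placeholder place-of-articulation labels
groups = rng.integers(0, 58, size=1000)     # placeholder speaker ids

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    held_out = np.unique(groups[test_idx])  # exactly one speaker unseen during training
    assert held_out.size == 1
    # fit on X[train_idx], y[train_idx]; evaluate on the held-out speaker's frames
```

Held-out training frames from the same speaker can then be used to finetune the resulting model, giving the speaker-adapted scenario described in the record.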
1908.07816
A Multi-Turn Emotionally Engaging Dialog Model
Open-domain dialog systems (also known as chatbots) have increasingly drawn attention in natural language processing. Some recent work incorporates affect information into sequence-to-sequence neural dialog modeling to make responses emotionally richer, while other work uses hand-crafted rules to determine the desired emotional response. However, neither approach explicitly learns the subtle emotional interactions captured in human dialogs. In this paper, we propose a multi-turn dialog system that learns to generate emotionally appropriate responses, something that so far only humans know how to do. In offline experiments, our method achieves better perplexity scores than two baseline models. Further human evaluations confirm that our chatbot can keep track of the conversation context and generate emotionally more appropriate responses, while performing equally well on grammar.
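According to the model description quoted further below in this record, an emotion RNN encodes the affect information of the history utterances, and at each decoding step its output vector is concatenated with the decoder hidden state before the softmax over the vocabulary. The snippet below sketches only that output projection; the layer name and the sizes are illustrative, and the encoder, hierarchical attention, and emotion RNN themselves are not reproduced here.

```python
import torch
import torch.nn as nn

class EmotionAwareProjection(nn.Module):
    """Maps [decoder hidden state ; emotion context vector] to vocabulary log-probs."""
    def __init__(self, hidden_size, emotion_size, vocab_size):
        super().__init__()
        self.out = nn.Linear(hidden_size + emotion_size, vocab_size)

    def forward(self, dec_hidden, emotion_vec):
        logits = self.out(torch.cat([dec_hidden, emotion_vec], dim=-1))
        return torch.log_softmax(logits, dim=-1)

proj = EmotionAwareProjection(hidden_size=512, emotion_size=128, vocab_size=20000)
print(proj(torch.zeros(4, 512), torch.zeros(4, 128)).shape)   # torch.Size([4, 20000])
```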
{ "section_name": [ "Introduction", "Related Work", "Model", "Model ::: Hierarchical Attention", "Model ::: Emotion Encoder", "Model ::: Decoding", "Evaluation", "Evaluation ::: Datasets", "Evaluation ::: Baselines and Implementation", "Evaluation ::: Evaluation Metrics", "Evaluation ::: Evaluation Metrics ::: Human evaluation setup", "Evaluation ::: Results", "Evaluation ::: Results ::: Case Study", "Conclusion and Future Work" ], "paragraphs": [ [ "Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8.", "Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotion responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice the so called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that the data driven approach will have an advantage.", "In this paper, we propose an end-to-end data driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because only in such cases is the emotion appropriateness most necessary. To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-sized one-zero vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. 
When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary.", "In summary, our contributions are threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules, and hence, is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate to. It is the first time such an approach has been designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate.", "The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give a detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do." ], [ "Many early open-domain dialog systems are rule-based and often require expert knowledge to develop. More recent work in response generation seeks data-driven solutions, leveraging machine learning techniques and the availability of data. Ritter et al. BIBREF14 first applied statistical machine translation (SMT) methods to this area. However, it turns out that bilingual translation and response generation are different. The source and target sentences in translation share the same meaning; thus the words in the two sentences tend to align well with each other. However, for response generation, one could have many equally good responses for a single input. Later studies use the sequence-to-sequence neural framework to model dialogs, followed by various works improving the quality of the responses, especially the emotional aspects of the conversations.", "The vanilla RNN encoder-decoder is usually applied to single-turn response generation, where the response is generated based on one single input message. In multi-turn settings, where a context with multiple history utterances is given, the same structure often ignores the hierarchical characteristic of the context. Some recent work addresses this problem by adopting a hierarchical recurrent encoder-decoder (HRED) structure BIBREF15, BIBREF16, BIBREF17. To give attention to different parts of the context while generating responses, Xing et al. BIBREF13 proposed the hierarchical recurrent attention network (HRAN) that uses a hierarchical attention mechanism. However, these multi-turn dialog models do not take into account the turn-taking emotional changes of the dialog.", "Recent work on incorporating affect information into natural language processing tasks, such as building emotional dialog systems and affect language models, has inspired our current work.
For example, the Emotional Chatting Machine (ECM) BIBREF9 takes as input a post and a specified emotional category and generates a response that belongs to the pre-defined emotion category. The main idea is to use an internal memory module to capture the emotion dynamics during decoding, and an external memory module to model emotional expressions explicitly by assigning different probability values to emotional words as opposed to regular words. However, the problem setting requires an emotional label as an input, which might be unpractical in real scenarios. Asghar et al. BIBREF10 proposed to augment the word embeddings with a VAD (valence, arousal, and dominance) affective space by using an external dictionary, and designed three affect-related loss functions, namely minimizing affective dissonance, maximizing affective dissonance, and maximizing affective content. The paper also proposed the affectively diverse beam search during decoding, so that the generated candidate responses are as affectively diverse as possible. However, literature in affective science does not necessarily validate such rules. In fact, the best strategy to speak to an angry customer is the de-escalation strategy (using neutral words to validate anger) rather than employing equally emotional words (minimizing affect dissonance) or words that convey happiness (maximizing affect dissonance). Zhong et al. BIBREF11 proposed a biased attention mechanism on affect-rich words in the input message, also by taking advantage of the VAD embeddings. The model is trained with a weighted cross-entropy loss function, which encourages the generation of emotional words. However, these models only deal with single-turn conversations. More importantly, they all adopt hand-coded emotion responding mechanisms. To our knowledge, we are the first to consider modeling the emotional flow and its appropriateness in a multi-turn dialog system by learning from humans." ], [ "In this paper, we consider the problem of generating response $\\mathbf {y}$ given a context $\\mathbf {X}$ consisting of multiple previous utterances by estimating the probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ from a data set $\\mathcal {D}=\\lbrace (\\mathbf {X}^{(i)},\\mathbf {y}^{(i)})\\rbrace _{i=1}^N$ containing $N$ context-response pairs. Here", "is a sequence of $m_i$ utterances, and", "is a sequence of $n_{ij}$ words. Similarly,", "is the response with $T_i$ words.", "Usually the probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ can be modeled by an RNN language model conditioned on $\\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\\mathbf {c}_t$ and $\\mathbf {e}$, and how they are combined in the decoding part." ], [ "The hierarchical attention structure involves two encoders to produce the dialog context vector $\\mathbf {c}_t$, namely the word-level encoder and the utterance-level encoder. 
The word-level encoder is essentially a bidirectional RNN with gated recurrent units (GRU) BIBREF1. For utterance $\\mathbf {x}_j$ in $\\mathbf {X}$ ($j=1,2,\\dots ,m$), the bidirectional encoder produces two hidden states at each word position $k$, the forward hidden state $\\mathbf {h}^\\mathrm {f}_{jk}$ and the backward hidden state $\\mathbf {h}^\\mathrm {b}_{jk}$. The final hidden state $\\mathbf {h}_{jk}$ is then obtained by concatenating the two,", "The utterance-level encoder is a unidirectional RNN with GRU that goes from the last utterance in the context to the first, with its input at each step as the summary of the corresponding utterance, which is obtained by applying a Bahdanau-style attention mechanism BIBREF4 on the word-level encoder output. More specifically, at decoding step $t$, the summary of utterance $\\mathbf {x}_j$ is a linear combination of $\\mathbf {h}_{jk}$, for $k=1,2,\\dots ,n_j$,", "Here $\\alpha _{jk}^t$ is the word-level attention score placed on $\\mathbf {h}_{jk}$, and can be calculated as", "where $\\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, $\\mathbf {\\ell }_{j+1}^t$ is the previous hidden state of the utterance-level encoder, and $\\mathbf {v}_a$, $\\mathbf {U}_a$, $\\mathbf {V}_a$ and $\\mathbf {W}_a$ are word-level attention parameters. The final dialog context vector $\\mathbf {c}_t$ is then obtained as another linear combination of the outputs of the utterance-level encoder $\\mathbf {\\ell }_{j}^t$, for $j=1,2,\\dots ,m$,", "Here $\\beta _{j}^t$ is the utterance-level attention score placed on $\\mathbf {\\ell }_{j}^t$, and can be calculated as", "where $\\mathbf {s}_{t-1}$ is the previous hidden state of the decoder, and $\\mathbf {v}_b$, $\\mathbf {U}_b$ and $\\mathbf {W}_b$ are utterance-level attention parameters." ], [ "In order to capture the emotion information carried in the context $\\mathbf {X}$, we utilize an external text analysis program called the Linguistic Inquiry and Word Count (LIWC) BIBREF18. LIWC accepts text files as input, and then compares each word in the input with a user-defined dictionary, assigning it to one or more of the pre-defined psychologically-relevant categories. We make use of five of these categories, related to emotion, namely positive emotion, negative emotion, anxious, angry, and sad. Using the newest version of the program LIWC2015, we are able to map each utterance $\\mathbf {x}_j$ in the context to a six-dimensional indicator vector ${1}(\\mathbf {x}_j)$, with the first five entries corresponding to the five emotion categories, and the last one corresponding to neutral. If any word in $\\mathbf {x}_j$ belongs to one of the five categories, then the corresponding entry in ${1}(\\mathbf {x}_j)$ is set to 1; otherwise, $\\mathbf {x}_j$ is treated as neutral, with the last entry of ${1}(\\mathbf {x}_j)$ set to 1. For example, assuming $\\mathbf {x}_j=$ “he is worried about me”, then", "since the word “worried” is assigned to both negative emotion and anxious. We apply a dense layer with sigmoid activation function on top of ${1}(\\mathbf {x}_j)$ to embed the emotion indicator vector into a continuous space,", "where $\\mathbf {W}_e$ and $\\mathbf {b}_e$ are trainable parameters. The emotion flow of the context $\\mathbf {X}$ is then modeled by an unidirectional RNN with GRU going from the first utterance in the context to the last, with its input being $\\mathbf {a}_j$ at each step. 
The final emotion context vector $\\mathbf {e}$ is obtained as the last hidden state of this emotion encoding RNN." ], [ "The probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ can be written as", "We model the probability distribution using an RNN language model along with the emotion context vector $\\mathbf {e}$. Specifically, at time step $t$, the hidden state of the decoder $\\mathbf {s}_t$ is obtained by applying the GRU function,", "where $\\mathbf {w}_{y_{t-1}}$ is the word embedding of $y_{t-1}$. Similar to Affect-LM BIBREF19, we then define a new feature vector $\\mathbf {o}_t$ by concatenating $\\mathbf {s}_t$ with the emotion context vector $\\mathbf {e}$,", "on which we apply a softmax layer to obtain a probability distribution over the vocabulary,", "Each term in Equation (DISPLAY_FORM16) is then given by", "We use the cross-entropy loss as our objective function" ], [ "We trained our model using two different datasets and compared its performance with HRAN as well as the basic sequence-to-sequence model by performing both offline and online testings." ], [ "We use two different dialog corpora to train our model—the Cornell Movie Dialogs Corpus BIBREF20 and the DailyDialog dataset BIBREF21.", "Cornell Movie Dialogs Corpus. The dataset contains 83,097 dialogs (220,579 conversational exchanges) extracted from raw movie scripts. In total there are 304,713 utterances.", "DailyDialog. The dataset is developed by crawling raw data from websites used for language learners to learn English dialogs in daily life. It contains 13,118 dialogs in total.", "We summarize some of the basic information regarding the two datasets in Table TABREF25.", "In our experiments, the models are first trained on the Cornell Movie Dialogs Corpus, and then fine-tuned on the DailyDialog dataset. We adopted this training pattern because the Cornell dataset is bigger but noisier, while DailyDialog is smaller but more daily-based. To create a training set and a validation set for each of the two datasets, we take segments of each dialog with number of turns no more than six, to serve as the training/validation examples. Specifically, for each dialog $\\mathbf {D}=(\\mathbf {x}_1,\\mathbf {x}_2,\\dots ,\\mathbf {x}_M)$, we create $M-1$ context-response pairs, namely $\\mathbf {U}_i=(\\mathbf {x}_{s_i},\\dots ,\\mathbf {x}_i)$ and $\\mathbf {y}_i=\\mathbf {x}_{i+1}$, for $i=1,2,\\dots ,M-1$, where $s_i=\\max (1,i-4)$. We filter out those pairs that have at least one utterance with length greater than 30. We also reduce the frequency of those pairs whose responses appear too many times (the threshold is set to 10 for Cornell, and 5 for DailyDialog), to prevent them from dominating the learning procedure. See Table TABREF25 for the sizes of the training and validation sets. The test set consists of 100 dialogs with four turns. We give more detailed description of how we create the test set in Section SECREF31." ], [ "We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. 
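To make the emotion-tracking path concrete, here is a minimal PyTorch sketch of the emotion encoder and the decoder-side concatenation described above. The LIWC lookup is external software and is represented only by its six-dimensional indicator output; all dimensions and names are illustrative assumptions, not the authors' released code, and the dialog-context vector c_t from the hierarchical attention encoder is omitted for brevity.

```python
import torch
import torch.nn as nn

class EmotionEncoder(nn.Module):
    """Embeds per-utterance LIWC indicator vectors and runs a GRU over them."""
    def __init__(self, n_categories=6, emo_dim=256, hidden_dim=256):
        super().__init__()
        self.embed = nn.Linear(n_categories, emo_dim)   # a_j = sigmoid(W_e 1(x_j) + b_e)
        self.gru = nn.GRU(emo_dim, hidden_dim, batch_first=True)

    def forward(self, indicators):                       # (batch, n_utterances, 6)
        a = torch.sigmoid(self.embed(indicators))
        _, h_last = self.gru(a)                          # final hidden state is e
        return h_last.squeeze(0)                         # (batch, hidden_dim)

class EmotionAwareDecoderStep(nn.Module):
    """One decoding step: GRU state concatenated with e, then a vocabulary softmax."""
    def __init__(self, vocab_size, word_dim=256, hidden_dim=256, emo_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.cell = nn.GRUCell(word_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim + emo_dim, vocab_size)

    def forward(self, prev_word, s_prev, e):
        s_t = self.cell(self.word_emb(prev_word), s_prev)
        o_t = torch.cat([s_t, e], dim=-1)                # o_t = [s_t ; e]
        return self.out(o_t), s_t                        # logits; softmax lives in the loss

# Training would minimize nn.CrossEntropyLoss() over these logits, matching the
# cross-entropy objective stated above.
```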
In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one.", "For all the models, the vocabulary consists of 20,000 most frequent words in the Cornell and DailyDialog datasets, plus three extra tokens: <unk> for words that do not exist in the vocabulary, <go> indicating the begin of an utterance, and <eos> indicating the end of an utterance. Here we summarize the configurations and parameters of our experiments:", "We set the word embedding size to 256. We initialized the word embeddings in the models with word2vec BIBREF22 vectors first trained on Cornell and then fine-tuned on DailyDialog, consistent with the training procedure of the models.", "We set the number of hidden units of each RNN to 256, the word-level attention depth to 256, and utterance-level 128. The output size of the emotion embedding layer is 256.", "We optimized the objective function using the Adam optimizer BIBREF23 with an initial learning rate of 0.001. We stopped training the models when the lowest perplexity on the validation sets was achieved.", "For prediction, we used beam search BIBREF24 with a beam width of 256." ], [ "The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work." ], [ "To develop a test set for human evaluation, we first selected the emotionally colored dialogs with exactly four turns from the DailyDialog dataset. In the dataset each dialog turn is annotated with a corresponding emotional category, including the neutral one. For our purposes we filtered out only those dialogs where more than a half of utterances have non-neutral emotional labels. This gave us 78 emotionally positive dialogs and 14 emotionally negative dialogs. In order to have a balanced test set with equal number of positive and negative dialogs, we recruited two English-speaking students from our university without any relationship to the authors' lab and instructed them to create five negative dialogs with four turns, as if they were interacting with another human, according to each of the following topics: relationships, entertainment, service, work and study, and everyday situations. Thus each person produced 25 dialogs, and in total we obtained 50 emotionally negative daily dialogs in addition to the 14 already available. To form the test set, we randomly selected 50 emotionally positive and 50 emotionally negative dialogs from the two pools of dialogs described above (78 positive dialogs from DailyDialog, 64 negative dialogs from DailyDialog and human-generated).", "For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. 
According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral." ], [ "Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted t-test on the perplexity obtained, and results show significant improvements (with $p$-value $<0.05$).", "Table TABREF34, TABREF35 and TABREF35 summarize the human evaluation results on the responses' grammatical correctness, contextual coherence, and emotional appropriateness, respectively. In the tables, we give the percentage of votes each model received for the three scores, the average score obtained with improvements over S2S, and the agreement score among the raters. Note that we report Fleiss' $\\kappa $ score BIBREF27 for contextual coherence and emotional appropriateness, and Finn's $r$ score BIBREF28 for grammatical correctness. We did not use Fleiss' $\\kappa $ score for grammatical correctness. As agreement is extremely high, this can make Fleiss' $\\kappa $ very sensitive to prevalence BIBREF29. On the contrary, we did not use Finn's $r$ score for contextual coherence and emotional appropriateness because it is only reasonable when the observed variance is significantly less than the chance variance BIBREF30, which did not apply to these two criteria. As shown in the tables, we got high agreement among the raters for grammatical correctness, and fair agreement among the raters for contextual coherence and emotional appropriateness. For grammatical correctness, all three models achieved high scores, which means all models are capable of generating fluent utterances that make sense. For contextual coherence and emotional appropriateness, MEED achieved higher average scores than S2S and HRAN, which means MEED keeps better track of the context and can generate responses that are emotionally more appropriate and natural. We conducted Friedman test BIBREF31 on the human evaluation results, showing the improvements of MEED are significant (with $p$-value $<0.01$)." ], [ "We present four sample dialogs in Table TABREF36, along with the responses generated by the three models. Dialog 1 and 2 are emotionally positive and dialog 3 and 4 are negative. For the first two examples, we can see that MEED is able to generate more emotional content (like “fun” and “congratulations”) that is appropriate according to the context. For dialog 4, MEED responds in sympathy to the other speaker, which is consistent with the second utterance in the context. On the contrary, HRAN poses a question in reply, contradicting the dialog history." ], [ "According to the Media Equation Theory BIBREF32, people respond to computers socially. This means humans expect talking to computers as they talk to other human beings. This is why we believe reproducing social and conversational intelligence will make social chatbots more believable and socially engaging. 
In this paper, we propose a multi-turn dialog system capable of generating emotionally appropriate responses, which is the first step toward such a goal. We have demonstrated how to do so by (1) modeling utterances with extra affect vectors, (2) creating an emotional encoding mechanism that learns emotion exchanges in the dataset, (3) curating a multi-turn dialog dataset, and (4) evaluating the model with offline and online experiments.", "As future work, we would like to investigate the diversity issue of the responses generated, possibly by extending the mutual information objective function BIBREF5 to multi-turn settings. We would also like to evaluate our model on a larger dataset, for example by extracting multi-turn dialogs from the OpenSubtitles corpus BIBREF33." ] ] }
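Looking back at the data preparation described in the Datasets subsection above (contexts of at most five preceding utterances, the following utterance as the response, and pairs containing any utterance longer than 30 tokens discarded), a minimal sketch of the pair construction could look as follows; the function and variable names are illustrative, not taken from the authors' code.

```python
def make_context_response_pairs(dialog, max_context=5, max_len=30):
    """dialog: list of tokenized utterances [x_1, ..., x_M], each a list of tokens.
    Builds U_i = (x_{s_i}, ..., x_i), y_i = x_{i+1} with s_i = max(1, i - 4),
    skipping pairs that contain an utterance longer than max_len tokens."""
    pairs = []
    for i in range(len(dialog) - 1):            # 0-based index of the last context utterance
        start = max(0, i - (max_context - 1))   # keep at most max_context context utterances
        context, response = dialog[start:i + 1], dialog[i + 1]
        if all(len(u) <= max_len for u in context + [response]):
            pairs.append((context, response))
    return pairs

# A three-turn dialog yields two (context, response) pairs:
# make_context_response_pairs([["hi"], ["hello", "there"], ["how", "are", "you", "?"]])
```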
{ "question": [ "How better is proposed method than baselines perpexity wise?", "How does the multi-turn dialog system learns?", "How is human evaluation performed?", "Is some other metrics other then perplexity measured?", "What two baseline models are used?" ], "question_id": [ "c034f38a570d40360c3551a6469486044585c63c", "9cbea686732b5b85f77868ca47d2f93cf34516ed", "6aee16c4f319a190c2a451c1c099b66162299a28", "4d4b9ff2da51b9e0255e5fab0b41dfe49a0d9012", "180047e1ccfc7c98f093b8d1e1d0479a4cca99cc" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Perplexity of proposed MEED model is 19.795 vs 19.913 of next best result on test set.", "evidence": [ "Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets. We also conducted t-test on the perplexity obtained, and results show significant improvements (with $p$-value $<0.05$).", "FLOAT SELECTED: Table 2: Perplexity scores achieved by the models. Validation set 1 comes from the Cornell dataset, while validation set 2 comes from the DailyDialog dataset." ], "highlighted_evidence": [ "Table TABREF34 gives the perplexity scores obtained by the three models on the two validation sets and the test set. As shown in the table, MEED achieves the lowest perplexity score on all three sets.", "FLOAT SELECTED: Table 2: Perplexity scores achieved by the models. Validation set 1 comes from the Cornell dataset, while validation set 2 comes from the DailyDialog dataset." ] } ], "annotation_id": [ "3e9e850087de48e5d3228f9b691cf66ce2f76a7d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Usually the probability distribution $p(\\mathbf {y}\\,|\\,\\mathbf {X})$ can be modeled by an RNN language model conditioned on $\\mathbf {X}$. When generating the word $y_t$ at time step $t$, the context $\\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution. The overall architecture of the model is depicted in Figure FIGREF4. We are going to elaborate on how to obtain $\\mathbf {c}_t$ and $\\mathbf {e}$, and how they are combined in the decoding part." 
], "highlighted_evidence": [ "When generating the word $y_t$ at time step $t$, the context $\\mathbf {X}$ is encoded into a fixed-sized dialog context vector $\\mathbf {c}_t$ by following the hierarchical attention structure in HRAN BIBREF13. Additionally, we extract the emotion information from the utterances in $\\mathbf {X}$ by leveraging an external text analysis program, and use an RNN to encode it into an emotion context vector $\\mathbf {e}$, which is combined with $\\mathbf {c}_t$ to produce the distribution." ] } ], "annotation_id": [ "1237400bc18aa2feb5b5b332cf59adb203fd6651" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "(1) grammatical correctness", "(2) contextual coherence", "(3) emotional appropriateness" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For human evaluation of the models, we recruited another four English-speaking students from our university without any relationship to the authors' lab to rate the responses generated by the models. Specifically, we randomly shuffled the 100 dialogs in the test set, then we used the first three utterances of each dialog as the input to the three models being compared and let them generate the responses. According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral." ], "highlighted_evidence": [ "According to the context given, the raters were instructed to evaluate the quality of the responses based on three criteria: (1) grammatical correctness—whether or not the response is fluent and free of grammatical mistakes; (2) contextual coherence—whether or not the response is context sensitive to the previous dialog history; (3) emotional appropriateness—whether or not the response conveys the right emotion and feels as if it had been produced by a human. For each criterion, the raters gave scores of either 0, 1 or 2, where 0 means bad, 2 means good, and 1 indicates neutral." ] } ], "annotation_id": [ "f2435e8054869e57ba5863e7f59aa3d71f02a192" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work." ], "highlighted_evidence": [ "The evaluation of chatbots remains an open problem in the field. Recent work BIBREF25 has shown that the automatic evaluation metrics borrowed from machine translation such as BLEU score BIBREF26 tend to align poorly with human judgement. Therefore, in this paper, we mainly adopt human evaluation, along with perplexity, following the existing work." 
] } ], "annotation_id": [ "0acfb84fc15d0f06485d0196203c9178db36f859" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ " sequence-to-sequence model (denoted as S2S)", "HRAN" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN. We chose S2S and HRAN as baselines because we would like to evaluate our model's capability to keep track of the multi-turn context and to produce emotionally more appropriate responses, respectively. In order to adapt S2S to the multi-turn setting, we concatenate all the history utterances in the context into one." ], "highlighted_evidence": [ "We compared our multi-turn emotionally engaging dialog model (denoted as MEED) with two baselines—the vanilla sequence-to-sequence model (denoted as S2S) and HRAN." ] } ], "annotation_id": [ "8d48966aa92b8ab8b8e1a03c138e1db25ba93db5" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: The overall architecture of our model.", "Table 1: Statistics of the two datasets.", "Table 2: Perplexity scores achieved by the models. Validation set 1 comes from the Cornell dataset, while validation set 2 comes from the DailyDialog dataset.", "Table 5: Human evaluation results on emotional appropriateness.", "Table 4: Human evaluation results on contextual coherence.", "Table 6: Sample responses for the three models." ], "file": [ "3-Figure1-1.png", "5-Table1-1.png", "7-Table2-1.png", "7-Table5-1.png", "7-Table4-1.png", "8-Table6-1.png" ] }
1703.03097
Information Extraction in Illicit Domains
Extracting useful entities and attribute values from illicit domains such as human trafficking is a challenging problem with the potential for widespread social impact. Such domains employ atypical language models, have `long tails' and suffer from the problem of concept drift. In this paper, we propose a lightweight, feature-agnostic Information Extraction (IE) paradigm specifically designed for such domains. Our approach uses raw, unlabeled text from an initial corpus, and a few (12-120) seed annotations per domain-specific attribute, to learn robust IE models for unobserved pages and websites. Empirically, we demonstrate that our approach can outperform feature-centric Conditional Random Field baselines by over 18\% F-Measure on five annotated sets of real-world human trafficking datasets in both low-supervision and high-supervision settings. We also show that our approach is demonstrably robust to concept drift, and can be efficiently bootstrapped even in a serial computing environment.
{ "section_name": [ "Introduction", "Related Work", "Approach", "Preprocessing", "Deriving Word Representations", "Applying High-Recall Recognizers", "Supervised Contextual Classifier", "Datasets and Ground-truths", "System", "Baselines", "Setup and Parameters", "Results", "Discussion", "Conclusion" ], "paragraphs": [ [ "Building knowledge graphs (KG) over Web corpora is an important problem that has galvanized effort from multiple communities over two decades BIBREF0 , BIBREF1 . Automated knowledge graph construction from Web resources involves several different phases. The first phase involves domain discovery, which constitutes identification of sources, followed by crawling and scraping of those sources BIBREF2 . A contemporaneous ontology engineering phase is the identification and design of key classes and properties in the domain of interest (the domain ontology) BIBREF3 .", "Once a set of (typically unstructured) data sources has been identified, an Information Extraction (IE) system needs to extract structured data from each page in the corpus BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . In IE systems based on statistical learning, sequence labeling models like Conditional Random Fields (CRFs) can be trained and used for tagging the scraped text from each data source with terms from the domain ontology BIBREF8 , BIBREF7 . With enough data and computational power, deep neural networks can also be used for a range of collective natural language tasks, including chunking and extraction of named entities and relationships BIBREF9 .", "While IE has been well-studied both for cross-domain Web sources (e.g. Wikipedia) and for traditional domains like biomedicine BIBREF10 , BIBREF11 , it is less well-studied (Section \"Preprocessing\" ) for dynamic domains that undergo frequent changes in content and structure. Such domains include news feeds, social media, advertising, and online marketplaces, but also illicit domains like human trafficking. Automatically constructing knowledge graphs containing important information like ages (of human trafficking victims), locations, prices of services and posting dates over such domains could have widespread social impact, since law enforcement and federal agencies could query such graphs to glean rapid insights BIBREF12 .", "Illicit domains pose some formidable challenges for traditional IE systems, including deliberate information obfuscation, non-random misspellings of common words, high occurrences of out-of-vocabulary and uncommon words, frequent (and non-random) use of Unicode characters, sparse content and heterogeneous website structure, to only name a few BIBREF12 , BIBREF13 , BIBREF14 . While some of these characteristics are shared by more traditional domains like chat logs and Twitter, both information obfuscation and extreme content heterogeneity are unique to illicit domains. While this paper only considers the human trafficking domain, similar kinds of problems are prevalent in other illicit domains that have a sizable Web (including Dark Web) footprint, including terrorist activity, and sales of illegal weapons and counterfeit goods BIBREF15 .", "As real-world illustrative examples, consider the text fragments `Hey gentleman im neWYOrk and i'm looking for generous...' and `AVAILABLE NOW! ?? - (4 two 4) six 5 two - 0 9 three 1 - 21'. In the first instance, the correct extraction for a Name attribute is neWYOrk, while in the second instance, the correct extraction for an Age attribute is 21. 
It is not obvious what features should be engineered in a statistical learning-based IE system to achieve robust performance on such text.", "To compound the problem, wrapper induction systems from the Web IE literature cannot always be applied in such domains, as many important attributes can only be found in text descriptions, rather than template-based Web extractors that wrappers traditionally rely on BIBREF6 . Constructing an IE system that is robust to these problems is an important first step in delivering structured knowledge bases to investigators and domain experts.", "In this paper, we study the problem of robust information extraction in dynamic, illicit domains with unstructured content that does not necessarily correspond to a typical natural language model, and that can vary tremendously between different Web domains, a problem denoted more generally as concept drift BIBREF16 . Illicit domains like human trafficking also tend to exhibit a `long tail'; hence, a comprehensive solution should not rely on information extractors being tailored to pages from a small set of Web domains.", "There are two main technical challenges that such domains present to IE systems. First, as the brief examples above illustrate, feature engineering in such domains is difficult, mainly due to the atypical (and varying) representation of information. Second, investigators and domain experts require a lightweight system that can be quickly bootstrapped. Such a system must be able to generalize from few ( $\\approx $ 10-150) manual annotations, but be incremental from an engineering perspective, especially since a given illicit Web page can quickly (i.e. within hours) become obsolete in the real world, and the search for leads and information is always ongoing. In effect, the system should be designed for streaming data.", "We propose an information extraction approach that is able to address the challenges above, especially the variance between Web pages and the small training set per attribute, by combining two sequential techniques in a novel paradigm. The overall approach is illustrated in Figure 1 . First, a high-recall recognizer, which could range from an exhaustive Linked Data source like GeoNames (e.g. for extracting locations) to a simple regular expression (e.g. for extracting ages), is applied to each page in the corpus to derive a set of candidate annotations for an attribute per page. In the second step, we train and apply a supervised feature-agnostic classification algorithm, based on learning word representations from random projections, to classify each candidate as correct/incorrect for its attribute.", "Contributions We summarize our main contributions as follows: (1) We present a lightweight feature-agnostic information extraction system for a highly heterogeneous, illicit domain like human trafficking. Our approach is simple to implement, does not require extensive parameter tuning, infrastructure setup and is incremental with respect to the data, which makes it suitable for deployment in streaming-corpus settings. (2) We show that the approach shows good generalization even when only a small corpus is available after the initial domain-discovery phase, and is robust to the problem of concept drift encountered in large Web corpora. (3) We test our approach extensively on a real-world human trafficking corpus containing hundreds of thousands of Web pages and millions of unique words, many of which are rare and highly domain-specific. 
Evaluations show that our approach outperforms traditional Named Entity Recognition baselines that require manual feature engineering. Specific empirical highlights are provided below.", "Empirical highlights Comparisons against CRF baselines based on the latest Stanford Named Entity Resolution system (including pre-trained models as well as new models that we trained on human trafficking data) show that, on average, across five ground-truth datasets, our approach outperforms the next best system on the recall metric by about 6%, and on the F1-measure metric by almost 20% in low-supervision settings (30% training data), and almost 20% on both metrics in high-supervision settings (70% training data). Concerning efficiency, in a serial environment, we are able to derive word representations on a 43 million word corpus in under an hour. Degradation in average F1-Measure score achieved by the system is less than 2% even when the underlying raw corpus expands by a factor of 18, showing that the approach is reasonably robust to concept drift.", "Structure of the paper Section \"Preprocessing\" describes some related work on Information Extraction. Section \"Approach\" provides details of key modules in our approach. Section \"Evaluations\" describes experimental evaluations, and Section \"Conclusion\" concludes the work." ], [ "Information Extraction (IE) is a well-studied research area both in the Natural Language Processing community and in the World Wide Web, with the reader referred to the survey by Chang et al. for an accessible coverage of Web IE approaches BIBREF17 . In the NLP literature, IE problems have predominantly been studied as Named Entity Recognition and Relationship Extraction BIBREF7 , BIBREF18 . The scope of Web IE has been broad in recent years, extending from wrappers to Open Information Extraction (OpenIE) BIBREF6 , BIBREF19 .", "In the Semantic Web, domain-specific extraction of entities and properties is a fundamental aspect in constructing instance-rich knowledge bases (from unstructured corpora) that contribute to the Semantic Web vision and to ecosystems like Linked Open Data BIBREF20 , BIBREF21 . A good example of such a system is Lodifier BIBREF22 . This work is along the same lines, in that we are interested in user-specified attributes and wish to construct a knowledge base (KB) with those attribute values using raw Web corpora. However, we are not aware of any IE work in the Semantic Web that has used word representations to accomplish this task, or that has otherwise outperformed state-of-the-art systems without manual feature engineering.", "The work presented in this paper is structurally similar to the geolocation prediction system (from Twitter) by Han et al. and also ADRMine, an adverse drug reaction (ADR) extraction system from social media BIBREF23 , BIBREF24 . Unlike these works, our system is not optimized for specific attributes like locations and drug reactions, but generalizes to a range of attributes. Also, as mentioned earlier, illicit domains involve challenges not characteristic of social media, notably information obfuscation.", "In recent years, state-of-the-art results have been achieved in a variety of NLP tasks using word representation methods like neural embeddings BIBREF25 . Unlike the problem covered in this paper, those papers typically assume an existing KB (e.g. Freebase), and attempt to infer additional facts in the KB using word representations. 
In contrast, we study the problem of constructing and populating a KB per domain-specific attribute from scratch with only a small set of initial annotations from crawled Web corpora.", "The problem studied in this paper also has certain resemblances to OpenIE BIBREF19 . One assumption in OpenIE systems is that a given fact (codified, for example, as an RDF triple) is observed in multiple pages and contexts, which allows the system to learn new `extraction patterns' and rank facts by confidence. In illicit domains, a `fact' may only be observed once; furthermore, the arcane and high-variance language models employed in the domain makes direct application of any extraction pattern-based approach problematic. To the best of our knowledge, the specific problem of devising feature-agnostic, low-supervision IE approaches for illicit Web domains has not been studied in prior work." ], [ "Figure 1 illustrates the architecture of our approach. The input is a Web corpus containing relevant pages from the domain of interest, and high-recall recognizers (described in Section \"Applying High-Recall Recognizers\" ) typically adapted from freely available Web resources like Github and GeoNames. In keeping with the goals of this work, we do not assume that this initial corpus is static. That is, following an initial short set-up phase, more pages are expected to be added to the corpus in a streaming fashion. Given a set of pre-defined attributes (e.g. City, Name, Age) and around 10-100 manually verified annotations for each attribute, the goal is to learn an IE model that accurately extracts attribute values from each page in the corpus without relying on expert feature engineering. Importantly, while the pages are single-domain (e.g. human trafficking) they are multi-Web domain, meaning that the system must not only handle pages from new websites as they are added to the corpus, but also concept drift in the new pages compared to the initial corpus." ], [ "The first module in Figure 1 is an automated pre-processing algorithm that takes as input a streaming set of HTML pages. In real-world illicit domains, the key information of interest to investigators (e.g. names and ages) typically occurs either in the text or the title of the page, not the template of the website. Even when the information occasionally occurs in a template, it must be appropriately disambiguated to be useful. Wrapper-based IE systems BIBREF6 are often inapplicable as a result. As a first step in building a more suitable IE model, we scrape the text from each HTML website by using a publicly available text extractor called the Readability Text Extractor (RTE). Although multiple tools are available for text extraction from HTML BIBREF26 , our early trials showed that RTE is particularly suitable for noisy Web domains, owing to its tuneability, robustness and support for developers. We tune RTE to achieve high recall, thus ensuring that the relevant text in the page is captured in the scraped text with high probability. Note that, because of the varied structure of websites, such a setting also introduces noise in the scraped text (e.g. wayward HTML tags). Furthermore, unlike natural language documents, scraped text can contain many irrelevant numbers, Unicode and punctuation characters, and may not be regular. Because of the presence of numerous tab and newline markers, there is no obvious natural language sentence structure in the scraped text. 
In the most general case, we found that RTE returned a set of strings, with each string corresponding to a set of sentences.", "To serialize the scraped text as a list of tokens, we use the word and sentence tokenizers from the NLTK package on each RTE string output BIBREF27 . We apply the sentence tokenizer first, and to each sentence returned (which often does not correspond to an actual sentence due to rampant use of extraneous punctuation characters) by the sentence tokenizer, we apply the standard NLTK word tokenizer. The final output of this process is a list of tokens. In the rest of this section, this list of tokens is assumed as representing the HTML page from which the requisite attribute values need to be extracted." ], [ "In principle, given some annotated data, a sequence labeling model like a Conditional Random Field (CRF) can be trained and applied on each block of scraped text to extract values for each attribute BIBREF8 , BIBREF7 . In practice, as we empirically demonstrate in Section \"Evaluations\" , CRFs prove to be problematic for illicit domains. First, the size of the training data available for each CRF is relatively small, and because of the nature of illicit domains, methods like distant supervision or crowdsourcing cannot be used in an obvious timely manner to elicit annotations from users. A second problem with CRFs, and other traditional machine learning models, is the careful feature engineering that is required for good performance. With small amounts of training data, good features are essential for generalization. In the case of illicit domains, it is not always clear what features are appropriate for a given attribute. Even common features like capitalization can be misleading, as there are many capitalized words in the text that are not of interest (and vice versa).", "To alleviate feature engineering and manual annotation effort, we leverage the entire raw corpus in our model learning phase, rather than just the pages that have been annotated. Specifically, we use an unsupervised algorithm to represent each word in the corpus in a low-dimensional vector space. Several algorithms exist in the literature for deriving such representations, including neural embedding algorithms such as Word2vec BIBREF25 and the algorithm by Bollegala et al. BIBREF28 , as well as simpler alternatives BIBREF29 .", "Given the dynamic nature of streaming illicit-domain data, and the numerous word representation learning algorithms in the literature, we adapted the random indexing (RI) algorithm for deriving contextual word representations BIBREF29 . Random indexing methods mathematically rely on the Johnson-Lindenstrauss Lemma, which states that if points in a vector space are of sufficiently high dimension, then they may be projected into a suitable lower-dimensional space in a way which approximately preserves the distances between the points.", "The original random indexing algorithm was designed for incremental dimensionality reduction and text mining applications. We adapt this algorithm for learning word representations in illicit domains. 
Before describing these adaptations, we define some key concepts below.", "Definition: Given parameters $d \\in \\mathbb {Z}^{+}$ and $r \\in [0, 1]$ , a context vector is defined as a $d-$ dimensional vector, of which exactly $\\lfloor d r \\rfloor $ elements are randomly set to $+1$ , exactly $\\lfloor d r \\rfloor $ elements are randomly set to $-1$ and the remaining $d-2\\lfloor d r \\rfloor $ elements are set to 0.", "We denote the parameters $d$ and $r$ in the definition above as the dimension and sparsity ratio parameters respectively.", "Intuitively, a context vector is defined for every atomic unit in the corpus. Let us denote the universe of atomic units as $U$ , assumed to be a partially observed countably infinite set. In the current scenario, every unigram (a single `token') in the dataset is considered an atomic unit. Extending the definition to also include higher-order ngrams is straightforward, but was found to be unnecessary in our early empirical investigations. The universe is only partially observed because of the incompleteness (i.e. streaming, dynamic nature) of the initial corpus.", "The actual vector space representation of an atomic unit is derived by defining an appropriate context for the unit. Formally, a context is an abstract notion that is used for assigning distributional semantics to the atomic unit. The distributional semantics hypothesis (also called Firth's axiom) states that the semantics of an atomic unit (e.g. a word) is defined by the contexts in which it occurs BIBREF30 .", "In this paper, we only consider short contexts appropriate for noisy streaming data. In this vein, we define the notion of a $(u, v)$ -context window below:", "Given a list $t$ of atomic units and an integer position $0<i\\le |t|$ , a $(u, v)$ -context window is defined by the set $S-t[i]$ , where $S$ is the set of atomic units inclusively spanning positions $max(i-u, 1)$ and $min(i+v, |t|)$ ", "Using just these two definitions, a naive version of the RI algorithm is illustrated in Figure 2 for the sentence `the cow jumped over the moon', assuming a $(2,2)$ -context window and unigrams as atomic units. For each new word encountered by the algorithm, a context vector (Definition \"Deriving Word Representations\" ) is randomly generated, and the representation vector for the word is initialized to the 0 vector. Once generated, the context vector for the word remains fixed, but the representation vector is updated with each occurrence of the word.", "The update happens as follows. Given the context of the word (ranging from a set of 2-4 words), an aggregation is first performed on the corresponding context vectors. In Figure 2 , for example, the aggregation is an unweighted sum. Using the aggregated vector (denoted by the symbol $\\vec{a}$ ), we update the representation vector using the equation below, with $\\vec{w}_i$ being the representation vector derived after the $i^{th}$ occurrence of word $w$ : ", "$$\\vec{w}_{i+1} = \\vec{w}_i+\\vec{a}$$ (Eq. 9) ", "In principle, using this simple algorithm, we could learn a vector space representation for every atomic unit. One issue with a naive embedding of every atomic unit into a vector space is the presence of rare atomic units. These are especially prevalent in illicit domains, not just in the form of rare words, but also as sequences of Unicode characters, sequences of HTML tags, and numeric units (e.g.
phone numbers), each of which only occurs a few times (often, only once) in the corpus.", "To address this issue, we define below the notion of a compound unit that is based on a pre-specified condition.", "Given a universe $U$ of atomic units and a binary condition $R: U \\rightarrow \\lbrace True,False\\rbrace $ , the compound unit $C_R$ is defined as the largest subset of $U$ such that $R$ evaluates to True on every member of $C_R$ .", "Example: For `rare' words, we could define the compound unit high-idf-units to contain all atomic units that are below some document frequency threshold (e.g. 1%) in the corpus.", "In our implemented prototype, we defined six mutually exclusive compound units, described and enumerated in Table 1 . We modify the naive RI algorithm by only learning a single vector for each compound unit. Intuitively, each atomic unit $w$ in a compound unit $C$ is replaced by a special dummy symbol $w_C$ ; hence, after algorithm execution, each atomic unit in $C$ is represented by the single vector $\\vec{w}_C$ ." ], [ "For a given attribute (e.g. City) and a given corpus, we define a recognizer as a function that, if known, can be used to exactly determine the instances of the attribute occurring in the corpus. Formally, A recognizer $R_A$ for attribute $A$ is a function that takes a list $t$ of tokens and positions $i$ and $j >= i$ as inputs, and returns True if the tokens contiguously spanning $t[i]:t[j]$ are instances of $A$ , and False otherwise. It is important to note that, per the definition above, a recognizer cannot annotate latent instances that are not directly observed in the list of tokens.", "Since the `ideal' recognizer is not known, the broad goal of IE is to devise models that approximate it (for a given attribute) with high accuracy. Accuracy is typically measured in terms of precision and recall metrics. We formulate a two-pronged approach whereby, rather than develop a single recognizer that has both high precision and recall (and requires considerable expertise to design), we first obtain a list of candidate annotations that have high recall in expectation, and then use supervised classification in a second step to improve precision of the candidate annotations.", "More formally, let $R_A$ be denoted as an $\\eta $ -recall recognizer if the expected recall of $R_A$ is at least $\\eta $ . Due to the explosive growth in data, many resources on the Web can be used for bootstrapping recognizers that are `high-recall' in that $\\eta $ is in the range of 90-100%. The high-recall recognizers currently used in the prototype described in this paper (detailed further in Section \"System\" ) rely on knowledge bases (e.g. GeoNames) from Linked Open Data BIBREF20 , dictionaries from the Web and broad heuristics, such as regular expression extractors, found in public Github repositories. In our experience, we found that even students with basic knowledge of GitHub and Linked Open Data sources are able to construct such recognizers. One important reason why constructing such recognizers is relatively hassle-free is because they are typically monotonic i.e. new heuristics and annotation sources can be freely integrated, since we do not worry about precision at this step.", "We note that in some cases, domain knowledge alone is enough to guarantee 100% recall for well-designed recognizers for certain attributes. 
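As a concrete illustration of the random indexing scheme defined above (sparse ternary context vectors, a (u, v)-context window, the additive update of Eq. 9, and the replacement of rare or numeric tokens by compound-unit dummy symbols), here is a simplified, unoptimized sketch; the parameter defaults follow the d=200, r=0.01 setting reported later, and everything else is an assumption rather than the authors' implementation.

```python
import numpy as np
from collections import defaultdict

def context_vector(d=200, r=0.01, rng=np.random):
    """d-dimensional vector with floor(d*r) entries at +1, floor(d*r) at -1, rest 0."""
    vec = np.zeros(d)
    idx = rng.choice(d, size=2 * int(d * r), replace=False)
    half = len(idx) // 2
    vec[idx[:half]], vec[idx[half:]] = 1.0, -1.0
    return vec

def random_index(token_lists, d=200, r=0.01, u=2, v=2, replace=lambda t: t):
    """token_lists: token lists, one per page. `replace` maps a token to its
    compound-unit dummy symbol (identity here); each unit gets a fixed random
    context vector and an incrementally updated representation vector."""
    ctx = defaultdict(lambda: context_vector(d, r))
    rep = defaultdict(lambda: np.zeros(d))
    for tokens in token_lists:
        units = [replace(t) for t in tokens]
        for i, w in enumerate(units):
            window = units[max(0, i - u):i] + units[i + 1:i + 1 + v]
            a = sum((ctx[c] for c in window), np.zeros(d))  # unweighted sum of context vectors
            rep[w] += a                                     # w_{i+1} = w_i + a  (Eq. 9)
    return rep
```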
In HT, this is true for location attributes like city and state, since advertisements tend to state locations without obfuscation, and we use GeoNames, an exhaustive knowledge base of locations, as our recognizer. Manual inspection of the ground-truth data showed that the recall of utilized recognizers for attributes like Name and Age are also high (in many cases, 100%). Thus, although 100% recall cannot be guaranteed for any recognizer, it is still reasonable to assume that $\\eta $ is high.", "A much more difficult problem is engineering a recognizer to simultaneously achieve high recall and high precision. Even for recognizers based on curated knowledge bases like GeoNames, many non-locations get annotated as locations. For example, the word `nice' is a city in France, but is also a commonly occurring adjective. Other common words like `for', `hot', `com', `kim' and `bella' also occur in GeoNames as cities and would be annotated. Using a standard Named Entity Recognition system does not always work because of the language modeling problem (e.g. missing capitalization) in illicit domains. In the next section, we show how the context surrounding the annotated word can be used to classify the annotation as correct or incorrect. We note that, because the recognizers are high-recall, a successful classifier would yield both high precision and recall." ], [ "To address the precision problem, we train a classifier using contextual features. Rather than rely on a domain expert to provide a set of hand-crafted features, we derive a feature vector per candidate annotation using the notion of a context window (Definition \"Deriving Word Representations\" ) and the word representation vectors derived in Section \"Deriving Word Representations\" . This process of supervised contextual classification is illustrated in Figure 3 .", "Specifically, for each annotation (which could comprise multiple contiguous tokens e.g. `Salt Lake City' in the list of tokens representing the website) annotated by a recognizer, we consider the tokens in the $(u, v)$ -context window around the annotation. We aggregate the vectors of those tokens into a single vector by performing an unweighted sum, followed by $l2$ -normalization. We use this aggregate vector as the contextual feature vector for that annotation. Note that, unlike the representation learning phase, where the surrounding context vectors were aggregated into an existing representation vector, the contextual feature vector is obtained by summing the actual representation vectors.", "For each attribute, a supervised machine learning classifier (e.g. random forest) is trained using between 12-120 labeled annotations, and for new data, the remaining annotations can be classified using the trained classifier. Although the number of dimensions in the feature vectors is quite low compared to tf-idf vectors (hundreds vs. millions), a second round of dimensionality reduction can be applied by using (either supervised or unsupervised) feature selection for further empirical benefits (Section \"Evaluations\" )." ], [ "We train the word representations on four real-world human trafficking datasets of increasing size, the details of which are provided in Table 2 . Since we assume a `streaming' setting in this paper, each larger dataset in Table 2 is a strict superset of the smaller datasets. 
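As a brief aside before the data description continues, the supervised contextual classification step described above can be sketched as follows. This is an illustrative reconstruction rather than the released implementation: the function names are ours, the (2,2) window, unweighted sum, l2-normalization and random-forest classifier follow the description in the text, and the k-best feature-selection step mentioned later is omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def contextual_feature(tokens, rep_vec, start, end, u=2, v=2):
    """Sum the representation vectors of the tokens in the (u, v)-context
    window around the annotation tokens[start:end+1], then l2-normalize.
    Assumes the annotation has at least one neighboring token."""
    window = tokens[max(start - u, 0):start] + tokens[end + 1:end + 1 + v]
    agg = np.sum([rep_vec[w] for w in window], axis=0)
    norm = np.linalg.norm(agg)
    return agg / norm if norm > 0 else agg

def train_annotation_classifier(features, labels):
    """Fit a random forest on a small set of manually verified candidate
    annotations (label True = correct extraction, False = incorrect)."""
    clf = RandomForestClassifier()
    clf.fit(np.vstack(features), labels)
    return clf
```

In this sketch, the trained classifier is then applied to the contextual feature vectors of all remaining candidate annotations produced by the high-recall recognizers.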
The largest dataset is itself a subset of the overall human trafficking corpus that was scraped as part of research conducted in the DARPA MEMEX program.", "Since ground-truth extractions for the corpus are unknown, we randomly sampled websites from the overall corpus, applied four high-recall recognizers described in Section \"System\" , and for each annotated set, manually verified whether the extractions were correct or incorrect for the corresponding attribute. The details of this sampled ground-truth are captured in Table 3 . Each annotation set is named using the format GT-{RawField}-{AnnotationAttribute}, where RawField can be either the HTML title or the scraped text (Section \"Preprocessing\" ). and AnnotationAttribute is the attribute of interest for annotation purposes." ], [ "The overall system requires developing two components for each attribute: a high-recall recognizer and a classifier for pruning annotations. We developed four high-recall recognizers, namely GeoNames-Cities, GeoNames-States, RegEx-Ages and Dictionary-Names. The first two of these relies on the freely available GeoNames dataset BIBREF31 ; we use the entire dataset for our experiments, which involves modeling each GeoNames dictionary as a trie, owing to its large memory footprint. For extracting ages, we rely on simple regular expressions and heuristics that were empirically verified to capture a broad set of age representations. For the name attribute, we gather freely available Name dictionaries on the Web, in multiple countries and languages, and use the dictionaries in a case-insensitive recognition algorithm to locate names in the raw field (i.e. text or title)." ], [ "We use different variants of the Stanford Named Entity Recognition system (NER) as our baselines BIBREF7 . For the first set of baselines, we use two pre-trained models trained on different English language corpora. Specifically, we use the 3-Class and 4-Class pre-trained models. We use the LOCATION class label for determining city and state annotations, and the PERSON label for name annotations. Unfortunately, there is no specific label corresponding to age annotations in the pre-trained models; hence, we do not use the pre-trained models as age annotation baselines.", "It is also possible to re-train the underlying NER system on a new dataset. For the second set of baselines, therefore, we re-train the NER models by randomly sampling 30% and 70% of each annotation set in Table 3 respectively, with the remaining annotations used for testing. The features and values that were employed in the re-trained models are enumerated in Table 4 . Further documentation on these feature settings may be found on the NERFeatureFactory page. All training and testing experiments were done in ten independent trials. We use default parameter settings, and report average results for each experimental run. Experimentation using other configurations, features and values is left for future studies." ], [ "Parameter tuning System parameters were set as follows. The number of dimensions in Definition \"Deriving Word Representations\" was set at 200, and the sparsity ratio was set at 0.01. These parameters are similar to those suggested in previous word representation papers; they were also found to yield intuitive results on semantic similarity experiments (described further in Section \"Discussion\" ). To avoid the problem of rare words, numbers, punctuation and tags, we used the six compound unit classes earlier described in Table 1 . 
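To make these settings concrete, the sketch below instantiates the context-vector construction and the additive update of Equation 9 with d=200 and r=0.01, after mapping rare tokens to their compound-unit symbols. It is a simplified, hypothetical rendering of the described random indexing procedure, not the released code; the helper names are ours.

```python
import numpy as np
from collections import defaultdict

D, R = 200, 0.01   # dimension and sparsity ratio reported above

def new_context_vector(d=D, r=R, rng=np.random):
    """d-dimensional vector with floor(d*r) entries +1, floor(d*r) entries -1, rest 0."""
    k = int(np.floor(d * r))
    vec = np.zeros(d)
    idx = rng.choice(d, size=2 * k, replace=False)
    vec[idx[:k]] = 1.0
    vec[idx[k:]] = -1.0
    return vec

context_vec = {}                            # fixed once generated per unit
rep_vec = defaultdict(lambda: np.zeros(D))  # updated on every occurrence

def update_representations(tokens, u=2, v=2, to_compound=lambda w: w):
    """One streaming pass of the naive random indexing update over a token
    list; `to_compound` maps rare tokens (high-idf words, numbers, tags, ...)
    to a single dummy symbol so that each compound unit shares one vector."""
    units = [to_compound(w) for w in tokens]
    for w in units:
        if w not in context_vec:
            context_vec[w] = new_context_vector()
    for i, w in enumerate(units):
        window = units[max(i - u, 0):i] + units[i + 1:i + 1 + v]
        if not window:
            continue
        agg = np.sum([context_vec[c] for c in window], axis=0)  # unweighted sum
        rep_vec[w] = rep_vec[w] + agg                           # Equation 9
```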
In all experiments where defining a context was required, we used symmetric $(2,2)$ -context windows; using bigger windows was not found to offer much benefit. We trained a random forest model with default hyperparameters (10 trees, with Gini Impurity as the split criterion) as the supervised classifier, used supervised k-best feature selection with $k$ set to 20 (Section \"Supervised Contextual Classifier\" ), and with the Analysis of Variance (ANOVA) F-statistic between class label and feature used as the feature scoring function.", "Because of the class skew in Table 3 (i.e. the `positive' class is typically much smaller than the `negative' class) we oversampled the positive class for balanced training of the supervised contextual classifier.", "Metrics The metrics used for evaluating IE effectiveness are Precision, Recall and F1-measure.", "Implementation In the interests of demonstrating a reasonably lightweight system, all experiments in this paper were run on a serial iMac with a 4 GHz Intel core i7 processor and 32 GB RAM. All code (except the Stanford NER code) was written in the Python programming language, and has been made available on a public Github repository with documentation and examples. We used Python's Scikit-learn library (v0.18) for the machine learning components of the prototype." ], [ "Performance against baselines Table 5 illustrates system performance on Precision, Recall and F1-Measure metrics against the re-trained and pre-trained baseline models, where the re-trained model and our approach were trained on 30% of the annotations in Table 3 . We used the word representations derived from the D-ALL corpus. On average, the proposed system performs the best on F1-Measure and recall metrics. The re-trained NER is the most precise system, but at the cost of much less recall ( $<$ 30%). The good performance of the pre-trained baseline on the City attribute demonstrates the importance of having a large training corpus, even if the corpus is not directly from the test domain. On the other hand, the complete failure of the pre-trained baseline on the Name attribute illustrates the dangers of using out-of-domain training data. As noted earlier, language models in illicit domains can significantly differ from natural language models; in fact, names in human trafficking websites are often represented in a variety of misleading ways.", "Recognizing that 30% training data may constitute a sample size too small to make reliable judgments, we also tabulate the results in Table 6 when the training percentage is set at 70. Performance improves for both the re-trained baseline and our system. Performance declines for the pre-trained baseline, but this may be because of the sparseness of positive annotations in the smaller test set.", "We also note that performance is relatively well-balanced for our system; on all datasets and all metrics, the system achieves scores greater than 50%. 
This suggests that our approach has a degree of robustness that the CRFs are unable to achieve; we believe that this is a direct consequence of using contextual word representation-based feature vectors.", "Runtimes We recorded the runtimes for learning word representations using the random indexing algorithm described earlier on the four datasets in Table 2 , and plot the runtimes in Figure 4 as a function of the total number of words in each corpus.", "In agreement with the expected theoretical time-complexity of random indexing, the empirical run-time is linear in the number of words, for fixed parameter settings. More importantly, the absolute times show that the algorithm is extremely lightweight: on the D-ALL corpus, we are able to learn representations in under an hour.", "We note that these results do not employ any obvious parallelization or the multi-core capabilities of the machine. The linear scaling properties of the algorithm show that it can be used even for very large Web corpora. In future, we will investigate an implementation of the algorithm in a distributed setting.", "Robustness to corpus size and quality One issue with using large corpora to derive word representations is concept drift. The D-ALL corpora, for example, contains tens of different Web domains, even though they all pertain to human trafficking. An interesting empirical issue is whether a smaller corpus (e.g. D-10K or D-50K) contains enough data for the derived word representations to converge to reasonable values. Not only would this alleviate initial training times, but it would also partially compensate for concept drift, since it would be expected to contain fewer unique Web domains.", "Tables 7 and 8 show that such generalization is possible. The best F1-Measure performance, in fact, is achieved for D-10K, although the average F1-Measures vary by a margin of less than 2% on all cases. We cite this as further evidence of the robustness of the overall approach.", "Effects of feature selection Finally, we evaluate the effects of feature selection in Figure 5 on the GT-Text-Name dataset, with training percentage set at 30. The results show that, although performance is reasonably stable for a wide range of $k$ , some feature selection is necessary for better generalization." ], [ "Table 9 contains some examples (in bold) of cities that got correctly extracted, with the bold term being assigned the highest score by the contextual classifier that was trained for cities. The examples provide good evidence for the kinds of variation (i.e. concept drift) that are often observed in real-world human trafficking data over multiple Web domains. Some domains, for example, were found to have the same kind of structured format as the second row of Table 9 (i.e. Location: followed by the actual locations), but many other domains were far more heterogeneous.", "The results in this section also illustrate the merits of unsupervised feature engineering and contextual supervision. In principle, there is no reason why the word representation learning module in Figure 1 cannot be replaced by a more adaptive algorithm like Word2vec BIBREF25 . We note again that, before applying such algorithms, it is important to deal with the heterogeneity problem that arises from having many different Web domains present in the corpus. 
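The qualitative robustness checks reported next rely on simple cosine-similarity nearest-neighbour queries in the induced vector space; a minimal sketch of such a query is shown below. The function name is ours, and the representation dictionary is assumed to come from the random indexing step described earlier.

```python
import numpy as np

def nearest_neighbors(seed, rep_vec, k=2):
    """Return the k tokens whose representation vectors are most similar
    (by cosine similarity) to the seed token's vector."""
    q = rep_vec[seed]
    q_norm = np.linalg.norm(q)
    scored = []
    for w, v in rep_vec.items():
        if w == seed:
            continue
        denom = q_norm * np.linalg.norm(v)
        if denom > 0:
            scored.append((float(np.dot(q, v)) / denom, w))
    scored.sort(reverse=True)
    return [w for _, w in scored[:k]]
```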
While earlier results in this section (Tables 7 and 8 ) showed that random indexing is reasonably stable as more websites are added to the corpus, we also verify this robustness qualitatively using a few domain-specific examples in Table 10 . We ran the qualitative experiment as follows: for each seed token (e.g. `tall'), we searched for the two nearest neighbors in the semantic space induced by random indexing by applying cosine similarity, using two different word representation datasets (D-10K and D-ALL). As the results in Table 10 show, the induced distributional semantics are stable; even when the nearest neighbors are different (e.g. for `tall'), their semantics still tend to be similar.", "Another important point implied by both the qualitative and quantitative results on D-10K is that random indexing is able to generalize quickly even on small amounts of data. To the best of our knowledge, it is currently an open question (theoretically and empirically), at the time of writing, whether state-of-the-art neural embedding-based word representation learners can (1) generalize on small quantities of data, especially in a single epoch (`streaming data') (2) adequately compensate for concept drift with the same degree of robustness, and in the same lightweight manner, as the random indexing method that we adapted and evaluated in this paper. A broader empirical study on this issue is warranted.", "Concerning contextual supervision, we qualitatively visualize the inputs to the contextual city classifier using the t-SNE tool BIBREF32 . We use the ground-truth labels to determine the color of each point in the projected 2d space. The plot in Figure 6 shows that there is a reasonable separation of labels; interestingly there are also `sub-clusters' among the positively labeled points. Each sub-cluster provides evidence for a similar context; the number of sub-clusters even in this small sample of points again illustrates the heterogeneity in the underlying data.", "A last issue that we mention is the generalization of the method to more unconventional attributes than the ones evaluated herein. In ongoing work, we have experimented with more domain-specific attributes such as ethnicity (of escorts), and have achieved similar performance. In general, the presented method is applicable whenever the context around the extraction is a suitable clue for disambiguation." ], [ "In this paper, we presented a lightweight, feature-agnostic Information Extraction approach that is suitable for illicit Web domains. Our approach relies on unsupervised derivation of word representations from an initial corpus, and the training of a supervised contextual classifier using external high-recall recognizers and a handful of manually verified annotations. Experimental evaluations show that our approach can outperform feature-centric CRF-based approaches for a range of generic attributes. Key modules of our prototype are publicly available (see footnote 15) and can be efficiently bootstrapped in a serial computing environment. Some of these modules are already being used in real-world settings. For example, they were recently released as tools for graduate-level participants in the End Human Trafficking hackathon organized by the office of the District Attorney of New York. 
At the time of writing, the system is being actively maintained and updated.", "Acknowledgements The authors gratefully acknowledge the efforts of Lingzhe Teng, Rahul Kapoor and Vinay Rao Dandin, for sampling and producing the ground-truths in Table 3 . This research is supported by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under contract number FA8750-14-C-0240. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, AFRL, or the U.S. Government." ] ] }
{ "question": [ "Do they evaluate on relation extraction?" ], "question_id": [ "fb3687ea05d38b5e65fdbbbd1572eacd82f56c0b" ], "nlp_background": [ "five" ], "topic_background": [ "familiar" ], "paper_read": [ "no" ], "search_query": [ "information extraction" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0ad1d8f66373467a9b6614d57b8cf33c0f5897f8" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] } ] }
{ "caption": [ "Figure 1: A high-level overview of the proposed information extraction approach", "Figure 2: An example illustrating the naive Random Indexing algorithm with unigram atomic units and a (2, 2)-context window as context", "Figure 3: An illustration of supervised contextual classification on an example annotation (‘Phoenix’)", "Table 1: The compound units implemented in the current prototype", "Table 2: Four human trafficking corpora for which word representations are (independently) learned", "Table 4: Stanford NER features that were used for re-training the model on our annotation sets", "Table 3: Five ground-truth datasets on which the classifier (Section 3.4) and baselines are evaluated", "Figure 4: Empirical run-time of the adapted random indexing algorithm on the corpora in Table 2", "Figure 5: Effects of additional feature selection on the GT-Text-Name dataset (30% training data)", "Table 8: A comparison of F1-Measure scores of our system (70% training data), with word representations trained on different corpora", "Table 7: A comparison of F1-Measure scores of our system (30% training data), with word representations trained on different corpora", "Table 5: Comparative results of three systems on precision (P), recall (R) and F1-Measure (F) when training percentage is 30. For the pre-trained baselines, we only report the best results across all applicable models", "Figure 6: Visualizing city contextual classifier inputs (with colors indicating ground-truth labels) using the t-SNE tool", "Table 10: Examples of semantic similarity using random indexing vectors from D-10K and D-ALL" ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "5-Table1-1.png", "6-Table2-1.png", "6-Table4-1.png", "6-Table3-1.png", "7-Figure4-1.png", "8-Figure5-1.png", "8-Table8-1.png", "8-Table7-1.png", "8-Table5-1.png", "9-Figure6-1.png", "9-Table10-1.png" ] }
1808.09409
Semantic Role Labeling for Learner Chinese: the Importance of Syntactic Parsing and L2-L1 Parallel Data
This paper studies semantic parsing for interlanguage (L2), taking semantic role labeling (SRL) as a case task and learner Chinese as a case language. We first manually annotate the semantic roles for a set of learner texts to derive a gold standard for automatic SRL. Based on the new data, we then evaluate three off-the-shelf SRL systems, i.e., the PCFGLA-parser-based, neural-parser-based and neural-syntax-agnostic systems, to gauge how successful SRL for learner Chinese can be. We find two non-obvious facts: 1) the L1-sentence-trained systems perform rather badly on the L2 data; 2) the performance drop from the L1 data to the L2 data of the two parser-based systems is much smaller, indicating the importance of syntactic parsing in SRL for interlanguages. Finally, the paper introduces a new agreement-based model to explore the semantic coherency information in the large-scale L2-L1 parallel data. We then show that such information is very effective for enhancing SRL for learner texts. Our model achieves an F-score of 72.06, which is a 2.02 point improvement over the best baseline.
{ "section_name": [ "Introduction", "An L2-L1 Parallel Corpus", "The Annotation Process", "Inter-annotator Agreement", "Three SRL Systems", "Main Results", "Analysis", "Enhancing SRL with L2-L1 Parallel Data", "The Method", "Experimental Setup", "Conclusion", "Acknowledgement" ], "paragraphs": [ [ "A learner language (interlanguage) is an idiolect developed by a learner of a second or foreign language which may preserve some features of his/her first language. Previously, encouraging results of automatically building the syntactic analysis of learner languages were reported BIBREF0 , but it is still unknown how semantic processing performs, while parsing a learner language (L2) into semantic representations is the foundation of a variety of deeper analysis of learner languages, e.g., automatic essay scoring. In this paper, we study semantic parsing for interlanguage, taking semantic role labeling (SRL) as a case task and learner Chinese as a case language.", "Before discussing a computation system, we first consider the linguistic competence and performance. Can human robustly understand learner texts? Or to be more precise, to what extent, a native speaker can understand the meaning of a sentence written by a language learner? Intuitively, the answer is towards the positive side. To validate this, we ask two senior students majoring in Applied Linguistics to carefully annotate some L2-L1 parallel sentences with predicate–argument structures according to the specification of Chinese PropBank BIBREF1 , which is developed for L1. A high inter-annotator agreement is achieved, suggesting the robustness of language comprehension for L2. During the course of semantic annotation, we find a non-obvious fact that we can re-use the semantic annotation specification, Chinese PropBank in our case, which is developed for L1. Only modest rules are needed to handle some tricky phenomena. This is quite different from syntactic treebanking for learner sentences, where defining a rich set of new annotation heuristics seems necessary BIBREF2 , BIBREF0 , BIBREF3 .", "Our second concern is to mimic the human's robust semantic processing ability by computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 . and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts.", "While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. 
This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent.", "Our last concern is to explore the potential of a large-scale set of L2-L1 parallel sentences to enhance SRL systems. We find that semantic structures of the L2-L1 parallel sentences are highly consistent. This inspires us to design a novel agreement-based model to explore such semantic coherency information. In particular, we define a metric for comparing predicate–argument structures and searching for relatively good automatic syntactic and semantic annotations to extend the training data for SRL systems. Experiments demonstrate the value of the L2-L1 parallel sentences as well as the effectiveness of our method. We achieve an F-score of 72.06, which is a 2.02 percentage point improvement over the best neural-parser-based baseline.", "To the best of our knowledge, this is the first time that the L2-L1 parallel data is utilized to enhance NLP systems for learner texts.", "For research purpose, we have released our SRL annotations on 600 sentence pairs and the L2-L1 parallel dataset ." ], [ "An L2-L1 parallel corpus can greatly facilitate the analysis of a learner language BIBREF9 . Following mizumoto:2011, we collected a large dataset of L2-L1 parallel texts of Mandarin Chinese by exploring “language exchange\" social networking services (SNS), i.e., Lang-8, a language-learning website where native speakers can freely correct the sentences written by foreign learners. The proficiency levels of the learners are diverse, but most of the learners, according to our judgment, is of intermediate or lower level.", "Our initial collection consists of 1,108,907 sentence pairs from 135,754 essays. As there is lots of noise in raw sentences, we clean up the data by (1) ruling out redundant content, (2) excluding sentences containing foreign words or Chinese phonetic alphabet by checking the Unicode values, (3) dropping overly simple sentences which may not be informative, and (4) utilizing a rule-based classifier to determine whether to include the sentence into the corpus.", "The final corpus consists of 717,241 learner sentences from writers of 61 different native languages, in which English and Japanese constitute the majority. As for completeness, 82.78% of the Chinese Second Language sentences on Lang-8 are corrected by native human annotators. One sentence gets corrected approximately 1.53 times on average.", "In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese. It is from the above corpus that we carefully select 600 pairs of L2-L1 parallel sentences. We would choose the most appropriate one among multiple versions of corrections and recorrect the L1s if necessary. Because word structure is very fundamental for various NLP tasks, our annotation also contains gold word segmentation for both L2 and L1 sentences. Note that there are no natural word boundaries in Chinese text. 
We first employ a state-of-the-art word segmentation system to produce initial segmentation results and then manually fix segmentation errors.", "The dataset includes four typologically different mother tongues, i.e., English (ENG), Japanese (JPN), Russian (RUS) and Arabic (ARA). Sub-corpus of each language consists of 150 sentence pairs. We take the mother languages of the learners into consideration, which have a great impact on grammatical errors and hence automatic semantic analysis. We hope that four selected mother tongues guarantee a good coverage of typologies. The annotated corpus can be used both for linguistic investigation and as test data for NLP systems." ], [ "Semantic role labeling (SRL) is the process of assigning semantic roles to constituents or their head words in a sentence according to their relationship to the predicates expressed in the sentence. Typical semantic roles can be divided into core arguments and adjuncts. The core arguments include Agent, Patient, Source, Goal, etc, while the adjuncts include Location, Time, Manner, Cause, etc.", "To create a standard semantic-role-labeled corpus for learner Chinese, we first annotate a 50-sentence trial set for each native language. Two senior students majoring in Applied Linguistics conducted the annotation. Based on a total of 400 sentences, we adjudicate an initial gold standard, adapting and refining CPB specification as our annotation heuristics. Then the two annotators proceed to annotate a 100-sentence set for each language independently. It is on these larger sets that we report the inter-annotator agreement.", "In the final stage, we also produce an adjudicated gold standard for all 600 annotated sentences. This was achieved by comparing the annotations selected by each annotator, discussing the differences, and either selecting one as fully correct or creating a hybrid representing the consensus decision for each choice point. When we felt that the decisions were not already fully guided by the existing annotation guidelines, we worked to articulate an extension to the guidelines that would support the decision.", "During the annotation, the annotators apply both position labels and semantic role labels. Position labels include S, B, I and E, which are used to mark whether the word is an argument by itself, or at the beginning or in the middle or at the end of a argument. As for role labels, we mainly apply representations defined by CPB BIBREF1 . The predicate in a sentence was labeled as rel, the core semantic roles were labeled as AN and the adjuncts were labeled as AM." ], [ "For inter-annotator agreement, we evaluate the precision (P), recall (R), and F1-score (F) of the semantic labels given by the two annotators. Table TABREF5 shows that our inter-annotator agreement is promising. All L1 texts have F-score above 95, and we take this as a reflection that our annotators are qualified. F-scores on L2 sentences are all above 90, just a little bit lower than those of L1, indicating that L2 sentences can be greatly understood by native speakers. 
Only modest rules are needed to handle some tricky phenomena:", "The labeled argument should be strictly limited to the core roles defined in the frameset of CPB, though the number of arguments in L2 sentences may be more or less than the number defined.", "For the roles in L2 that cannot be labeled as arguments under the specification of CPB, if they provide semantic information such as time, location and reason, we would labeled them as adjuncts though they may not be well-formed adjuncts due to the absence of function words.", "For unnecessary roles in L2 caused by mistakes of verb subcategorization (see examples in Figure FIGREF30 ), we would leave those roles unlabeled.", "Table TABREF10 further reports agreements on each argument (AN) and adjunct (AM) in detail, according to which the high scores are attributed to the high agreement on arguments (AN). The labels of A3 and A4 have no disagreement since they are sparse in CPB and are usually used to label specific semantic roles that have little ambiguity.", "We also conducted in-depth analysis on inter-annotator disagreement. For further details, please refer to duan2018argument." ], [ "The work on SRL has included a broad spectrum of machine learning and deep learning approaches to the task. Early work showed that syntactic information is crucial for learning long-range dependencies, syntactic constituency structure and global constraints BIBREF10 , BIBREF11 , while initial studies on neural methods achieved state-of-the-art results with little to no syntactic input BIBREF12 , BIBREF13 , BIBREF14 , BIBREF7 . However, the question whether fully labeled syntactic structures provide an improvement for neural SRL is still unsettled pending further investigation.", "To evaluate the robustness of state-of-the-art SRL algorithms, we evaluate two representative SRL frameworks. One is a traditional syntax-based SRL system that leverages a syntactic parser and manually crafted features to obtain explicit information to find semantic roles BIBREF15 , BIBREF16 In particular, we employ the system introduced in BIBREF4 . This system first collects all c-commanders of a predicate in question from the output of a parser and puts them in order. It then employs a first order linear-chain global linear model to perform semantic tagging. For constituent parsing, we use two parsers for comparison, one is Berkeley parser BIBREF5 , a well-known implementation of the unlexicalized latent variable PCFG model, the other is a minimal span-based neural parser based on independent scoring of labels and spans BIBREF6 . As proposed in BIBREF6 , the second parser is capable of achieving state-of-the-art single-model performance on the Penn Treebank. On the Chinese TreeBank BIBREF17 , it also outperforms the Berkeley parser for the in-domain test. We call the corresponding SRL systems as the PCFGLA-parser-based and neural-parser-based systems.", "The second SRL framework leverages an end-to-end neural model to implicitly capture local and non-local information BIBREF12 , BIBREF7 . In particular, this framework treats SRL as a BIO tagging problem and uses a stacked BiLSTM to find informative embeddings. We apply the system introduced in BIBREF7 for experiments. Because all syntactic information (including POS tags) is excluded, we call this system the neural syntax-agnostic system.", "To train the three SRL systems as well as the supporting parsers, we use the CTB and CPB data . 
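For concreteness, the BIO-tagging formulation used by the neural syntax-agnostic system can be sketched as below. This is only a schematic PyTorch sketch under our own assumptions about dimensions; the actual model of BIBREF7 additionally marks the predicate position and uses highway connections and other refinements.

```python
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Schematic SRL-as-BIO tagger: embeddings -> stacked BiLSTM -> per-token label scores."""
    def __init__(self, vocab_size, num_labels, emb_dim=100, hidden=200, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        hidden_states, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden_states)         # (batch, seq_len, num_labels)
```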
In particular, the sentences selected for the CoNLL 2009 shared task are used here for parameter estimation. Note that, since the Berkeley parser is based on a PCFGLA grammar, it may fail to produce syntactic outputs for some sentences, while the other parser does not have that problem. In this case, we have made sure that both parsers can parse all 1,200 sentences successfully." ], [ "The overall performances of the three SRL systems on both the L1 and L2 data (150 parallel sentences for each mother tongue) are shown in Table TABREF11 . For all systems, significant decreases across different mother languages can be consistently observed, highlighting the weakness of applying L1-sentence-trained systems to process learner texts. Comparing the two syntax-based systems with the neural syntax-agnostic system, we find that the overall $\Delta F$, which denotes the F-score drop from L1 to L2, is smaller in the syntax-based framework than in the syntax-agnostic system. On English, Japanese and Russian L2 sentences, the syntax-based system achieves better performance though it sometimes works worse on the corresponding L1 sentences, indicating that the syntax-based systems are more robust when handling learner texts.", "Furthermore, the neural-parser-based system achieves the best overall performance on the L2 data. Though performing slightly worse than the neural syntax-agnostic one on the L1 data, it has a much smaller $\Delta F$, showing that as the syntactic analysis improves, the performances on both the L1 and L2 data grow, while the gap can be maintained. This again demonstrates the importance of syntax in semantic construction, especially for learner texts.", "Table TABREF45 summarizes the SRL results of the baseline PCFGLA-parser-based model as well as its corresponding retrained models. Since both the syntactic parser and the SRL classifier can be retrained and thus enhanced, we report the individual impact of each as well as the combined one. We can clearly see that when the PCFGLA parser is retrained with the SRL-consistent sentence pairs, it is able to provide better SRL-oriented syntactic analysis for the L2 sentences as well as their corrections, which are essentially L1 sentences. The outputs of the L1 sentences that are generated by the deep SRL system are also useful for improving the linear SRL classifier. A non-obvious fact is that such a retrained model yields better analysis for not only L1 but also L2 sentences. Fortunately, combining the two yields a further improvement.", "Table TABREF46 shows the results of the parallel experiments based on the neural parser. Different from the PCFGLA model, the SRL-consistent trees only yield a slight improvement on the L2 data. On the contrary, retraining the SRL classifier is much more effective. This experiment highlights the different strengths of the two parsing frameworks. Though the neural parser performs better on the standard in-domain test and is thus increasingly popular, for some other scenarios the PCFGLA model is stronger.", "Table TABREF47 further shows F-scores for the baseline and the both-retrained model relative to each role type in detail. Given that the F-scores for both models are equal to 0 on A3 and A4, we omit this part. From the table we can observe that all the semantic roles achieve significant improvements in performance."
], [ "To better understand the overall results, we further look deep into the output by addressing the questions:", "What types of error negatively impact both systems over learner texts?", "What types of error are more problematic for the neural syntax-agnostic one over the L2 data but can be solved by the syntax-based one to some extent?", "We first carry out a suite of empirical investigations by breaking down error types for more detailed evaluation. To compare two systems, we analyze results on ENG-L2 and JPN-L2 given that they reflect significant advantages of the syntax-based systems over the neural syntax-agnostic system. Note that the syntax-based system here refers to the neural-parser-based one. Finally, a concrete study on the instances in the output is conducted, as to validate conclusions in the previous step.", "We employ 6 oracle transformations designed by he2017deep to fix various prediction errors sequentially (see details in Table TABREF19 ), and observe the relative improvements after each operation, as to obtain fine-grained error types. Figure FIGREF21 compares two systems in terms of different mistakes on ENG-L2 and JPN-L2 respectively. After fixing the boundaries of spans, the neural syntax-agnostic system catches up with the other, illustrating that though both systems handle boundary detection poorly on the L2 sentences, the neural syntax-agnostic one suffers more from this type of errors.", "Excluding boundary errors (after moving, merging, splitting spans and fixing boundaries), we also compare two systems on L2 in terms of detailed label identification, so as to observe which semantic role is more likely to be incorrectly labeled. Figure FIGREF24 shows the confusion matrices. Comparing (a) with (c) and (b) with (d), we can see that the syntax-based and the neural system often overly label A1 when processing learner texts. Besides, the neural syntax-agnostic system predicts the adjunct AM more than necessary on L2 sentences by 54.24% compared with the syntax-based one.", "On the basis of typical error types found in the previous stage, specifically, boundary detection and incorrect labels, we further conduct an on-the-spot investigation on the output sentences.", "Previous work has proposed that the drop in performance of SRL systems mainly occurs in identifying argument boundaries BIBREF18 . According to our results, this problem will be exacerbated when it comes to L2 sentences, while syntactic structure sometimes helps to address this problem.", "Figure FIGREF30 is an example of an output sentence. The Chinese word “也” (also) usually serves as an adjunct but is now used for linking the parallel structure “用 汉语 也 说话 快” (using Chinese also speaking quickly) in this sentence, which is ill-formed to native speakers and negatively affects the boundary detection of A0 for both systems.", "On the other hand, the neural system incorrectly takes the whole part before “很 难” (very hard) as A0, regardless of the adjunct “对 我 来说” (for me), while this can be figured out by exploiting syntactic analysis, as illustrated in Figure FIGREF30 . The constituent “对 我 来说” (for me) has been recognized as a prepositional phrase (PP) attached to the VP, thus labeled as AM. This shows that by providing information of some well-formed sub-trees associated with correct semantic roles, the syntactic system can perform better than the neural one on SRL for learner texts.", "A second common source of errors is wrong labels, especially for A1. 
Based on our quantitative analysis, as reported in Table TABREF37 , these phenomena are mainly caused by mistakes of verb subcategorization, where the systems label more arguments than are allowed by the predicates. Besides, the deep end-to-end system is also likely to incorrectly attach adjuncts AM to the predicates.", "Figure FIGREF30 is another example. The Chinese verb "做饭" (cook-meal) is intransitive while this sentence takes it as a transitive verb, which is very common in L2. Lacking proper verb subcategorization, both systems fail to recognize those verbs allowing only one argument and label the A1 incorrectly.", "As for AM, the neural system mistakenly adds the adjunct to the predicate, which can be avoided by using the syntactic information of the sentence shown in Figure FIGREF30 . The constituent "常常" (often) is an adjunct attached to the VP structure governed by the verb "练习" (practice), and hence will not be labeled as AM with respect to the verb "做饭" (cook-meal). In other words, the hierarchical structure can help in argument identification and assignment by exploiting local information." ], [ "We explore the valuable information about semantic coherency encoded in the L2-L1 parallel data to improve SRL for learner Chinese. In particular, we introduce an agreement-based model to search for high-quality automatic syntactic and semantic role annotations, and then use these annotations to retrain the two parser-based SRL systems." ], [ "For the purpose of harvesting good automatic syntactic and semantic analyses, we consider the consistency between the automatically produced analysis of a learner sentence and that of its corresponding well-formed sentence. Determining the measurement metric for comparing predicate–argument structures, however, presents another challenge, because the words of the L2 sentence and its L1 counterpart do not necessarily match. To solve the problem, we use an automatic word aligner. BerkeleyAligner BIBREF19 , a state-of-the-art tool for obtaining a word alignment, is utilized.", "The metric for comparing the SRL results of two sentences is based on recall of $(p, a, r)$ tuples, where $p$ is a predicate, $a$ is a word that is in an argument or adjunct of $p$ and $r$ is the corresponding role. Based on a word alignment, we define a shared tuple as a mutual tuple between the two SRL results of an L2-L1 sentence pair, meaning that both the predicate and argument words are aligned respectively, and their role relations are the same. We then have two recall values:", "L2-recall is (# of shared tuples) / (# of tuples of the result in L2)", "L1-recall is (# of shared tuples) / (# of tuples of the result in L1)", "In accordance with the above evaluation method, we select the automatic analyses of the highest-scoring sentences and use them to expand the training data. Sentences whose L1-recall and L2-recall are both greater than a threshold $\tau$ are taken as good ones. A parser-based SRL system consists of two essential modules: a syntactic parser and a semantic classifier. To enhance the syntactic parser, the automatically generated syntactic trees of the sentence pairs that exhibit high semantic consistency are directly used to extend the training data. To improve the semantic classifier, besides the consistent semantic analyses, we also use the outputs that the neural syntax-agnostic SRL system generates for the L1 (but not the L2) data." ], [ "Our SRL corpus contains 1,200 sentences in total that can be used as an evaluation set for SRL systems.
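Before describing the data split, a minimal sketch of the agreement computation above is given below. It is an illustrative reconstruction, not the authors' code: SRL outputs are taken as sets of (predicate index, argument-word index, role) tuples, the word alignment is assumed to be given as a set of (L2 index, L1 index) pairs from the aligner, and the threshold argument corresponds to $\tau$ above.

```python
def shared_tuples(l2_tuples, l1_tuples, alignment):
    """Count tuples whose predicate and argument words are aligned across the
    L2-L1 pair and whose role labels match. Each tuple is (pred, arg, role);
    `alignment` is a set of (l2_index, l1_index) word-alignment pairs."""
    shared = 0
    for p2, a2, r2 in l2_tuples:
        if any(r2 == r1 and (p2, p1) in alignment and (a2, a1) in alignment
               for p1, a1, r1 in l1_tuples):
            shared += 1
    return shared

def is_consistent_pair(l2_tuples, l1_tuples, alignment, tau):
    """Keep the sentence pair if both recalls exceed the threshold tau
    (tau itself is tuned on development data)."""
    s = shared_tuples(l2_tuples, l1_tuples, alignment)
    l2_recall = s / len(l2_tuples) if l2_tuples else 0.0
    l1_recall = s / len(l1_tuples) if l1_tuples else 0.0
    return l2_recall > tau and l1_recall > tau
```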
We separate them into three data sets. The first data set is used as development data; it contains 50 L2-L1 sentence pairs for each language and 200 pairs in total. Hyperparameters are tuned using the development set. The second data set contains all other 400 L2 sentences and is used as test data for L2. Similarly, all other 400 L1 sentences are used as test data for L1.", "The sentence pool for extracting retraining annotations includes all English- and Japanese-native speakers' data along with its corrections. Table TABREF43 presents the basic statistics. Around 8.5–11.9% of the sentences can be taken as high L1/L2-recall sentences, which reflects that argument structure is vital for language acquisition and difficult for learners to master, as proposed in vazquez2004learning and shin2010contribution. The threshold ($\tau$) for selecting sentences is tuned on the development data. For example, we use an additional 156,520 sentences to enhance the Berkeley parser." ], [ "Statistical models for annotating learner texts are making rapid progress. Although there have been some initial studies on defining annotation specifications as well as corpora for syntactic analysis, there is almost no work on semantic parsing for interlanguages. This paper discusses this topic, taking Semantic Role Labeling as a case task and learner Chinese as a case language. We reveal three previously unknown facts that are important for a deeper analysis of learner languages: (1) the robustness of language comprehension for interlanguage, (2) the weakness of applying L1-sentence-trained systems to process learner texts, and (3) the significance of syntactic parsing and L2-L1 parallel data in building more generalizable SRL models that transfer better to L2. By exploring L2-L1 parallel data, we have provided a better SRL-oriented syntactic parser as well as a better semantic classifier for processing the L2 data, supported by a significant numeric improvement over a number of state-of-the-art systems. To the best of our knowledge, this is the first work that demonstrates the effectiveness of large-scale L2-L1 parallel data in enhancing NLP systems for learner texts." ], [ "This work was supported by the National Natural Science Foundation of China (61772036, 61331011) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. We also thank Nianwen Xue for useful comments on the final version. Weiwei Sun is the corresponding author." ] ] }
{ "question": [ "What is the baseline model for the agreement-based mode?", "Do the authors suggest why syntactic parsing is so important for semantic role labelling for interlanguages?", "Who manually annotated the semantic roles for the set of learner texts?" ], "question_id": [ "b5d6357d3a9e3d5fdf9b344ae96cddd11a407875", "f33a21c6a9c75f0479ffdbb006c40e0739134716", "8a1d4ed00d31c1f1cb05bc9d5e4f05fe87b0e5a4" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "irony", "irony", "irony" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "PCFGLA-based parser, viz. Berkeley parser BIBREF5", "minimal span-based neural parser BIBREF6" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our second concern is to mimic the human's robust semantic processing ability by computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 . and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts." ], "highlighted_evidence": [ "Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 ." ] } ], "annotation_id": [ "0adb8e4cfb7d0907d69fb75e06419e00bdeee18b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "syntax-based system may generate correct syntactic analyses for partial grammatical fragments" ], "yes_no": null, "free_form_answer": "", "evidence": [ "While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent." 
], "highlighted_evidence": [ "While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL." ] } ], "annotation_id": [ "7391d39fcb6dbaedfc5ab71e250256e0ca7bcfdc" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Authors", "evidence": [ "In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese. It is from the above corpus that we carefully select 600 pairs of L2-L1 parallel sentences. We would choose the most appropriate one among multiple versions of corrections and recorrect the L1s if necessary. Because word structure is very fundamental for various NLP tasks, our annotation also contains gold word segmentation for both L2 and L1 sentences. Note that there are no natural word boundaries in Chinese text. We first employ a state-of-the-art word segmentation system to produce initial segmentation results and then manually fix segmentation errors." ], "highlighted_evidence": [ "In this paper, we manually annotate the predicate–argument structures for the 600 L2-L1 pairs as the basis for the semantic analysis of learner Chinese." ] } ], "annotation_id": [ "67be6b92cdb2ea380a1c9a3b33f5f6a9236b1503" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Inter-annotator agreement.", "Table 2: Inter-annotator agreement (F-scores) relative to languages and role types.", "Table 3: Performances of the syntax-based and neural syntax-agnostic SRL systems on the L1 and L2 data. “ALL” denotes the overall performance.", "Table 4: Oracle transformations paired with the relative error reduction after each operation. The operations are permitted only if they do not cause any overlapping arguments", "Figure 1: Relative improvements of performance after doing each type of oracle transformation in sequence over ENG-L2 and JPN-L2", "Figure 2: Confusion matrix for each semantic role (here we add up matrices of ENG-L2 and JPNL2). The predicted labels are only counted in three cases: (1) The predicated boundaries match the gold span boundaries. (2) The predicated argument does not overlap with any the gold span (Gold labeled as “O”). (3) The gold argument does not overlap with any predicted span (Prediction labeled as “O”).", "Figure 3: Two examples for SRL outputs of both systems and the corresponding syntactic analysis for the L2 sentences", "Table 5: Causes of labeling unnecessary A1", "Table 6: Statistics of unlabeled data.", "Table 7: Accuracies different PCFGLA-parserbased models on the two test data sets.", "Table 8: Accuracies of different neural-parserbased models on the two test data sets.", "Table 9: F-scores of the baseline and the bothretrained models relative to role types on the two data sets. We only list results of the PCFGLAparser-based system." ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "5-Table4-1.png", "6-Figure1-1.png", "6-Figure2-1.png", "7-Figure3-1.png", "7-Table5-1.png", "8-Table6-1.png", "8-Table7-1.png", "9-Table8-1.png", "9-Table9-1.png" ] }
1808.00265
Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining
A key aspect of interpretable VQA models is their ability to ground their answers in relevant regions of the image. Current approaches with this capability rely on supervised learning and human-annotated groundings to train attention mechanisms inside the VQA architecture. Unfortunately, obtaining human annotations specifically for visual grounding is difficult and expensive. In this work, we demonstrate that we can effectively train a VQA architecture with grounding supervision that can be automatically obtained from available region descriptions and object annotations. We also show that our model trained with this mined supervision generates visual groundings that achieve a higher correlation with manually annotated groundings, while achieving state-of-the-art VQA accuracy.
{ "section_name": [ "Introduction", "Related Work", "VQA Model Structure", "Mining Attention Supervision from Visual Genome", "Implementation Details", "Datasets", "Results", "Conclusions" ], "paragraphs": [ [ "We are interested in the problem of visual question answering (VQA), where an algorithm is presented with an image and a question that is formulated in natural language and relates to the contents of the image. The goal of this task is to get the algorithm to correctly answer the question. The VQA task has recently received significant attention from the computer vision community, in particular because obtaining high accuracies would presumably require precise understanding of both natural language as well as visual stimuli. In addition to serving as a milestone towards visual intelligence, there are practical applications such as development of tools for the visually impaired.", "The problem of VQA is challenging due to the complex interplay between the language and visual modalities. On one hand, VQA algorithms must be able to parse and interpret the input question, which is provided in natural language BIBREF0 , BIBREF1 , BIBREF2 . This may potentially involve understanding of nouns, verbs and other linguistic elements, as well as their visual significance. On the other hand, the algorithms must analyze the image to identify and recognize the visual elements relevant to the question. Furthermore, some questions may refer directly to the contents of the image, but may require external, common sense knowledge to be answered correctly. Finally, the algorithms should generate a textual output in natural language that correctly answers the input visual question. In spite of the recent research efforts to address these challenges, the problem remains largely unsolved BIBREF3 .", "We are particularly interested in giving VQA algorithms the ability to identify the visual elements that are relevant to the question. In the VQA literature, such ability has been implemented by attention mechanisms. Such attention mechanisms generate a heatmap over the input image, which highlights the regions of the image that lead to the answer. These heatmaps are interpreted as groundings of the answer to the most relevant areas of the image. Generally, these mechanisms have either been considered as latent variables for which there is no supervision, or have been treated as output variables that receive direct supervision from human annotations. Unfortunately, both of these approaches have disadvantages. First, unsupervised training of attention tends to lead to models that cannot ground their decision in the image in a human interpretable manner. Second, supervised training of attention is difficult and expensive: human annotators may consider different regions to be relevant for the question at hand, which entails ambiguity and increased annotation cost. Our goal is to leverage the best of both worlds by providing VQA algorithms with interpretable grounding of their answers, without the need of direct and explicit manual annotation of attention.", "From a practical point of view, as autonomous machines are increasingly finding real world applications, there is an increasing need to provide them with suitable capabilities to explain their decisions. However, in most applications, including VQA, current state-of-the-art techniques operate as black-box models that are usually trained using a discriminative approach. 
Similarly to BIBREF4 , in this work we show that, in the context of VQA, such approaches lead to internal representations that do not capture the underlying semantic relations between textual questions and visual information. Consequently, as we show in this work, current state-of-the-art approaches for VQA are not able to support their answers with a suitable interpretable representation.", "In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training.", "The contributions of this paper are: (1) we introduce a mechanism to automatically obtain meaningful attention supervision from both region descriptions and object annotations in the Visual Genome dataset; (2) we show that by using the prediction of region and object label attention maps as auxiliary tasks in a VQA application, it is possible to obtain more interpretable intermediate representations. (3) we experimentally demonstrate state-of-the-art performances in VQA benchmarks as well as visual grounding that closely matches human attention annotations." ], [ "Since its introduction BIBREF0 , BIBREF1 , BIBREF2 , the VQA problem has attracted an increasing interest BIBREF3 . Its multimodal nature and more precise evaluation protocol than alternative multimodal scenarios, such as image captioning, help to explain this interest. Furthermore, the proliferation of suitable datasets and potential applications, are also key elements behind this increasing activity. Most state-of-the-art methods follow a joint embedding approach, where deep models are used to project the textual question and visual input to a joint feature space that is then used to build the answer. Furthermore, most modern approaches pose VQA as a classification problem, where classes correspond to a set of pre-defined candidate answers. As an example, most entries to the VQA challenge BIBREF2 select as output classes the most common 3000 answers in this dataset, which account for 92% of the instances in the validation set.", "The strategy to combine the textual and visual embeddings and the underlying structure of the deep model are key design aspects that differentiate previous works. Antol et al. BIBREF2 propose an element-wise multiplication between image and question embeddings to generate spatial attention map. Fukui et al. BIBREF5 propose multimodal compact bilinear pooling (MCB) to efficiently implement an outer product operator that combines visual and textual representations. Yu et al. BIBREF6 extend this pooling scheme by introducing a multi-modal factorized bilinear pooling approach (MFB) that improves the representational capacity of the bilinear operator. They achieve this by adding an initial step that efficiently expands the textual and visual embeddings to a high-dimensional space. In terms of structural innovations, Noh et al. 
BIBREF7 embed the textual question as an intermediate dynamic bilinear layer of a ConvNet that processes the visual information. Andreas et al. BIBREF8 propose a model that learns a set of task-specific neural modules that are jointly trained to answer visual questions.", "Following the successful introduction of soft attention in neural machine translation applications BIBREF9 , most modern VQA methods also incorporate a similar mechanism. The common approach is to use a one-way attention scheme, where the embedding of the question is used to generate a set of attention coefficients over a set of predefined image regions. These coefficients are then used to weight the embedding of the image regions to obtain a suitable descriptor BIBREF10 , BIBREF11 , BIBREF5 , BIBREF12 , BIBREF6 . More elaborated forms of attention has also been proposed. Xu and Saenko BIBREF13 suggest use word-level embedding to generate attention. Yang et al. BIBREF14 iterates the application of a soft-attention mechanism over the visual input as a way to progressively refine the location of relevant cues to answer the question. Lu et al. BIBREF15 proposes a bidirectional co-attention mechanism that besides the question guided visual attention, also incorporates a visual guided attention over the input question.", "In all the previous cases, the attention mechanism is applied using an unsupervised scheme, where attention coefficients are considered as latent variables. Recently, there have been also interest on including a supervised attention scheme to the VQA problem BIBREF4 , BIBREF16 , BIBREF17 . Das et al. BIBREF4 compare the image areas selected by humans and state-of-the-art VQA techniques to answer the same visual question. To achieve this, they collect the VQA human attention dataset (VQA-HAT), a large dataset of human attention maps built by asking humans to select images areas relevant to answer questions from the VQA dataset BIBREF2 . Interestingly, this study concludes that current machine-generated attention maps exhibit a poor correlation with respect to the human counterpart, suggesting that humans use different visual cues to answer the questions. At a more fundamental level, this suggests that the discriminative nature of most current VQA systems does not effectively constraint the attention modules, leading to the encoding of discriminative cues instead of the underlying semantic that relates a given question-answer pair. Our findings in this work support this hypothesis.", "Related to the work in BIBREF4 , Gan et al. BIBREF16 apply a more structured approach to identify the image areas used by humans to answer visual questions. For VQA pairs associated to images in the COCO dataset, they ask humans to select the segmented areas in COCO images that are relevant to answer each question. Afterwards, they use these areas as labels to train a deep learning model that is able to identify attention features. By augmenting a standard VQA technique with these attention features, they are able to achieve a small boost in performance. Closely related to our approach, Qiao et al. BIBREF17 use the attention labels in the VQA-HAT dataset to train an attention proposal network that is able to predict image areas relevant to answer a visual question. This network generates a set of attention proposals for each image in the VQA dataset, which are used as labels to supervise attention in the VQA model. This strategy results in a small boost in performance compared with a non-attentional strategy. 
In contrast to our approach, these previous works are based on a supervised attention scheme that does not consider an automatic mechanism to obtain the attention labels. Instead, they rely on human annotated groundings as attention supervision. Furthermore, they differ from our work in the method to integrate attention labels to a VQA model." ], [ "Figure FIGREF2 shows the main pipeline of our VQA model. We mostly build upon the MCB model in BIBREF5 , which exemplifies current state-of-the-art techniques for this problem. Our main innovation to this model is the addition of an Attention Supervision Module that incorporates visual grounding as an auxiliary task. Next we describe the main modules behind this model.", "Question Attention Module: Questions are tokenized and passed through an embedding layer, followed by an LSTM layer that generates the question features INLINEFORM0 , where INLINEFORM1 is the maximum number of words in the tokenized version of the question and INLINEFORM2 is the dimensionality of the hidden state of the LSTM. Additionally, following BIBREF12 , a question attention mechanism is added that generates question attention coefficients INLINEFORM3 , where INLINEFORM4 is the so-called number of “glimpses”. The purpose of INLINEFORM5 is to allow the model to predict multiple attention maps so as to increase its expressiveness. Here, we use INLINEFORM6 . The weighted question features INLINEFORM7 are then computed using a soft attention mechanism BIBREF9 , which is essentially a weighted sum of the INLINEFORM8 word features followed by a concatenation according to INLINEFORM9 .", "Image Attention Module: Images are passed through an embedding layer consisting of a pre-trained ConvNet model, such as Resnet pretrained with the ImageNet dataset BIBREF18 . This generates image features INLINEFORM0 , where INLINEFORM1 , INLINEFORM2 and INLINEFORM3 are depth, height, and width of the extracted feature maps. Fusion Module I is then used to generate a set of image attention coefficients. First, question features INLINEFORM4 are tiled as the same spatial shape of INLINEFORM5 . Afterwards, the fusion module models the joint relationship INLINEFORM6 between questions and images, mapping them to a common space INLINEFORM7 . In the simplest case, one can implement the fusion module using either concatenation or Hadamard product BIBREF19 , but more effective pooling schemes can be applied BIBREF5 , BIBREF20 , BIBREF12 , BIBREF6 . The design choice of the fusion module remains an on-going research topic. In general, it should both effectively capture the latent relationship between multi-modal features meanwhile be easy to optimize. The fusion results are then passed through an attention module that computes the visual attention coefficient INLINEFORM8 , with which we can obtain attention-weighted visual features INLINEFORM9 . Again, INLINEFORM10 is the number of “glimpses”, where we use INLINEFORM11 .", "Classification Module: Using the compact representation of questions INLINEFORM0 and visual information INLINEFORM1 , the classification module applies first the Fusion Module II that provides the feature representation of answers INLINEFORM2 , where INLINEFORM3 is the latent answer space. Afterwards, it computes the logits over a set of predefined candidate answers. Following previous work BIBREF5 , we use as candidate outputs the top 3000 most frequent answers in the VQA dataset. 
At the end of this process, we obtain the highest scoring answer INLINEFORM4 .", "Attention Supervision Module: As a main novelty of the VQA model, we add an Image Attention Supervision Module as an auxiliary classification task, where ground-truth visual grounding labels INLINEFORM0 are used to guide the model to focus on meaningful parts of the image to answer each question. To do that, we simply treat the generated attention coefficients INLINEFORM1 as a probability distribution, and then compare it with the ground-truth using KL-divergence. Interestingly, we introduce two attention maps, corresponding to relevant region-level and object-level groundings, as shown in Figure FIGREF3 . Sections SECREF4 and SECREF5 provide details about our proposed method to obtain the attention labels and to train the resulting model, respectively." ], [ "Visual Genome (VG) BIBREF21 includes the largest VQA dataset currently available, which consists of 1.7M QA pairs. Furthermore, for each of its more than 100K images, VG also provides region and object annotations by means of bounding boxes. In terms of visual grounding, these region and object annotations provide complementary information. As an example, as shown in Figure FIGREF3 , for questions related to interaction between objects, region annotations result highly relevant. In contrast, for questions related to properties of specific objects, object annotations result more valuable. Consequently, in this section we present a method to automatically select region and object annotations from VG that can be used as labels to implement visual grounding as an auxiliary task for VQA.", "For region annotations, we propose a simple heuristic to mine visual groundings: for each INLINEFORM0 we enumerate all the region descriptions of INLINEFORM1 and pick the description INLINEFORM2 that has the most (at least two) overlapped informative words with INLINEFORM3 and INLINEFORM4 . Informative words are all nouns and verbs, where two informative words are matched if at least one of the following conditions is met: (1) Their raw text as they appear in INLINEFORM5 or INLINEFORM6 are the same; (2) Their lemmatizations (using NLTK BIBREF22 ) are the same; (3) Their synsets in WordNet BIBREF23 are the same; (4) Their aliases (provided from VG) are the same. We refer to the resulting labels as region-level groundings. Figure FIGREF3 (a) illustrates an example of a region-level grounding.", "In terms of object annotations, for each image in a INLINEFORM0 triplet we select the bounding box of an object as a valid grounding label, if the object name matches one of the informative nouns in INLINEFORM1 or INLINEFORM2 . To score each match, we use the same criteria as region-level groundings. Additionally, if a triplet INLINEFORM3 has a valid region grounding, each corresponding object-level grounding must be inside this region to be accepted as valid. As a further refinement, selected objects grounding are passed through an intersection over union filter to account for the fact that VG usually includes multiple labels for the same object instance. As a final consideration, for questions related to counting, region-level groundings are discarded after the corresponding object-level groundings are extracted. We refer to the resulting labels as object-level groundings. 
Figure FIGREF3 (b) illustrates an example of an object-level grounding.", "As a result, combining both region-level and object-level groundings, about 700K out of 1M INLINEFORM0 triplets in VG end up with valid grounding labels. We will make these labels publicly available." ], [ "We build the attention supervision on top of the open-sourced implementation of MCB BIBREF5 and MFB BIBREF12 . Similar to them, We extract the image feature from res5c layer of Resnet-152, resulting in INLINEFORM0 spatial grid ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 ). We construct our ground-truth visual grounding labels to be INLINEFORM4 glimpse maps per QA pair, where the first map is object-level grounding and the second map is region-level grounding, as discussed in Section SECREF4 . Let INLINEFORM5 be the coordinate of INLINEFORM6 selected object bounding box in the grounding labels, then the mined object-level attention maps INLINEFORM7 are: DISPLAYFORM0 ", "where INLINEFORM0 is the indicator function. Similarly, the region-level attention maps INLINEFORM1 are: DISPLAYFORM0 ", "", "Afterwards, INLINEFORM0 and INLINEFORM1 are spatially L1-normalized to represent probabilities and concatenated to form INLINEFORM2 .", "The model is trained using a multi-task loss, DISPLAYFORM0 ", "where INLINEFORM0 denotes cross-entropy and INLINEFORM1 denotes KL-divergence. INLINEFORM2 corresponds to the learned parameters. INLINEFORM3 is a scalar that weights the loss terms. This scalar decays as a function of the iteration number INLINEFORM4 . In particular, we choose to use a cosine-decay function: DISPLAYFORM0 ", "This is motivated by the fact that the visual grounding labels have some level of subjectivity. As an example, Figure FIGREF11 (second row) shows a case where the learned attention seems more accurate than the VQA-HAT ground truth. Hence, as the model learns suitable parameter values, we gradually loose the penalty on the attention maps to provide more freedom to the model to selectively decide what attention to use. It is important to note that, for training samples in VQA-2.0 or VG that do not have region-level or object-level grounding labels, INLINEFORM0 in Equation EQREF6 , so the loss is reduced to the classification term only. In our experiment, INLINEFORM1 is calibrated for each tested model based on the number of training steps. In particular, we choose INLINEFORM2 for all MCB models and INLINEFORM3 for others." ], [ "VQA-2.0: The VQA-2.0 dataset BIBREF2 consists of 204721 images, with a total of 1.1M questions and 10 crowd-sourced answers per question. There are more than 20 question types, covering a variety of topics and free-form answers. The dataset is split into training (82K images and 443K questions), validation (40K images and 214K questions), and testing (81K images and 448K questions) sets. The task is to predict a correct answer INLINEFORM0 given a corresponding image-question pair INLINEFORM1 . As a main advantage with respect to version 1.0 BIBREF2 , for every question VQA-2.0 includes complementary images that lead to different answers, reducing language bias by forcing the model to use the visual information.", "Visual Genome: The Visual Genome (VG) dataset BIBREF21 contains 108077 images, with an average of 17 QA pairs per image. We follow the processing scheme from BIBREF5 , where non-informative words in the questions and answers such as “a” and “is” are removed. 
Afterwards, INLINEFORM0 triplets with answers to be single keyword and overlapped with VQA-2.0 dataset are included in our training set. This adds 97697 images and about 1 million questions to the training set. Besides the VQA data, VG also provides on average 50 region descriptions and 30 object instances per image. Each region/object is annotated by one sentence/phrase description and bounding box coordinates.", "VQA-HAT: VQA-HAT dataset BIBREF4 contains 58475 human visual attention heat (HAT) maps for INLINEFORM0 triplets in VQA-1.0 training set. Annotators were shown a blurred image, a INLINEFORM1 pair and were asked to “scratch” the image until they believe someone else can answer the question by looking at the blurred image and the sharpened area. The authors also collect INLINEFORM2 HAT maps for VQA-1.0 validation sets, where each of the 1374 INLINEFORM3 were labeled by three different annotators, so one can compare the level of agreement among labels. We use VQA-HAT to evaluate visual grounding performance, by comparing the rank-correlation between human attention and model attention, as in BIBREF4 , BIBREF24 .", "VQA-X: VQA-X dataset BIBREF24 contains 2000 labeled attention maps in VQA-2.0 validation sets. In contrast to VQA-HAT, VQA-X attention maps are in the form of instance segmentations, where annotators were asked to segment objects and/or regions that most prominently justify the answer. Hence the attentions are more specific and localized. We use VQA-X to evaluate visual grounding performance by comparing the rank-correlation, as in BIBREF4 , BIBREF24 ." ], [ "We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree to the visual information used by a human to answer the same question. In terms of accuracy of a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0 ", "", "Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that MCB model is the winner of VQA challenge 2016 and MFH model is the best single model in VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significantly boost on rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding.", "Table TABREF10 also reports the result of an experiment where the decaying factor INLINEFORM0 in Equation EQREF7 is fixed to a value of 1. In this case, the model is able to achieve higher rank-correlation, but accuracy drops by 2%. 
We observe that as training proceeds, the attention loss becomes dominant in the final training steps, which affects the accuracy of the classification module.", "Figure FIGREF11 shows qualitative results of the resulting visual grounding, including a comparison with the no-attn model." ], [ "In this work we have proposed a new method that is able to slightly outperform current state-of-the-art VQA systems, while also providing interpretable representations in the form of an explicitly trainable visual attention mechanism. Specifically, as a main result, our experiments provide evidence that the generated visual groundings achieve high correlation with human-provided attention annotations, outperforming the correlation scores of previous works by a large margin.", "As further contributions, we highlight two relevant insights of the proposed approach. On the one hand, by using attention labels in an auxiliary task, the proposed approach demonstrates that it is able to constrain the internal representation of the model in such a way that it fosters the encoding of interpretable representations of the underlying relations between the textual question and the input image. On the other hand, the proposed approach demonstrates a method to leverage existing datasets with region descriptions and object labels to effectively supervise the attention mechanism in VQA applications, avoiding costly human labeling.", "As future work, we believe that the superior visual grounding provided by the proposed method can play a relevant role in generating natural language explanations that justify the answer to a given visual question. This scenario will help to demonstrate the relevance of our technique as a tool to increase the capabilities of AI-based technologies to explain their decisions.", "", "Acknowledgements: This work was partially funded by Oppo, Panasonic and the Millennium Institute for Foundational Research on Data." ] ] }
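To make the training objective from the Implementation Details section easier to follow, here is a minimal PyTorch-style sketch of the multi-task loss: cross-entropy on the answer classification plus a KL-divergence term between the predicted attention maps and the mined grounding labels, weighted by a decaying scalar. The exact cosine-decay form, the KL direction, the tensor layout, and the total-step bookkeeping are assumptions made for illustration, since the paper gives them only as placeholders.

    import math
    import torch
    import torch.nn.functional as F

    def attention_supervision_loss(answer_logits, answer_targets,
                                   attn_pred, attn_gt, step, total_steps):
        """Multi-task loss: cross-entropy on answers + KL on attention maps.

        attn_pred : (batch, glimpses, H*W) predicted attention, assumed normalized
        attn_gt   : (batch, glimpses, H*W) mined grounding maps (L1-normalized)
        """
        ce = F.cross_entropy(answer_logits, answer_targets)
        # KL(gt || pred) is used here as an illustrative choice; eps keeps logs finite.
        eps = 1e-8
        kl = (attn_gt * (torch.log(attn_gt + eps) - torch.log(attn_pred + eps))).sum(-1).mean()
        # Cosine decay of the attention-supervision weight over training (assumed form).
        gamma = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
        return ce + gamma * kl

For training samples without any mined region-level or object-level labels, the paper sets the attention weight to zero, so the loss reduces to the classification term only.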
{ "question": [ "By how much do they outperform existing state-of-the-art VQA models?", "How do they measure the correlation between manual groundings and model generated ones?", "How do they obtain region descriptions and object annotations?" ], "question_id": [ "17f5f4a5d943c91d46552fb75940b67a72144697", "83f22814aaed9b5f882168e22a3eac8f5fda3882", "ed11b4ff7ca72dd80a792a6028e16ba20fccff66" ], "nlp_background": [ "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that MCB model is the winner of VQA challenge 2016 and MFH model is the best single model in VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significantly boost on rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding." ], "highlighted_evidence": [ "Table TABREF10 reports our main results. Our models are built on top of prior works with the additional Attention Supervision Module as described in Section SECREF3 . Specifically, we denote by Attn-* our adaptation of the respective model by including our Attention Supervision Module. We highlight that MCB model is the winner of VQA challenge 2016 and MFH model is the best single model in VQA challenge 2017. In Table TABREF10 , we can observe that our proposed model achieves a significantly boost on rank-correlation with respect to human attention. Furthermore, our model outperforms alternative state-of-art techniques in terms of accuracy in answer prediction. Specifically, the rank-correlation for MFH model increases by 36.4% when is evaluated in VQA-HAT dataset and 7.7% when is evaluated in VQA-X. This indicates that our proposed methods enable VQA models to provide more meaningful and interpretable results by generating more accurate visual grounding." ] } ], "annotation_id": [ "0addc69c7a2f96afa92bfff2e2ec342bb635b4d8" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "rank-correlation BIBREF25" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. 
Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree to the visual information used by a human to answer the same question. In terms of accuracy of a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0" ], "highlighted_evidence": [ "We evaluate the performance of our proposed method using two criteria: i) rank-correlation BIBREF25 to evaluate visual grounding and ii) accuracy to evaluate question answering. Intuitively, rank-correlation measures the similarity between human and model attention maps under a rank-based metric. A high rank-correlation means that the model is `looking at' image areas that agree to the visual information used by a human to answer the same question. In terms of accuracy of a predicted answer INLINEFORM0 is evaluated by: DISPLAYFORM0\n\n" ] } ], "annotation_id": [ "ae7a841528b10c3d40718855ef440e54a412b22d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "they are available in the Visual Genome dataset", "evidence": [ "In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training." ], "highlighted_evidence": [ "In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels." ] } ], "annotation_id": [ "bff3cb10c3c179d03259c859c4504f5f82a54325" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
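The rank-correlation metric referenced in the answers above can be reproduced with a few lines of Python. The sketch below uses Spearman rank correlation over the flattened attention maps, which is one standard way to implement a rank-based comparison; the exact variant used in the cited works may differ.

    import numpy as np
    from scipy.stats import spearmanr

    def attention_rank_correlation(human_map, model_map):
        """Rank-correlation between a human attention map and a model attention map.

        Both maps are 2-D arrays over the same spatial grid; they are flattened
        and compared under a rank-based metric (Spearman's rho, as an
        illustrative choice).
        """
        rho, _ = spearmanr(human_map.ravel(), model_map.ravel())
        return rho

    # Example: average the correlation over an evaluation set such as VQA-HAT.
    # scores = [attention_rank_correlation(h, m) for h, m in zip(human_maps, model_maps)]
    # mean_rank_correlation = float(np.mean(scores))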
{ "caption": [ "Figure 1. Interpretable VQA algorithms must ground their answer into image regions that are relevant to the question. In this paper, we aim at providing this ability by leveraging existing region descriptions and object annotations to construct grounding supervision automatically.", "Figure 2. Schematic diagram of the main parts of the VQA model. It is mostly based on the model presented in [6]. Main innovation is the Attention Supervision Module that incorporates visual grounding as an auxiliary task. This module is trained through the use of a set of image attention labels that are automatically mined from the Visual Genome dataset.", "Figure 3. (a) Example region-level groundings from VG. Left: image with region description labels; Right: our mined results. Here “men” in the region description is firstly lemmatized to be “man”, whose aliases contain “people”; the word “talking” in the answer also contributes to the matching. So the selected regions have two matchings which is the most among all candidates. (b) Example object-level grounding from VG. Left: image with object instance labels; Right: our mined results. Note that in this case region-level grounding will give us the same result as in (a), but object-level grounding is clearly more localized.", "Table 1. Evaluation of different VQA models on visual grounding and answer prediction. All the listed models are trained on VQA2.0 and Visual Genome. The reported accuracies are evaluated using the VQA-2.0 test-standard set. Note that the results of MCB, MFB and MFH are taken directly from the author’s public best single model.", "Figure 4. Visual grounding comparison: the first column is the ground-truth human attention in VQA-HAT [5]; the second column shows the results from pretrained MFH model [26]; the last column are our Attn-MFH trained with attention supervision. We can see that the attention areas considered by our model mimic the attention areas used by humans, but they are more localized in space.", "Figure 5. Qualitative Results on complementary pairs generated by our Attn-MFH model; the model learns to attend to different regions even if the questions are the same." ], "file": [ "1-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "6-Table1-1.png", "7-Figure4-1.png", "8-Figure5-1.png" ] }
1810.09774
Testing the Generalization Power of Neural Network Models Across NLI Benchmarks
Neural network models have been very successful in natural language inference, with the best models reaching 90% accuracy in some benchmarks. However, the success of these models turns out to be largely benchmark specific. We show that models trained on a natural language inference dataset drawn from one benchmark fail to perform well in others, even if the notion of inference assumed in these benchmarks is the same or similar. We train six high performing neural network models on different datasets and show that each one of these has problems of generalizing when we replace the original test set with a test set taken from another corpus designed for the same task. In light of these results, we argue that most of the current neural network models are not able to generalize well in the task of natural language inference. We find that using large pre-trained language models helps with transfer learning when the datasets are similar enough. Our results also highlight that the current NLI datasets do not cover the different nuances of inference extensively enough.
{ "section_name": [ "Introduction", "Related Work", "Experimental Setup", "Data", "Model and Training Details", "Experimental Results", "Discussion and Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. Both of them have been very successful, with the state of the art on the SNLI and MultiNLI datasets being 90.4%, which is our baseline with BERT BIBREF0 , and 86.7% BIBREF0 respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break considerably easily when, instead of tested on the original SNLI test set, they are tested on a test set which is constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was more difficult to break and had the least loss in accuracy was the system by BIBREF2 which utilizes external knowledge taken from WordNet BIBREF3 .", "In this paper we show that NLI systems that have been very successful in specific NLI benchmarks, fail to generalize when trained on a specific NLI dataset and then these trained models are tested across test sets taken from different NLI benchmarks. The results we get are in line with BIBREF1 , showing that the generalization capability of the individual NLI systems is very limited, but, what is more, they further show the only system that was less prone to breaking in BIBREF1 , breaks too in the experiments we have conducted.", "We train six different state-of-the-art models on three different NLI datasets and test these trained models on an NLI test set taken from another dataset designed for the same NLI task, namely for the task to identify for sentence pairs in the dataset if one sentence entails the other one, if they are in contradiction with each other or if they are neutral with respect to inferential relationship.", "One would expect that if a model learns to correctly identify inferential relationships in one dataset, then it would also be able to do so in another dataset designed for the same task. Furthermore, two of the datasets, SNLI BIBREF4 and MultiNLI BIBREF5 , have been constructed using the same crowdsourcing approach and annotation instructions BIBREF5 , leading to datasets with the same or at least very similar definition of entailment. It is therefore reasonable to expect that transfer learning between these datasets is possible. As SICK BIBREF6 dataset has been machine-constructed, a bigger difference in performance is expected.", "In this paper we show that, contrary to our expectations, most models fail to generalize across the different datasets. However, our experiments also show that BERT BIBREF0 performs much better than the other models in experiments between SNLI and MultiNLI. Nevertheless, even BERT fails when testing on SICK. 
In addition to the negative results, our experiments further highlight the power of pre-trained language models, like BERT, in NLI.", "The negative results of this paper are significant for the NLP research community as well as to NLP practice as we would like our best models to not only to be able to perform well in a specific benchmark dataset, but rather capture the more general phenomenon this dataset is designed for. The main contribution of this paper is that it shows that most of the best performing neural network models for NLI fail in this regard. The second, and equally important, contribution is that our results highlight that the current NLI datasets do not capture the nuances of NLI extensively enough." ], [ "The ability of NLI systems to generalize and related skepticism has been raised in a number of recent papers. BIBREF1 show that the generalization capabilities of state-of-the-art NLI systems, in cases where some kind of external lexical knowledge is needed, drops dramatically when the SNLI test set is replaced by a test set where the premise and the hypothesis are otherwise identical except for at most one word. The results show a very significant drop in accuracy. BIBREF7 recognize the generalization problem that comes with training on datasets like SNLI, which tend to be homogeneous and with little linguistic variation. In this context, they propose to better train NLI models by making use of adversarial examples.", "Multiple papers have reported hidden bias and annotation artifacts in the popular NLI datasets SNLI and MultiNLI allowing classification based on the hypothesis sentences alone BIBREF8 , BIBREF9 , BIBREF10 .", " BIBREF11 evaluate the robustness of NLI models using datasets where label preserving swapping operations have been applied, reporting significant performance drops compared to the results with the original dataset. In these experiments, like in the BreakingNLI experiment, the systems that seem to be performing the better, i.e. less prone to breaking, are the ones where some kind of external knowledge is used by the model (KIM by BIBREF2 is one of those systems).", "On a theoretical and methodological level, there is discussion on the nature of various NLI datasets, as well as the definition of what counts as NLI and what does not. For example, BIBREF12 , BIBREF13 present an overview of the most standard datasets for NLI and show that the definitions of inference in each of them are actually quite different, capturing only fragments of what seems to be a more general phenomenon.", " BIBREF4 show that a simple LSTM model trained on the SNLI data fails when tested on SICK. However, their experiment is limited to this single architecture and dataset pair. BIBREF5 show that different models that perform well on SNLI have lower accuracy on MultiNLI. However in their experiments they did not systematically test transfer learning between the two datasets, but instead used separate systems where the training and test data were drawn from the same corpora." ], [ "In this section we describe the datasets and model architectures included in the experiments." ], [ "We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . 
As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.", "For all the datasets we report the baseline performance where the training and test data are drawn from the same corpus. We then take these trained models and test them on a test set taken from another NLI corpus. For the case where the models are trained with SNLI + MultiNLI we report the baseline using the SNLI test data. All the experimental combinations are listed in Table 1 . Examples from the selected datasets are provided in Table 2 . To be more precise, we vary three things: training dataset, model and testing dataset. We should qualify this though, since the three datasets we look at, can also be grouped by text domain/genre and type of data collection, with MultiNLI and SNLI using the same data collection style, and SNLI and SICK using roughly the same domain/genre. Hopefully, our set up will let us determine which of these factors matters the most.", "We describe the source datasets in more detail below.", "The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .", "The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consisting of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.", "We used the matched development set (MultiNLI-m) for the experiments. The MultiNLI dataset was annotated using very similar instructions as for the SNLI dataset. Therefore we can assume that the definitions of entailment, contradiction and neutral is the same in these two datasets.", "SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 ." ], [ "We perform experiments with six high-performing models covering the sentence encoding models, cross-sentence attention models as well as fine-tuned pre-trained language models.", "For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with the hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two model involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . 
KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks make ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments.", "For BiLSTM-max we used the Adam optimizer BIBREF21 , a learning rate of 5e-4 and batch size of 64. The learning rate was decreased by the factor of 0.2 after each epoch if the model did not improve. Dropout of 0.1 was used between the layers of the multi-layer perceptron classifier, except before the last layer.The BiLSTM-max models were initialized with pre-trained GloVe 840B word embeddings of size 300 dimensions BIBREF22 , which were fine-tuned during training. Our BiLSMT-max model was implemented in PyTorch.", "For HBMP, ESIM, KIM and BERT we used the original implementations with the default settings and hyperparameter values as described in BIBREF18 , BIBREF19 , BIBREF2 and BIBREF0 respectively. For BERT we used the uncased 768-dimensional model (BERT-base). For ESIM + ELMo we used the AllenNLP BIBREF23 PyTorch implementation with the default settings and hyperparameter values." ], [ "Table 4 contains all the experimental results.", "Our experiments show that, while all of the six models perform well when the test set is drawn from the same corpus as the training and development set, accuracy is significantly lower when we test these trained models on a test set drawn from a separate NLI corpus, the average difference in accuracy being 24.9 points across all experiments.", "Accuracy drops the most when a model is tested on SICK. The difference in this case is between 19.0-29.0 points when trained on MultiNLI, between 31.6-33.7 points when trained on SNLI and between 31.1-33.0 when trained on SNLI + MultiNLI. This was expected, as the method of constructing the sentence pairs was different, and hence there is too much difference in the kind of sentence pairs included in the training and test sets for transfer learning to work. However, the drop was more dramatic than expected.", "The most surprising result was that the accuracy of all models drops significantly even when the models were trained on MultiNLI and tested on SNLI (3.6-11.1 points). This is surprising as both of these datasets have been constructed with a similar data collection method using the same definition of entailment, contradiction and neutral. The sentences included in SNLI are also much simpler compared to those in MultiNLI, as they are taken from the Flickr image captions. This might also explain why the difference in accuracy for all of the six models is lowest when the models are trained on MultiNLI and tested on SNLI. It is also very surprising that the model with the biggest difference in accuracy was ESIM + ELMo which includes a pre-trained ELMo language model. BERT performed significantly better than the other models in this experiment having an accuracy of 80.4% and only 3.6 point difference in accuracy.", "The poor performance of most of the models with the MultiNLI-SNLI dataset pair is also very surprising given that neural network models do not seem to suffer a lot from introduction of new genres to the test set which were not included in the training set, as can be seen from the small difference in test accuracies for the matched and mismatched test sets (see e.g BIBREF5 ). In a sense SNLI could be seen as a separate genre not included in MultiNLI. 
This raises the question if the SNLI and MultiNLI have e.g. different kinds of annotation artifacts, which makes transfer learning between these datasets more difficult.", "All the models, except BERT, perform almost equally poorly across all the experiments. Both BiLSTM-max and HBMP have an average drop in accuracy of 24.4 points, while the average for KIM is 25.5 and for ESIM + ELMo 25.6. ESIM has the highest average difference of 27.0 points. In contrast to the findings of BIBREF1 , utilizing external knowledge did not improve the model's generalization capability, as KIM performed equally poorly across all dataset combinations.", "Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 .", "To understand better the types of errors made by neural network models in NLI we looked at some example failure-pairs for selected models. Tables 5 and 6 contain some randomly selected failure-pairs for two models: BERT and HBMP, and for three set-ups: SNLI $\\rightarrow $ SICK, SNLI $\\rightarrow $ MultiNLI and MultiNLI $\\rightarrow $ SICK. We chose BERT as the current the state of the art NLI model. HBMP was selected as a high performing model in the sentence encoding model type. Although the listed sentence pairs represent just a small sample of the errors made by these models, they do include some interesting examples. First, it seems that SICK has a more narrow notion of contradiction – corresponding more to logical contradiction – compared to the contradiction in SNLI and MultiNLI, where especially in SNLI the sentences are contradictory if they describe a different state of affairs. This is evident in the sentence pair: A young child is running outside over the fallen leaves and A young child is lying down on a gravel road that is covered with dead leaves, which is predicted by BERT to be contradiction although the gold label is neutral. Another interesting example is the sentence pair: A boat pear with people boarding and disembarking some boats. and people are boarding and disembarking some boats, which is incorrectly predicted by BERT to be contradiction although it has been labeled as entailment. Here the two sentences describe the same event from different points of view: the first one describing a boat pear with some people on it and the second one describing the people directly. Interestingly the added information about the boat pear seems to confuse the model." ], [ "In this paper we have shown that neural network models for NLI fail to generalize across different NLI benchmarks. We experimented with six state-of-the-art models covering sentence encoding approaches, cross-sentence attention models and pre-trained and fine-tuned language models. For all the systems, the accuracy drops between 3.6-33.7 points (the average drop being 24.9 points), when testing with a test set drawn from a separate corpus from that of the training data, as compared to when the test and training data are splits from the same corpus. 
Our findings, together with the previous negative findings, indicate that the state-of-the-art models fail to capture the semantics of NLI in a way that will enable them to generalize across different NLI situations.", "The results highlight two issues to be taken into consideration: a) using datasets involving a fraction of what NLI is, will fail when tested in datasets that are testing for a slightly different definition of inference. This is evident when we move from the SNLI to the SICK dataset. b) NLI is to some extent genre/context dependent. Training on SNLI and testing on MultiNLI gives worse results than vice versa. This is particularly evident in the case of BERT. These results highlight that training on multiple genres helps. However, this help is still not enough given that, even in the case of training on MultiNLI (multi genre) and training on SNLI (single genre and same definition of inference with MultiNLI), accuracy drops significantly.", "We also found that involving a large pre-trained language model helps with transfer learning when the datasets are similar enough, as is the case with SNLI and MultiNLI. Our results further corroborate the power of pre-trained and fine-tuned language models like BERT in NLI. However, not even BERT is able to generalize from SNLI and MultiNLI to SICK, possibly due to the difference between what kind of inference relations are contained in these datasets.", "Our findings motivate us to look for novel neural network architectures and approaches that better capture the semantics on natural language inference beyond individual datasets. However, there seems to be a need to start with better constructed datasets, i.e. datasets that will not only capture fractions of what NLI is in reality. Better NLI systems need to be able to be more versatile on the types of inference they can recognize. Otherwise, we would be stuck with systems that can cover only some aspects of NLI. On a theoretical level, and in connection to the previous point, we need a better understanding of the range of phenomena NLI must be able to cover and focus our future endeavours for dataset construction towards this direction. In order to do this a more systematic study is needed on the different kinds of entailment relations NLI datasets need to include. Our future work will include a more systematic and broad-coverage analysis of the types of errors the models make and in what kinds of sentence-pairs they make successful predictions." ], [ " The first author is supported by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113). ", "The first author also gratefully acknowledges the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence.", "The second author is supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. " ] ] }
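As a concrete reference for the sentence-encoding baseline described above, here is a minimal PyTorch sketch of a BiLSTM-max encoder (one bidirectional LSTM layer, 600 units per direction, max pooling over time) feeding an InferSent-style premise/hypothesis feature combination. GloVe initialization, padding handling, and the exact classifier layout (including the dropout placement) are omitted or simplified, so this is a sketch of the architecture family rather than the paper's implementation.

    import torch
    import torch.nn as nn

    class BiLSTMMaxEncoder(nn.Module):
        """One-layer bidirectional LSTM with max pooling over time (600D per direction)."""
        def __init__(self, vocab_size, emb_dim=300, hidden=600):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, emb_dim)  # initialized from GloVe in practice
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

        def forward(self, token_ids):
            emb = self.embedding(token_ids)      # (batch, seq, 300)
            out, _ = self.lstm(emb)              # (batch, seq, 1200)
            return out.max(dim=1).values         # max pooling over time -> (batch, 1200)

    class NLIClassifier(nn.Module):
        """Combine premise/hypothesis encodings and classify into the 3 NLI labels."""
        def __init__(self, encoder, hidden=600, num_classes=3):
            super().__init__()
            self.encoder = encoder
            feat = 4 * 2 * hidden                # [u, v, |u-v|, u*v], each 1200D
            # Dropout between MLP layers is left out here for brevity.
            self.mlp = nn.Sequential(nn.Linear(feat, 512), nn.ReLU(),
                                     nn.Linear(512, num_classes))

        def forward(self, premise_ids, hypothesis_ids):
            u = self.encoder(premise_ids)
            v = self.encoder(hypothesis_ids)
            return self.mlp(torch.cat([u, v, torch.abs(u - v), u * v], dim=1))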
{ "question": [ "Which training dataset allowed for the best generalization to benchmark sets?", "Which model generalized the best?", "Which models were compared?", "Which datasets were used?" ], "question_id": [ "a48c6d968707bd79469527493a72bfb4ef217007", "b69897deb5fb80bf2adb44f9cbf6280d747271b3", "ad1f230f10235413d1fe501e414358245b415476", "0a521541b9e2b5c6d64fb08eb318778eba8ac9f7" ], "nlp_background": [ "five", "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "MultiNLI", "evidence": [ "FLOAT SELECTED: Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. ∆ is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. ∆ is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined." ] } ], "annotation_id": [ "0b0ee6e9614e9c96cd79c50344c5ebbe7727bc32" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BERT" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Also including a pretrained ELMo language model did not improve the results significantly. The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points. Our baselines for SNLI (90.4%) and SNLI + MultiNLI (90.6%) outperform the previous state-of-the-art accuracy for SNLI (90.1%) by BIBREF24 ." ], "highlighted_evidence": [ " The overall performance of BERT was significantly better than the other models, having the lowest average difference in accuracy of 22.5 points." ] } ], "annotation_id": [ "9f5842ea139d471fa3e041b5e4a401c581e01292" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "BiLSTM-max, HBMP, ESIM, KIM, ESIM + ELMo, and BERT", "evidence": [ "For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with the hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two model involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 . 
KIM is particularly interesting in this context as it performed significantly better than other models in the Breaking NLI experiment conducted by BIBREF1 . The success of pre-trained language models in multiple NLP tasks make ESIM + ELMo and BERT interesting additions to this experiment. Table 3 lists the different models used in the experiments." ], "highlighted_evidence": [ "For sentence encoding models, we chose a simple one-layer bidirectional LSTM with max pooling (BiLSTM-max) with the hidden size of 600D per direction, used e.g. in InferSent BIBREF17 , and HBMP BIBREF18 . For the other models, we have chosen ESIM BIBREF19 , which includes cross-sentence attention, and KIM BIBREF2 , which has cross-sentence attention and utilizes external knowledge. We also selected two model involving a pre-trained language model, namely ESIM + ELMo BIBREF20 and BERT BIBREF0 ." ] } ], "annotation_id": [ "5dccd2cfa3288c901912f44285b3f002d1cfaef6" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "SNLI, MultiNLI and SICK" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We chose three different datasets for the experiments: SNLI, MultiNLI and SICK. All of them have been designed for NLI involving three-way classification with the labels entailment, neutral and contradiction. We did not include any datasets with two-way classification, e.g. SciTail BIBREF14 . As SICK is a relatively small dataset with approximately only 10k sentence pairs, we did not use it as training data in any experiment. We also trained the models with a combined SNLI + MultiNLI training set.", "The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. The source for the premise sentences in SNLI were image captions taken from the Flickr30k corpus BIBREF15 .", "The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consisting of 433k human-written sentence pairs labeled with entailment, contradiction and neutral. MultiNLI contains sentence pairs from ten distinct genres of both written and spoken English. Only five genres are included in the training set. The development and test sets have been divided into matched and mismatched, where the former includes only sentences from the same genres as the training data, and the latter includes sentences from the remaining genres not present in the training data.", "SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. The dataset contains 9,840 examples pertaining to logical inference (negation, conjunction, disjunction, apposition, relative clauses, etc.). The dataset was automatically constructed taking pairs of sentences from a random subset of the 8K ImageFlickr data set BIBREF15 and the SemEval 2012 STS MSRVideo Description dataset BIBREF16 ." ], "highlighted_evidence": [ "We chose three different datasets for the experiments: SNLI, MultiNLI and SICK.", "The Stanford Natural Language Inference (SNLI) corpus BIBREF4 is a dataset of 570k human-written sentence pairs manually labeled with the labels entailment, contradiction, and neutral. 
", "The Multi-Genre Natural Language Inference (MultiNLI) corpus BIBREF5 consisting of 433k human-written sentence pairs labeled with entailment, contradiction and neutral.", "SICK BIBREF6 is a dataset that was originally constructed to test compositional distributional semantics (DS) models. " ] } ], "annotation_id": [ "4be4b9919967b8f3f08d37fc1e0b695f43d44f92" ], "worker_id": [ "7dd5db428d7a43d2945b97c0c07fa56af4eb02ae" ] } ] }
{ "caption": [ "Table 1: Dataset combinations used in the experiments. The rows in bold are baseline experiments, where the test data comes from the same benchmark as the training and development data.", "Table 2: Example sentence pairs from the three datasets.", "Table 3: Model architectures used in the experiments.", "Table 4: Test accuracies (%). For the baseline results (highlighted in bold) the training data and test data have been drawn from the same benchmark corpus. ∆ is the difference between the test accuracy and the baseline accuracy for the same training set. Results marked with * are for the development set, as no annotated test set is openly available. Best scores with respect to accuracy and difference in accuracy are underlined.", "Table 5: Example failure-pairs for BERT.", "Table 6: Example failure-pairs for HBMP." ], "file": [ "3-Table1-1.png", "4-Table2-1.png", "5-Table3-1.png", "6-Table4-1.png", "9-Table5-1.png", "10-Table6-1.png" ] }
1910.05608
VAIS Hate Speech Detection System: A Deep Learning based Approach for System Combination
Social network sites (SNSs) such as Facebook and Twitter are common places where people share their opinions, sentiments, and information with others. However, some people use SNSs to post abuse and harassment, discouraging other users from expressing themselves or seeking different opinions. To deal with this problem, SNSs have to spend considerable resources, including human moderators, on cleaning such content. In this paper, we propose a supervised learning model based on an ensemble method to detect hate content on SNSs in order to make conversations on SNSs more effective. Our proposed model took first place on the public dashboard with a 0.730 macro F1-score and third place on the private dashboard with a 0.584 macro F1-score at the sixth international workshop on Vietnamese Language and Speech Processing (VLSP) 2019.
{ "section_name": [ "Introduction", "System description", "System description ::: System overview", "System description ::: Data pre-processing", "System description ::: Models architecture", "System description ::: Ensemble method", "Experiment", "Conclusion" ], "paragraphs": [ [ "Currently, social networks are so popular. Some of the biggest ones include Facebook, Twitter, Youtube,... with extremely number of users. Thus, controlling content of those platforms is essential. For years, social media companies such as Twitter, Facebook, and YouTube have been investing hundreds of millions euros on this task BIBREF0, BIBREF1. However, their effort is not enough since such efforts are primarily based on manual moderation to identify and delete offensive materials. The process is labour intensive, time consuming, and not sustainable or scalable in reality BIBREF2, BIBREF0, BIBREF3.", "In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared-tasks to handle the problem related to controlling content in SNSs. HSD is required to build a multi-class classification model that is capable of classifying an item to one of 3 classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): normal item, it does not contain offensive language or hate speech.", "The term `hate speech' was formally defined as `any communication that disparages a person or a group on the basis of some characteristics (to be referred to as types of hate or hate classes) such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics' BIBREF4. Many researches have been conducted in recent years to develop automatic methods for hate speech detection in the social media domain. These typically employ semantic content analysis techniques built on Natural Language Processing (NLP) and Machine Learning (ML) methods. The task typically involves classifying textual content into non-hate or hateful. This HSD task is much more difficult when it requires classify text in three classes, with hate and offensive class quite hard to classify even with humans.", "In this paper, we propose a method to handle this HSD problem. Our system combines multiple text representations and models architecture in order to make diverse predictions. The system is heavily based on the ensemble method. The next section will present detail of our system including data preparation (how we clean text and build text representation), architecture of the model using in the system, and how we combine them together. The third section is our experiment and result report in HSD shared-task VLSP 2019. The final section is our conclusion with advantages and disadvantages of the system following by our perspective." ], [ "In this section, we present the system architecture. It includes how we pre-process text, what types of text representation we use and models used in our system. In the end, we combine model results by using an ensemble technique." 
], [ "The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16." ], [ "Content in the dataset that provided in this HSD task is very diverse. Words having the same meaning were written in various types (teen code, non tone, emojis,..) depending on the style of users. Dataset was crawled from various sources with multiple text encodes. In order to make it easy for training, all types of encoding need to be unified. This cleaning module will be used in two processes: cleaning data before training and cleaning input in inferring phase. Following is the data processing steps that we use:", "Step 1: Format encoding. Vietnamese has many accents, intonations with different Unicode typing programs which may have different outputs with the same typing type. To make it unified, we build a library named visen. For example, the input \"thíêt kê will be normalized to \"thiết kế\" as the output.", "Step 2: In social networks, people show their feelings a lot by emojis. Emoticon is often a special Unicode character, but sometimes, it is combined by multiple normal characters like `: ( = ]'. We make a dictionary mapping this emoji (combined by some characters) to a single Unicode character like other emojis to make it unified.", "Step 3: Remove unseen characters. For human, unseen character is invisible but for a computer, it makes the model harder to process and inserts space between words, punctuation and emoji. This step aims at reducing the number of words in the dictionary which is important task, especially with low dataset resources like this HSD task.", "Step 4: With model requiring Vietnamese word segmentation as the input, we use BIBREF9, BIBREF10 to tokenize the input text.", "Step 5: Make all string lower. We experimented and found that lower-case or upper-case are not a significant impact on the result, but with lower characters, the number of words in the dictionary is reduced.", "RoBERTa proposed in BIBREF8 an optimized method for pretraining self-supervised NLP systems. In our system, we use RoBERTa not only to make sentence representation but also to augment data. With mask mechanism, we replace a word in the input sentence with another word that RoBERTa model proposes. 
To reduce the impact of replacement word, the chosen words are all common words that appear in almost three classes of the dataset. For example, with input `nhổn làm gắt vl', we can augment to other outputs: `vl làm gắt qá', `còn làm vl vậy', `vl làm đỉnh vl' or `thanh chút gắt vl'.", "british" ], [ "Social comment dataset has high variety, the core idea is using multiple model architectures to handle data in many viewpoints. In our system, we use five different model architectures combining many types of CNN, and RNN. Each model will use some types of word embedding or handle directly sentence embedding to achieve the best general result. Source code of five models is extended from the GitHub repository", "The first model is TextCNN (figure FIGREF2) proposed in BIBREF11. It only contains CNN blocks following by some Dense layers. The output of multiple CNN blocks with different kernel sizes is connected to each other.", "The second model is VDCNN (figure FIGREF5) inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connection.", "The third model is a simple LSTM bidirectional model (figure FIGREF15). It contains multiple LSTM bidirectional blocks stacked to each other.", "The fourth model is LSTMCNN (figure FIGREF24). Before going through CNN blocks, series of word embedding will be transformed by LSTM bidirectional block.", "The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LTSM blocks." ], [ "Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. Have the main three types of ensemble methods including Bagging, Boosting and Stacking. In this system, we use the Stacking method. In this method, the output of each model is not only class id but also the probability of each class in the set of three classes. This probability will become a feature for the ensemble model. The stacking ensemble model here is a simple full-connection model with input is all of probability that output from sub-model. The output is the probability of each class." ], [ "The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%. To make model being able to learn with this imbalance data, we inject class weight to the loss function with the corresponding ratio (clean, offensive, hate) is $(0.09, 0.95, 0.96)$. Formular DISPLAY_FORM17 is the loss function apply for all models in our system. $w_i$ is the class weight, $y_i$ is the ground truth and $\\hat{y}_i$ is the output of the model. If the class weight is not set, we find that model cannot adjust parameters. The model tends to output all clean classes.", "We experiment 8 types of embedding in total:", "comment: CBOW embedding training in all dataset comment, each word is splited by space. Embedding size is 200.", "comment_bpe: CBOW embedding training in all dataset comment, each word is splited by subword bpe. Embedding size is 200.", "comment_tokenize: CBOW embedding training in all dataset comment, each word is splited by space. Before split by space, word is concatenated by using BIBREF9, BIBREF13, BIBREF10. Embedding size is 200.", "roberta: sentence embedding training in all dataset comment, training by using RoBERTa architecture. Embedding size is 256.", "fasttext, sonvx* is all pre-trained word embedding in general domain. 
Before mapping word to vector, word is concatenated by using BIBREF9, BIBREF13, BIBREF10. Embedding size of fasttext is 300. (sonvx_wiki, sonvx_baomoi_w2, sonvx_baomoi_w5) have embedding size corresponding is (400, 300, 400).", "In our experiment, the dataset is split into two-part: train set and dev set with the corresponding ratio $(0.9, 0.1)$. Two subsets have the same imbalance ratio like the root set. For each combination of model and word embedding, we train model in train set until it achieve the best result of loss score in the dev set. The table TABREF12 shows the best result of each combination on the f1_macro score.", "For each model having the best fit on the dev set, we export the probability distribution of classes for each sample in the dev set. In this case, we only use the result of model that has f1_macro score that larger than 0.67. The probability distribution of classes is then used as feature to input into a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set.", "Statistics of the final result on the dev set shows that almost cases have wrong prediction from offensive and hate class to clean class belong to samples containing the word `vl'. (62% in the offensive class and 48% in the hate class). It means that model overfit the word `vl' to the clean class. This makes sense because `vl' appears too much in the clean class dataset.", "In case the model predicts wrong from the clean class to the offensive class and the hate class, the model tends to decide case having sensitive words to be wrong class. The class offensive and the hate are quite difficult to distinguish even with human." ], [ "In this study, we experiment the combination of multiple embedding types and multiple model architecture to solve a part of the problem Hate Speech Detection with a signification good classification results. Our system heavily based on the ensemble technique so the weakness of the system is slow processing speed. But in fact, it is not big trouble with this HSD problem when human usually involve handling directly in the before.", "HSD is a hard problem even with human. In order to improve classification quality, in the future, we need to collect more data especially social networks content. This will make building text representation more correct and help model easier to classify.", "british" ] ] }
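To make the ensembling step concrete, the following is a minimal Keras-style sketch of the stacking meta-model described above: each of the five base models contributes a 3-way class-probability vector, the concatenated probabilities feed a single 128-unit hidden layer, and the class weights (0.09, 0.95, 0.96) counteract the 91.5/5/3.5% class imbalance. The 128-unit layer and the weights follow the paper; the optimizer, epoch count, and placeholder data are assumptions, and the paper injects the class weights into every base model's loss rather than only into the meta-model as done here for brevity.

import numpy as np
from tensorflow import keras

NUM_BASE_MODELS = 5   # TextCNN, VDCNN, BiLSTM, LSTMCNN, SARNN
NUM_CLASSES = 3       # clean, offensive, hate

# Stacking input: per-sample concatenation of each base model's class probabilities.
stack_in = keras.Input(shape=(NUM_BASE_MODELS * NUM_CLASSES,))
hidden = keras.layers.Dense(128, activation="relu")(stack_in)
out = keras.layers.Dense(NUM_CLASSES, activation="softmax")(hidden)
meta_model = keras.Model(stack_in, out)
meta_model.compile(optimizer="adam", loss="categorical_crossentropy")

# Class weights from the paper, offsetting the heavy dominance of the clean class.
class_weight = {0: 0.09, 1: 0.95, 2: 0.96}

# Placeholder arrays standing in for probabilities exported from the base models on the dev set.
base_probs = np.random.rand(64, NUM_BASE_MODELS * NUM_CLASSES)
labels = keras.utils.to_categorical(np.random.randint(0, NUM_CLASSES, 64), NUM_CLASSES)
meta_model.fit(base_probs, labels, epochs=5, class_weight=class_weight)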
{ "question": [ "What was the baseline?", "Is the data all in Vietnamese?", "What classifier do they use?", "What is private dashboard?", "What is public dashboard?", "What dataset do they use?" ], "question_id": [ "11e376f98df42f487298ec747c32d485c845b5cd", "284ea817fd79bc10b7a82c88d353e8f8a9d7e93c", "c0122190119027dc3eb51f0d4b4483d2dbedc696", "1ed6acb88954f31b78d2821bb230b722374792ed", "5a33ec23b4341584a8079db459d89a4e23420494", "1b9119813ea637974d21862a8ace83bc1acbab8e" ], "nlp_background": [ "two", "two", "two", "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0b3cf44bc00d13112653dfd6e44be62454996080" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared-tasks to handle the problem related to controlling content in SNSs. HSD is required to build a multi-class classification model that is capable of classifying an item to one of 3 classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): normal item, it does not contain offensive language or hate speech.", "The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. 
With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16." ], "highlighted_evidence": [ "In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task is proposed as one of the shared-tasks to handle the problem related to controlling content in SNSs.", "The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type." ] } ], "annotation_id": [ "8750ed52a25b10a49042f666fb69a331e0a935b8" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Stacking method", "LSTMCNN", "SARNN", "simple LSTM bidirectional model", "TextCNN" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16.", "The first model is TextCNN (figure FIGREF2) proposed in BIBREF11. It only contains CNN blocks following by some Dense layers. The output of multiple CNN blocks with different kernel sizes is connected to each other.", "The second model is VDCNN (figure FIGREF5) inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connection.", "The third model is a simple LSTM bidirectional model (figure FIGREF15). It contains multiple LSTM bidirectional blocks stacked to each other.", "The fourth model is LSTMCNN (figure FIGREF24). Before going through CNN blocks, series of word embedding will be transformed by LSTM bidirectional block.", "The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LTSM blocks.", "Ensemble methods is a machine learning technique that combines several base models in order to produce one optimal predictive model. Have the main three types of ensemble methods including Bagging, Boosting and Stacking. In this system, we use the Stacking method. 
In this method, the output of each model is not only class id but also the probability of each class in the set of three classes. This probability will become a feature for the ensemble model. The stacking ensemble model here is a simple full-connection model with input is all of probability that output from sub-model. The output is the probability of each class." ], "highlighted_evidence": [ " After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13.", "The first model is TextCNN (figure FIGREF2) proposed in BIBREF11. It only contains CNN blocks following by some Dense layers. The output of multiple CNN blocks with different kernel sizes is connected to each other.\n\nThe second model is VDCNN (figure FIGREF5) inspired by the research in BIBREF12. Like the TextCNN model, it contains multiple CNN blocks. The addition in this model is its residual connection.\n\nThe third model is a simple LSTM bidirectional model (figure FIGREF15). It contains multiple LSTM bidirectional blocks stacked to each other.\n\nThe fourth model is LSTMCNN (figure FIGREF24). Before going through CNN blocks, series of word embedding will be transformed by LSTM bidirectional block.", "The final model is the system named SARNN (figure FIGREF25). It adds an attention block between LTSM blocks.", "In this system, we use the Stacking method. In this method, the output of each model is not only class id but also the probability of each class in the set of three classes. This probability will become a feature for the ensemble model. The stacking ensemble model here is a simple full-connection model with input is all of probability that output from sub-model. The output is the probability of each class." ] } ], "annotation_id": [ "c0e0e5fd2ec729d22dfb24cad8b4961de4f6a371" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Private dashboard is leaderboard where competitors can see results after competition is finished - on hidden part of test set (private test set).", "evidence": [ "For each model having the best fit on the dev set, we export the probability distribution of classes for each sample in the dev set. In this case, we only use the result of model that has f1_macro score that larger than 0.67. The probability distribution of classes is then used as feature to input into a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set." ], "highlighted_evidence": [ "The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set." 
] } ], "annotation_id": [ "8093351a29b0413586ea24cffac9e4a6579fc81b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Public dashboard where competitors can see their results during competition, on part of the test set (public test set).", "evidence": [ "For each model having the best fit on the dev set, we export the probability distribution of classes for each sample in the dev set. In this case, we only use the result of model that has f1_macro score that larger than 0.67. The probability distribution of classes is then used as feature to input into a dense model with only one hidden layer (size 128). The training process of the ensemble model is done on samples of the dev set. The best fit result is 0.7356. The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set." ], "highlighted_evidence": [ "The final result submitted in public leaderboard is 0.73019 and in private leaderboard is 0.58455. It is quite different in bad way. That maybe is the result of the model too overfit on train set tuning on public test set." ] } ], "annotation_id": [ "1ce57e4664d6c940e3c0273b522df6734e066af6" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They used Wiki Vietnamese language and Vietnamese newspapers to pretrain embeddings and dataset provided in HSD task to train model (details not mentioned in paper).", "evidence": [ "The fundamental idea of this system is how to make a system that has the diversity of viewing an input. That because of the variety of the meaning in Vietnamese language especially with the acronym, teen code type. To make this diversity, after cleaning raw text input, we use multiple types of word tokenizers. Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7. Each sentence has a set of words corresponding to a set of word vectors, and that set of word vectors is a representation of a sentence. We also make a sentence embedding by using RoBERTa architecture BIBREF8. CBOW and RoBERTa models trained on text from some resources including VLSP 2016 Sentiment Analysis, VLSP 2018 Sentiment Analysis, VLSP 2019 HSD and text crawled from Facebook. After having sentence representation, we use some classification models to classify input sentences. Those models will be described in detail in the section SECREF13. With the multiply output results, we will use an ensemble method to combine them and output the final result. Ensemble method we use here is Stacking method will be introduced in the section SECREF16.", "The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%. To make model being able to learn with this imbalance data, we inject class weight to the loss function with the corresponding ratio (clean, offensive, hate) is $(0.09, 0.95, 0.96)$. Formular DISPLAY_FORM17 is the loss function apply for all models in our system. 
$w_i$ is the class weight, $y_i$ is the ground truth and $\\hat{y}_i$ is the output of the model. If the class weight is not set, we find that model cannot adjust parameters. The model tends to output all clean classes." ], "highlighted_evidence": [ "Each one of these tokenizers, we combine with some types of representation methods, including word to vector methods such as continuous bag of words BIBREF5, pre-trained embedding as fasttext (trained on Wiki Vietnamese language) BIBREF6 and sonvx (trained on Vietnamese newspaper) BIBREF7", "The dataset in this HSD task is really imbalance. Clean class dominates with 91.5%, offensive class takes 5% and the rest belongs to hate class with 3.5%." ] } ], "annotation_id": [ "5c608801d127bf97d4546a64f1a83ae280112167" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1. Hate Speech Detection System Overview", "Figure 2. TextCNN model architecture", "Figure 4. LSTM model architecture", "Figure 3. VDCNN model architecture", "Table I F1_MACRO SCORE OF DIFFERENT MODEL", "Figure 5. LSTMCNN model architecture", "Figure 6. SARNN model architecture" ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "3-Figure4-1.png", "3-Figure3-1.png", "4-TableI-1.png", "4-Figure5-1.png", "5-Figure6-1.png" ] }
1906.07668
Yoga-Veganism: Correlation Mining of Twitter Health Data
Social media platforms host huge amounts of data, as people share their interests and thoughts through discussions, tweets, and status updates. Going through all of this data manually is not possible; we need to mine it to explore hidden patterns and unknown correlations, find the dominant topics, and understand people's interests from their discussions. In this work, we explore Twitter data related to health. We extract the popular topics under different categories (e.g., diet, exercise) discussed on Twitter via topic modeling, observe model behavior on new tweets, and discover an interesting correlation (Yoga-Veganism). We evaluate accuracy by comparing against ground truth obtained through manual annotation of both the training and test data.
{ "section_name": [ "Introduction", "Data Collection", "Apache Kafka", "Apache Zookeeper", "Data Extraction using Tweepy", "Data Pre-processing", "Methodology", "Construct document-term matrix", "Topic Modeling", "Optimal number of Topics", "Topic Inference", "Manual Annotation", "Visualization", "Topic Frequency Distribution", "Comparison with Ground Truth", "Observation and Future Work", "Conclusions" ], "paragraphs": [ [ "The main motivation of this work has been started with a question \"What do people do to maintain their health?\"– some people do balanced diet, some do exercise. Among diet plans some people maintain vegetarian diet/vegan diet, among exercises some people do swimming, cycling or yoga. There are people who do both. If we want to know the answers of the following questions– \"How many people follow diet?\", \"How many people do yoga?\", \"Does yogi follow vegetarian/vegan diet?\", may be we could ask our acquainted person but this will provide very few intuition about the data. Nowadays people usually share their interests, thoughts via discussions, tweets, status in social media (i.e. Facebook, Twitter, Instagram etc.). It's huge amount of data and it's not possible to go through all the data manually. We need to mine the data to get overall statistics and then we will also be able to find some interesting correlation of data.", "Several works have been done on prediction of social media content BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Prieto et al. proposed a method to extract a set of tweets to estimate and track the incidence of health conditions in society BIBREF5 . Discovering public health topics and themes in tweets had been examined by Prier et al. BIBREF6 . Yoon et al. described a practical approach of content mining to analyze tweet contents and illustrate an application of the approach to the topic of physical activity BIBREF7 .", "Twitter data constitutes a rich source that can be used for capturing information about any topic imaginable. In this work, we use text mining to mine the Twitter health-related data. Text mining is the application of natural language processing techniques to derive relevant information BIBREF8 . Millions of tweets are generated each day on multifarious issues BIBREF9 . Twitter mining in large scale has been getting a lot of attention last few years. Lin and Ryaboy discussed the evolution of Twitter infrastructure and the development of capabilities for data mining on \"big data\" BIBREF10 . Pandarachalil et al. provided a scalable and distributed solution using Parallel python framework for Twitter sentiment analysis BIBREF9 . Large-scale Twitter Mining for drug-related adverse events was developed by Bian et al. BIBREF11 .", "In this paper, we use parallel and distributed technology Apache Kafka BIBREF12 to handle the large streaming twitter data. The data processing is conducted in parallel with data extraction by integration of Apache Kafka and Spark Streaming. Then we use Topic Modeling to infer semantic structure of the unstructured data (i.e Tweets). Topic Modeling is a text mining technique which automatically discovers the hidden themes from given documents. It is an unsupervised text analytic algorithm that is used for finding the group of words from the given document. We build the model using three different algorithms Latent Semantic Analysis (LSA) BIBREF13 , Non-negative Matrix Factorization (NMF) BIBREF14 , and Latent Dirichlet Allocation (LDA) BIBREF15 and infer the topic of tweets. 
To observe the model behavior, we test the model to infer new tweets. The implication of our work is to annotate unlabeled data using the model and find interesting correlation." ], [ "Tweet messages are retrieved from the Twitter source by utilizing the Twitter API and stored in Kafka topics. The Producer API is used to connect the source (i.e. Twitter) to any Kafka topic as a stream of records for a specific category. We fetch data from a source (Twitter), push it to a message queue, and consume it for further analysis. Fig. FIGREF2 shows the overview of Twitter data collection using Kafka." ], [ "In order to handle the large streaming twitter data, we use parallel and distributed technology for big data framework. In this case, the output of the twitter crawling is queued in messaging system called Apache Kafka. This is a distributed streaming platform created and open sourced by LinkedIn in 2011 BIBREF12 . We write a Producer Client which fetches latest tweets continuously using Twitter API and push them to single node Kafka Broker. There is a Consumer that reads data from Kafka (Fig. FIGREF2 )." ], [ "Apache Zookeeper is a distributed, open-source configuration, synchronization service along with naming registry for distributed applications. Kafka uses Zookeeper to store metadata about the Kafka cluster, as well as consumer client details." ], [ "The twitter data has been crawled using Tweepy which is a Python library for accessing the Twitter API. We use Twitter streaming API to extract 40k tweets (April 17-19, 2019). For the crawling, we focus on several keywords that are related to health. The keywords are processed in a non-case-sensitive way. We use filter to stream all tweets containing the word `yoga', `healthylife', `healthydiet', `diet',`hiking', `swimming', `cycling', `yogi', `fatburn', `weightloss', `pilates', `zumba', `nutritiousfood', `wellness', `fitness', `workout', `vegetarian', `vegan', `lowcarb', `glutenfree', `calorieburn'.", "The streaming API returns tweets, as well as several other types of messages (e.g. a tweet deletion notice, user update profile notice, etc), all in JSON format. We use Python libraries json for parsing the data, pandas for data manipulation." ], [ "Data pre-processing is one of the key components in many text mining algorithms BIBREF8 . Data cleaning is crucial for generating a useful topic model. We have some prerequisites i.e. we download the stopwords from NLTK (Natural Language Toolkit) and spacy's en model for text pre-processing.", "It is noticeable that the parsed full-text tweets have many emails, `RT', newline and extra spaces that is quite distracting. We use Python Regular Expressions (re module) to get rid of them. Then we tokenize each text into a list of words, remove punctuation and unnecessary characters. We use Python Gensim package for further processing. Gensim's simple_preprocess() is used for tokenization and removing punctuation. We use Gensim's Phrases model to build bigrams. Certain parts of English speech, like conjunctions (\"for\", \"or\") or the word \"the\" are meaningless to a topic model. These terms are called stopwords and we remove them from the token list. We use spacy model for lemmatization to keep only noun, adjective, verb, adverb. Stemming words is another common NLP technique to reduce topically similar words to their root. For example, \"connect\", \"connecting\", \"connected\", \"connection\", \"connections\" all have similar meanings; stemming reduces those terms to \"connect\". 
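A compact sketch of this preprocessing chain — tokenization with simple_preprocess, bigram phrases, stopword removal, and spaCy lemmatization restricted to content words — could look as follows; the phrase-model thresholds and the spaCy model name are illustrative choices rather than the authors' exact settings, and the NLTK stopword list and spaCy English model must be downloaded beforehand, as noted above:

import spacy
from gensim.models.phrases import Phrases, Phraser
from gensim.utils import simple_preprocess
from nltk.corpus import stopwords

stop_words = set(stopwords.words("english"))
nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])

def preprocess(tweets):
    # Tokenize and strip punctuation and accents.
    tokenized = [simple_preprocess(t, deacc=True) for t in tweets]
    # Learn frequently co-occurring token pairs and merge them into bigrams.
    bigram = Phraser(Phrases(tokenized, min_count=5, threshold=100))
    docs = []
    for tokens in tokenized:
        tokens = [w for w in bigram[tokens] if w not in stop_words]
        # Lemmatize, keeping only nouns, adjectives, verbs and adverbs.
        doc = nlp(" ".join(tokens))
        docs.append([tok.lemma_ for tok in doc
                     if tok.pos_ in {"NOUN", "ADJ", "VERB", "ADV"}])
    return docs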
The Porter stemming algorithm BIBREF16 is the most widely used method." ], [ "We use Twitter health-related data for this analysis. In subsections [subsec:3.1]3.1, [subsec:3.2]3.2, [subsec:3.3]3.3, and [subsec:3.4]3.4 elaborately present how we can infer the meaning of unstructured data. Subsection [subsec:3.5]3.5 shows how we do manual annotation for ground truth comparison. Fig. FIGREF6 shows the overall pipeline of correlation mining." ], [ "The result of the data cleaning stage is texts, a tokenized, stopped, stemmed and lemmatized list of words from a single tweet. To understand how frequently each term occurs within each tweet, we construct a document-term matrix using Gensim's Dictionary() function. Gensim's doc2bow() function converts dictionary into a bag-of-words. In the bag-of-words model, each tweet is represented by a vector in a m-dimensional coordinate space, where m is number of unique terms across all tweets. This set of terms is called the corpus vocabulary." ], [ "Topic modeling is a text mining technique which provides methods for identifying co-occurring keywords to summarize collections of textual information. This is used to analyze collections of documents, each of which is represented as a mixture of topics, where each topic is a probability distribution over words BIBREF17 . Applying these models to a document collection involves estimating the topic distributions and the weight each topic receives in each document. A number of algorithms exist for solving this problem. We use three unsupervised machine learning algorithms to explore the topics of the tweets: Latent Semantic Analysis (LSA) BIBREF13 , Non-negative Matrix Factorization (NMF) BIBREF14 , and Latent Dirichlet Allocation (LDA) BIBREF15 . Fig. FIGREF7 shows the general idea of topic modeling methodology. Each tweet is considered as a document. LSA, NMF, and LDA use Bag of Words (BoW) model, which results in a term-document matrix (occurrence of terms in a document). Rows represent terms (words) and columns represent documents (tweets). After completing topic modeling, we identify the groups of co-occurring words in tweets. These group co-occurring related words makes \"topics\".", "LSA (Latent Semantic Analysis) BIBREF13 is also known as LSI (Latent Semantic Index). It learns latent topics by performing a matrix decomposition on the document-term matrix using Singular Value Decomposition (SVD) BIBREF18 . After corpus creation in [subsec:3.1]Subsection 3.1, we generate an LSA model using Gensim.", "Non-negative Matrix Factorization (NMF) BIBREF14 is a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of non-negative data vectors. It is a matrix factorization method where we constrain the matrices to be non-negative.", "We apply Term Weighting with term frequency-inverse document frequency (TF-IDF) BIBREF19 to improve the usefulness of the document-term matrix (created in [subsec:3.1]Subsection 3.1) by giving more weight to the more \"important\" terms. In Scikit-learn, we can generate at TF-IDF weighted document-term matrix by using TfidfVectorizer. We import the NMF model class from sklearn.decomposition and fit the topic model to tweets.", "Latent Dirichlet Allocation (LDA) BIBREF15 is widely used for identifying the topics in a set of documents, building on Probabilistic Latent Semantic Analysis (PLSI) BIBREF20 . 
LDA considers each document as a collection of topics in a certain proportion and each topic as a collection of keywords in a certain proportion. We provide LDA the optimal number of topics, it rearranges the topics' distribution within the documents and keywords' distribution within the topics to obtain a good composition of topic-keywords distribution.", "We have corpus generated in [subsec:3.1]Subsection 3.1 to train the LDA model. In addition to the corpus and dictionary, we provide the number of topics as well." ], [ "Topic modeling is an unsupervised learning, so the set of possible topics are unknown. To find out the optimal number of topic, we build many LSA, NMF, LDA models with different values of number of topics (k) and pick the one that gives the highest coherence score. Choosing a `k' that marks the end of a rapid growth of topic coherence usually offers meaningful and interpretable topics.", "We use Gensim's coherencemodel to calculate topic coherence for topic models (LSA and LDA). For NMF, we use a topic coherence measure called TC-W2V. This measure relies on the use of a word embedding model constructed from the corpus. So in this step, we use the Gensim implementation of Word2Vec BIBREF21 to build a Word2Vec model based on the collection of tweets.", "We achieve the highest coherence score = 0.4495 when the number of topics is 2 for LSA, for NMF the highest coherence value is 0.6433 for K = 4, and for LDA we also get number of topics is 4 with the highest coherence score which is 0.3871 (see Fig. FIGREF8 ).", "For our dataset, we picked k = 2, 4, and 4 with the highest coherence value for LSA, NMF, and LDA correspondingly (Fig. FIGREF8 ). Table TABREF13 shows the topics and top-10 keywords of the corresponding topic. We get more informative and understandable topics using LDA model than LSA. LSA decomposed matrix is a highly dense matrix, so it is difficult to index individual dimension. LSA is unable to capture the multiple meanings of words. It offers lower accuracy than LDA.", "In case of NMF, we observe same keywords are repeated in multiple topics. Keywords \"go\", \"day\" both are repeated in Topic 2, Topic 3, and Topic 4 (Table TABREF13 ). In Table TABREF13 keyword \"yoga\" has been found both in Topic 1 and Topic 4. We also notice that keyword \"eat\" is in Topic 2 and Topic 3 (Table TABREF13 ). If the same keywords being repeated in multiple topics, it is probably a sign that the `k' is large though we achieve the highest coherence score in NMF for k=4.", "We use LDA model for our further analysis. Because LDA is good in identifying coherent topics where as NMF usually gives incoherent topics. However, in the average case NMF and LDA are similar but LDA is more consistent." ], [ "After doing topic modeling using three different method LSA, NMF, and LDA, we use LDA for further analysis i.e. to observe the dominant topic, 2nd dominant topic and percentage of contribution of the topics in each tweet of training data. To observe the model behavior on new tweets those are not included in training set, we follow the same procedure to observe the dominant topic, 2nd dominant topic and percentage of contribution of the topics in each tweet on testing data. Table TABREF30 shows some tweets and corresponding dominant topic, 2nd dominant topic and percentage of contribution of the topics in each tweet." ], [ "To calculate the accuracy of model in comparison with ground truth label, we selected top 500 tweets from train dataset (40k tweets). 
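A Gensim sketch of this selection procedure — building the dictionary and bag-of-words corpus, scanning candidate topic counts by coherence, and reading off each tweet's dominant topic — is given below; the candidate range of k and the c_v coherence measure are illustrative (the paper uses Gensim's CoherenceModel for LSA/LDA and a separate TC-W2V measure for NMF):

from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

def best_lda(docs, k_values=range(2, 11)):
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    best = None
    for k in k_values:
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
        coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                                   coherence="c_v").get_coherence()
        # Keep the model with the highest coherence score.
        if best is None or coherence > best[0]:
            best = (coherence, k, lda, dictionary, corpus)
    return best

def dominant_topics(lda, dictionary, doc):
    # Topic ids sorted by their contribution to this tweet; the first entry is the
    # dominant topic, the second entry the 2nd dominant topic.
    bow = dictionary.doc2bow(doc)
    return sorted(lda.get_document_topics(bow), key=lambda pair: -pair[1])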
We extracted 500 new tweets (22 April, 2019) as a test dataset. We did manual annotation both for train and test data by choosing one topic among the 4 topics generated from LDA model (7th, 8th, 9th, and 10th columns of Table TABREF13 ) for each tweet based on the intent of the tweet. Consider the following two tweets:", "Tweet 1: Learning some traditional yoga with my good friend.", "Tweet 2: Why You Should #LiftWeights to Lose #BellyFat #Fitness #core #abs #diet #gym #bodybuilding #workout #yoga", "The intention of Tweet 1 is yoga activity (i.e. learning yoga). Tweet 2 is more about weight lifting to reduce belly fat. This tweet is related to workout. When we do manual annotation, we assign Topic 2 in Tweet 1, and Topic 1 in Tweet 2. It's not wise to assign Topic 2 for both tweets based on the keyword \"yoga\". During annotation, we focus on functionality of tweets." ], [ "We use LDAvis BIBREF22 , a web-based interactive visualization of topics estimated using LDA. Gensim's pyLDAVis is the most commonly used visualization tool to visualize the information contained in a topic model. In Fig. FIGREF21 , each bubble on the left-hand side plot represents a topic. The larger the bubble, the more prevalent is that topic. A good topic model has fairly big, non-overlapping bubbles scattered throughout the chart instead of being clustered in one quadrant. A model with too many topics, is typically have many overlaps, small sized bubbles clustered in one region of the chart. In right hand side, the words represent the salient keywords.", "If we move the cursor over one of the bubbles (Fig. FIGREF21 ), the words and bars on the right-hand side have been updated and top-30 salient keywords that form the selected topic and their estimated term frequencies are shown.", "We observe interesting hidden correlation in data. Fig. FIGREF24 has Topic 2 as selected topic. Topic 2 contains top-4 co-occurring keywords \"vegan\", \"yoga\", \"job\", \"every_woman\" having the highest term frequency. We can infer different things from the topic that \"women usually practice yoga more than men\", \"women teach yoga and take it as a job\", \"Yogi follow vegan diet\". We would say there are noticeable correlation in data i.e. `Yoga-Veganism', `Women-Yoga'." ], [ "Each tweet is composed of multiple topics. But, typically only one of the topics is dominant. We extract the dominant and 2nd dominant topic for each tweet and show the weight of the topic (percentage of contribution in each tweet) and the corresponding keywords.", "We plot the frequency of each topic's distribution on tweets in histogram. Fig. FIGREF25 shows the dominant topics' frequency and Fig. FIGREF25 shows the 2nd dominant topics' frequency on tweets. From Fig. FIGREF25 we observe that Topic 1 became either the dominant topic or the 2nd dominant topic for most of the tweets. 7th column of Table TABREF13 shows the corresponding top-10 keywords of Topic 1." ], [ "To compare with ground truth, we gradually increased the size of dataset 100, 200, 300, 400, and 500 tweets from train data and test data (new tweets) and did manual annotation both for train/test data based on functionality of tweets (described in [subsec:3.5]Subsection 3.5).", "For accuracy calculation, we consider the dominant topic only. We achieved 66% train accuracy and 51% test accuracy when the size of dataset is 500 (Fig. FIGREF28 ). We did baseline implementation with random inference by running multiple times with different seeds and took the average accuracy. 
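The interactive topic view described above can be produced with a few lines of pyLDAvis; note that the Gensim helper module has been renamed across pyLDAvis releases, so the import below is one common variant, and the lda, corpus, and dictionary objects are assumed to come from a fitted model such as the sketch given earlier:

import pyLDAvis
import pyLDAvis.gensim_models as gensimvis  # older releases expose pyLDAvis.gensim instead

# lda, corpus and dictionary: a trained LdaModel with its bag-of-words corpus and Dictionary.
vis = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.save_html(vis, "lda_topics.html")  # bubbles = topics, bars = top salient keywords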
For dataset 500, the accuracy converged towards 25% which is reasonable as we have 4 topics." ], [ "In Table TABREF30 , we show some observations. For the tweets in 1st and 2nd row (Table TABREF30 ), we observed understandable topic. We also noticed misleading topic and unrelated topic for few tweets (3rd and 4th row of Table TABREF30 ).", "In the 1st row of Table TABREF30 , we show a tweet from train data and we got Topic 2 as a dominant topic which has 61% of contribution in this tweet. Topic 1 is 2nd dominant topic and 18% contribution here.", "2nd row of Table TABREF30 shows a tweet from test set. We found Topic 2 as a dominant topic with 33% of contribution and Topic 4 as 2nd dominant topic with 32% contribution in this tweet.", "In the 3rd (Table TABREF30 ), we have a tweet from test data and we got Topic 2 as a dominant topic which has 43% of contribution in this tweet. Topic 3 is 2nd dominant with 23% contribution which is misleading topic. The model misinterprets the words `water in hand' and infers topic which has keywords \"swimming, swim, pool\". But the model should infer more reasonable topic (Topic 1 which has keywords \"diet, workout\") here.", "We got Topic 2 as dominant topic for the tweet in 4th row (Table TABREF30 ) which is unrelated topic for this tweet and most relevant topic of this tweet (Topic 2) as 2nd dominant topic. We think during accuracy comparison with ground truth 2nd dominant topic might be considered.", "In future, we will extract more tweets and train the model and observe the model behavior on test data. As we found misleading and unrelated topic in test cases, it is important to understand the reasons behind the predictions. We will incorporate Local Interpretable model-agnostic Explanation (LIME) BIBREF23 method for the explanation of model predictions. We will also do predictive causality analysis on tweets." ], [ "It is challenging to analyze social media data for different application purpose. In this work, we explored Twitter health-related data, inferred topic using topic modeling (i.e. LSA, NMF, LDA), observed model behavior on new tweets, compared train/test accuracy with ground truth, employed different visualizations after information integration and discovered interesting correlation (Yoga-Veganism) in data. In future, we will incorporate Local Interpretable model-agnostic Explanation (LIME) method to understand model interpretability." ] ] }
{ "question": [ "Do the authors report results only on English data?", "What other interesting correlations are observed?" ], "question_id": [ "8abb96b2450ebccfcc5c98772cec3d86cd0f53e0", "f52ec4d68de91dba66668f0affc198706949ff90" ], "nlp_background": [ "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no" ], "search_query": [ "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "FLOAT SELECTED: Table 1: Topics and top-10 keywords of the corresponding topic", "FLOAT SELECTED: Figure 5: Visualization using pyLDAVis. Best viewed in electronic format (zoomed in)." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: Topics and top-10 keywords of the corresponding topic", "FLOAT SELECTED: Figure 5: Visualization using pyLDAVis. Best viewed in electronic format (zoomed in)." ] } ], "annotation_id": [ "0b807e5a88089721cc4f95e33168d8e938755643" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Women-Yoga" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We observe interesting hidden correlation in data. Fig. FIGREF24 has Topic 2 as selected topic. Topic 2 contains top-4 co-occurring keywords \"vegan\", \"yoga\", \"job\", \"every_woman\" having the highest term frequency. We can infer different things from the topic that \"women usually practice yoga more than men\", \"women teach yoga and take it as a job\", \"Yogi follow vegan diet\". We would say there are noticeable correlation in data i.e. `Yoga-Veganism', `Women-Yoga'." ], "highlighted_evidence": [ "We observe interesting hidden correlation in data. Fig. FIGREF24 has Topic 2 as selected topic. Topic 2 contains top-4 co-occurring keywords \"vegan\", \"yoga\", \"job\", \"every_woman\" having the highest term frequency. We can infer different things from the topic that \"women usually practice yoga more than men\", \"women teach yoga and take it as a job\", \"Yogi follow vegan diet\". We would say there are noticeable correlation in data i.e. `Yoga-Veganism', `Women-Yoga'.", "Women-Yoga" ] } ], "annotation_id": [ "b911a2f8205df076b9a5e4f923d50f170fc25452" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Figure 2: Methodology of correlation mining of Twitter health data.", "Figure 3: Topic Modeling using LSA, NMF, and LDA. After topic modeling we identify topic/topics (circles). Red pentagrams and green triangles represent group of co-occurring related words of corresponding topic.", "Figure 1: Twitter Data Collection.", "Figure 4: Optimal Number of Topics vs Coherence Score. Number of Topics (k) are selected based on the highest coherence score. Graphs are rendered in high resolution and can be zoomed in.", "Table 1: Topics and top-10 keywords of the corresponding topic", "Figure 5: Visualization using pyLDAVis. Best viewed in electronic format (zoomed in).", "Figure 6: Visualization using pyLDAVis. Red bubble in left hand side represents selected Topic which is Topic 2. Red bars in right hand side show estimated term frequencies of top-30 salient keywords that form the Topic 2. Best viewed in electronic format (zoomed in)", "Figure 7: Frequency of each topic’s distribution on tweets.", "Table 2: The Dominant & 2nd Dominant Topic of a Tweet and corresponding Topic Contribution on that specific Tweet.", "Figure 8: Percentage of Accuracy (y-axis) vs Size of Dataset (x-axis). Size of Dataset = 100, 200, 300, 400, and 500 tweets. Blue line shows the accuracy of Train data and Orange line represents Test accuracy. Best viewed in electronic format (zoomed in)." ], "file": [ "2-Figure2-1.png", "2-Figure3-1.png", "2-Figure1-1.png", "3-Figure4-1.png", "4-Table1-1.png", "5-Figure5-1.png", "5-Figure6-1.png", "6-Figure7-1.png", "6-Table2-1.png", "7-Figure8-1.png" ] }
1605.04655
Joint Learning of Sentence Embeddings for Relevance and Entailment
We consider the problem of Recognizing Textual Entailment within an Information Retrieval context, where we must simultaneously determine the relevancy as well as the degree of entailment for individual pieces of evidence to determine a yes/no answer to a binary natural language question. We compare several variants of neural networks for sentence embeddings in a setting of decision-making based on evidence of varying relevance. We propose a basic model to integrate evidence for entailment, show that joint training of the sentence embeddings to model relevance and entailment is feasible even with no explicit per-evidence supervision, and show the importance of evaluating strong baselines. We also demonstrate the benefit of carrying over a text comprehension model trained on an unrelated task for our small datasets. Our research is motivated primarily by a new open dataset we introduce, consisting of binary questions and news-based evidence snippets. We also apply the proposed relevance-entailment model to a similar task of ranking multiple-choice test answers, evaluating it on a preliminary dataset of school test questions as well as the standard MCTest dataset, where we improve the neural model state of the art.
{ "section_name": [ "Introduction", "The Hypothesis Evaluation Task", "Argus Dataset", "AI2-8grade/CK12 Dataset", "MCTest Dataset", "Related Work", "Neural Model", "Sentence Embeddings", "Evidence Integration", "Experimental Setup", "Evaluation", "Analysis", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Let us consider the goal of building machine reasoning systems based on knowledge from fulltext data like encyclopedic articles, scientific papers or news articles. Such machine reasoning systems, like humans researching a problem, must be able to recover evidence from large amounts of retrieved but mostly irrelevant information and judge the evidence to decide the answer to the question at hand.", "A typical approach, used implicitly in information retrieval (and its extensions, like IR-based Question Answering systems BIBREF0 ), is to determine evidence relevancy by a keyword overlap feature (like tf-idf or BM-25 BIBREF1 ) and prune the evidence by the relevancy score. On the other hand, textual entailment systems that seek to confirm hypotheses based on evidence BIBREF2 BIBREF3 BIBREF4 are typically provided with only a single piece of evidence or only evidence pre-determined as relevant, and are often restricted to short and simple sentences without open-domain named entity occurences. In this work, we seek to fuse information retrieval and textual entaiment recognition by defining the Hypothesis Evaluation task as deciding the truth value of a hypothesis by integrating numerous pieces of evidence, not all of it equally relevant.", "As a specific instance, we introduce the Argus Yes/No Question Answering task. The problem is, given a real-world event binary question like Did Donald Trump announce he is running for president? and numerous retrieved news article fragments as evidence, to determine the answer for the question. Our research is motivated by the Argus automatic reporting system for the Augur prediction market platform. BIBREF5 Therefore, we consider the question answering task within the constraints of a practical scenario that has limited available dataset and only minimum supervision. Hence, authentic news sentences are the evidence (with noise like segmentation errors, irrelevant participial phrases, etc.), and whereas we have gold standard for the correct answers, the model must do without explicit supervision on which individual evidence snippets are relevant and what do they entail.", "To this end, we introduce an open dataset of questions and newspaper evidence, and a neural model within the Sentence Pair Scoring framework BIBREF6 that (A) learns sentence embeddings for the question and evidence, (B) the embeddings represent both relevance and entailment characteristics as linear classifier inputs, and (C) the model aggregates all available evidence to produce a binary signal as the answer, which is the only training supervision.", "We also evaluate our model on a related task that concerns ranking answers of multiple-choice questions given a set of evidencing sentences. We consider the MCTest dataset and the AI2-8grade/CK12 dataset that we introduce below.", "The paper is structured as follows. In Sec. SECREF2 , we formally outline the Argus question answering task, describe the question-evidence dataset, and describe the multiple-choice questions task and datasets. In Sec. SECREF3 , we briefly survey the related work on similar problems, whereas in Sec. SECREF4 we propose our neural models for joint learning of sentence relevance and entailment. 
We present the results in Sec. SECREF5 and conclude with a summary, model usage recommendations and future work directions in Sec. SECREF6 ." ], [ "Formally, the Hypothesis Evaluation task is to build a function INLINEFORM0 , where INLINEFORM1 is a binary label (no towards yes) and INLINEFORM2 is a hypothesis instance in the form of question text INLINEFORM3 and a set of INLINEFORM4 evidence texts INLINEFORM5 as extracted from an evidence-carrying corpus." ], [ "Our main aim is to propose a solution to the Argus Task, where the Argus system BIBREF7 BIBREF5 is to automatically analyze and answer questions in the context of the Augur prediction market platform. In a prediction market, users pose questions about future events whereas others bet on the yes or no answer, with the assumption that the bet price reflects the real probability of the event. At a specified moment (e.g. after the date of a to-be-predicted sports match), the correct answer is retroactively determined and the bets are paid off. At a larger volume of questions, determining the bet results may present a significant overhead for running of the market. This motivates the Argus system, which should partially automate this determination — deciding questions related to recent events based on open news sources.", "To train a machine learning model for the INLINEFORM0 function, we have created a dataset of questions with gold labels, and produced sets of evidence texts from a variety of news paper using a pre-existing IR (information retrieval) component of the Argus system. We release this dataset openly.", "To pose a reproducible task for the IR component, the time domain of questions was restricted from September 1, 2014 to September 1, 2015, and topic domain was focused to politics, sports and the stock market. To build the question dataset, we have used several sources:", "We asked Amazon Mechanical Turk users to pose questions, together with a golden label and a news article reference. This seeded the dataset with initial, somewhat redundant 250 questions.", "We manually extended this dataset by derived questions with reversed polarity (to obtain an opposite answer).", "We extended the data with questions autogenerated from 26 templates, pertaining top sporting event winners and US senate or gubernatorial elections.", "To build the evidence dataset, we used the Syphon preprocessing component BIBREF5 of the Argus implementation to identify semantic roles of all question tokens and produce the search keywords if a role was assigned to each token. We then used the IR component to query a corpus of newspaper articles, and kept sentences that contained at least 2/3 of all the keywords. Our corpus of articles contained articles from The Guardian (all articles) and from the New York Times (Sports, Politics and Business sections). Furthermore, we scraped partial archive.org historical data out of 35 RSS feeds from CNN, Reuters, BBC International, CBS News, ABC News, c|net, Financial Times, Skynews and the Washington Post.", "For the final dataset, we kept only questions where at least a single evidence was found (i.e. we successfuly assigned a role to each token, found some news stories and found at least one sentence with 2/3 of question keywords within). The final size of the dataset is outlined in Fig. FIGREF8 and some examples are shown in Fig. FIGREF9 ." 
], [ "The AI2 Elementary School Science Questions (no-diagrams variant) released by the Allen Institute cover 855 basic four-choice questions regarding high school science and follows up to the Allen AI Science Kaggle challenge. The vocabulary includes scientific jargon and named entities, and many questions are not factoid, requiring real-world reasoning or thought experiments.", "We have combined each answer with the respective question (by substituting the wh-word in the question by each answer) and retrieved evidence sentences for each hypothesis using Solr search in a collection of CK-12 “Concepts B” textbooks. 525 questions attained any supporting evidence, examples are shown in Fig. FIGREF10 .", "We consider this dataset as preliminary since it was not reviewed by a human and many hypotheses are apparently unprovable by the evidence we have gathered (i.e. the theoretical top accuracy is much lower than 1.0). However, we released it to the public and still included it in the comparison as these qualities reflect many realistic datasets of unknown qualities, so we find relative performances of models on such datasets instructive." ], [ "The Machine Comprehension Test BIBREF8 dataset has been introduced to provide a challenge for researchers to come up with models that approach human-level reading comprehension, and serve as a higher-level alternative to semantic parsing tasks that enforce a specific knowledge representation. The dataset consists of a set of 660 stories spanning multiple sentences, written in simple and clean language (but with less restricted vocabulary than e.g. the bAbI dataset BIBREF9 ). Each story is accompanied by four questions and each of these lists four possible answers; the questions are tagged as based on just one in-story sentence, or requiring multiple sentence inference. We use an official extension of the dataset for RTE evaluation that again textually merges questions and answers.", "The dataset is split in two parts, MC-160 and MC-500, based on provenance but similar in quality. We train all models on a joined training set.", "The practical setting differs from the Argus task as the MCTest dataset contains relatively restricted vocabulary and well-formed sentences. Furthermore, the goal is to find the single key point in the story to focus on, while in the Argus setting we may have many pieces of evidence supporting an answer; another specific characteristics of MCTest is that it consists of stories where the ordering and proximity of evidence sentences matters." ], [ "Our primary concern when integrating natural language query with textual evidence is to find sentence-level representations suitable both for relevance weighing and answer prediction.", "Sentence-level representations in the retrieval + inference context have been popularly proposed within the Memory Network framework BIBREF10 , but explored just in the form of averaged word embeddings; the task includes only very simple sentences and a small vocabulary. Much more realistic setting is introduced in the Answer Sentence Selection context BIBREF11 BIBREF6 , with state-of-art models using complex deep neural architectures with attention BIBREF12 , but the selection task consists of only retrieval and no inference (answer prediction). 
A more indirect retrieval task regarding news summarization was investigated by BIBREF13 .", "In the entailment context, BIBREF4 introduced a large dataset with single-evidence sentence pairs (Stanford Natural Language Inference, SNLI), but a larger vocabulary and slightly more complicated (but still conservatively formed) sentences. They also proposed baseline recurrent neural model for modeling sentence representations, while word-level attention based models are being studied more recently BIBREF14 BIBREF15 .", "In the MCTest text comprehension challenge BIBREF8 , the leading models use complex engineered features ensembling multiple traditional semantic NLP approaches BIBREF16 . The best deep model so far BIBREF17 uses convolutional neural networks for sentence representations, and attention on multiple levels to pick evidencing sentences." ], [ "Our approach is to use a sequence of word embeddings to build sentence embeddings for each hypothesis and respective evidence, then use the sentence embeddings to estimate relevance and entailment of each evidence with regard to the respective hypothesis, and finally integrate the evidence to a single answer." ], [ "To produce sentence embeddings, we investigated the neural models proposed in the dataset-sts framework for deep learning of sentence pair scoring functions. BIBREF6 ", "We refer the reader to BIBREF6 and its references for detailed model descriptions. We evaluate an RNN model which uses bidirectionally summed GRU memory cells BIBREF18 and uses the final states as embeddings; a CNN model which uses sentence-max-pooled convolutional filters as embeddings BIBREF19 ; an RNN-CNN model which puts the CNN on top of per-token GRU outputs rather than the word embeddings BIBREF20 ; and an attn1511 model inspired by BIBREF20 that integrates the RNN-CNN model with per-word attention to build hypothesis-specific evidence embeddings. We also report the baseline results of avg mean of word embeddings in the sentence with projection matrix and DAN Deep Averaging Network model that employs word-level dropout and adds multiple nonlinear transformations on top of the averaged embeddings BIBREF21 .", "The original attn1511 model BIBREF6 (as tuned for the Answer Sentence Selection task) used a softmax attention mechanism that would effectively select only a few key words of the evidence to focus on — for a hypothesis-evidence token INLINEFORM0 scalar attention score INLINEFORM1 , the focus INLINEFORM2 is: INLINEFORM3 ", "A different focus mechanism exhibited better performance in the Hypothesis Evaluation task, modelling per-token attention more independently: INLINEFORM0 ", "We also use relu instead of tanh in the CNNs.", "As model input, we use the standard GloVe embeddings BIBREF22 extended with binary inputs denoting token type and overlap with token or bigram in the paired sentence, as described in BIBREF6 . However, we introduce two changes to the word embedding model — we use 50-dimensional embeddings instead of 300-dimensional, and rather than building an adaptable embedding matrix from the training set words preinitialized by GloVe, we use only the top 100 most frequent tokens in the adaptable embedding matrix and use fixed GloVe vectors for all other tokens (including tokens not found in the training set). 
In preliminary experiments, this improved generalization for highly vocabulary-rich tasks like Argus, while still allowing the high-frequency tokens (like interpunction or conjunctions) to learn semantic operator representations.", "As an additional method for producing sentence embeddings, we consider the Ubu. RNN transfer learning method proposed by BIBREF6 where an RNN model (as described above) is trained on the Ubuntu Dialogue task BIBREF23 . The pretrained model weights are used to initialize an RNN model which is then fine-tuned on the Hypothesis Evaluation task. We use the same model as originally proposed (except the aforementioned vocabulary handling modification), with the dot-product scoring used for Ubuntu Dialogue training replaced by the MLP point-scores described below." ], [ "Our main proposed schema for evidence integration is Evidence Weighing. From each pair of hypothesis and evidence embeddings, we produce two INLINEFORM0 predictions using a pair of MLP point-scorers of dataset-sts BIBREF6 with a sigmoid activation function. The predictions are interpreted as INLINEFORM1 entailment (0 to 1 as no to yes) and relevance INLINEFORM2 . To integrate the predictions across multiple pieces of evidence, we propose a weighted average model: INLINEFORM3 ", "We do not have access to any explicit labels for the evidence, but we train the model end-to-end with just INLINEFORM0 labels and the formula for INLINEFORM1 is differentiable, carrying over the gradient to the sentence embedding model. This can be thought of as a simple passage-wide attention model.", "As a baseline strategy, we also consider Evidence Averaging, where we simply produce a single scalar prediction per hypothesis-evidence pair (using the same strategy as above) and decide the hypothesis simply based on the mean prediction across available evidence.", "Finally, following the success reported in the Answer Sentence Selection task BIBREF6 , we consider a BM25 Feature combined with Evidence Averaging, where the MLP scorer that produces the pair scalar prediction as above takes an additional BM25 word overlap score input BIBREF1 besides the elementwise embedding comparisons." ], [ "We implement the differentiable model in the Keras framework BIBREF24 and train the whole network from word embeddings to the output evidence-integrated hypothesis label using the binary cross-entropy loss as an objective and the Adam optimization algorithm BIBREF25 . We apply INLINEFORM0 regularization and a INLINEFORM1 dropout.", "Following the recommendation of BIBREF6 , we report expected test set question accuracy as determined by average accuracy in 16 independent trainings and with 95% confidence intervals based on the Student's t-distribution." ], [ "In Fig. FIGREF26 , we report the model performance on the Argus task, showing that the Ubuntu Dialogue transfer RNN outperforms other proposed models by a large margin. However, a comparison of evidence integration approaches in Fig. FIGREF27 shows that evidence integration is not the major deciding factor and there are no statistically meaningful differences between the evaluated approaches. We measured high correlation between classification and relevance scores with Pearson's INLINEFORM0 , showing that our model does not learn a separate evidence weighing function on this task.", "In Fig. FIGREF28 , we look at the model performance on the AI2-8grade/CK12 task, repeating the story of the Ubuntu Dialogue transfer RNN dominating other models.
However, on this task our proposed evidence weighing scheme improves over simpler approaches — but just on the best model, as shown in Fig. FIGREF29 . On the other hand, the simplest averaging model benefits from at least BM25 information to select relevant evidence, apparently.", "For the MCTest dataset, Fig. FIGREF30 compares our proposed models with the current state-of-art ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 , as well as past neural models from the literature, all using attention mechanisms — the Attentive Reader of BIBREF26 , Neural Reasoner of BIBREF27 and the HABCNN model family of BIBREF17 . We see that averaging-based models are surprisingly effective on this task, and in particular on the MC-500 dataset it can beat even the best so far reported model of HABCNN-TE. Our proposed transfer model is statistically equivalent to the best model on both datasets (furthermore, previous work did not include confidence intervals, even though their models should also be stochastically initialized).", "As expected, our models did badly on the multiple-evidence class of questions — we made no attempt to model information flow across adjacent sentences in our models as this aspect is unique to MCTest in the context of our work.", "Interestingly, evidence weighing does play an important role on the MCTest task as shown in Fig. FIGREF31 , significantly boosting model accuracy. This confirms that a mechanism to allocate attention to different sentences is indeed crucial for this task." ], [ "While we can universally proclaim Ubu. RNN as the best model, we observe many aspects of the Hypothesis Evaluation problem that are shared by the AI2-8grade/CK12 and MCTest tasks, but not by the Argus task.", "Our largest surprise lies in the ineffectivity of evidence weighing on the Argus task, since observations of irrelevant passages initially led us to investigate this model. We may also see that non-pretrained RNN does very well on the Argus task while CNN is a better model otherwise.", "An aspect that could explain this rift is that the latter two tasks are primarily retrieval based, where we seek to judge each evidence as irrelevant or essentially a paraphrase of the hypothesis. On the other hand, the Argus task is highly semantic and compositional, with the questions often differing just by a presence of negation — recurrent model that can capture long-term dependencies and alter sentence representations based on the presence of negation may represent an essential improvement over an n-gram-like convolutional scheme. We might also attribute the lack of success of evidence weighing in the Argus task to a more conservative scheme of passage retrieval employed in the IR pipeline that produced the dataset. Given the large vocabulary and noise levels in the data, we may also simply require more data to train the evidence weighing properly.", "We see from the training vs. test accuracies that RNN-based models (including the word-level attention model) have a strong tendency to overfit on our small datasets, while CNN is much more resilient. While word-level attention seems appealing for such a task, we speculate that we simply might not have enough training data to properly train it. 
Investigating attention transfer is a point for future work — by our preliminary experiments on multiple datasets, attention models appear more task specific than the basic text comprehension models of memory based RNNs.", "One concrete limitation of our models in case of the Argus task is a problem of reconciling particular named entity instances. The more obvious form of this issue is Had Roger Federer beat Martin Cilic in US OPEN 2014? versus an opposite Had Martin Cilic beat Roger Federer in US OPEN 2014? — another form of this problem is reconciling a hypothesis like Will the Royals win the World Series? with evidence Giants Win World Series With Game 7 Victory Over Royals. An abstract embedding of the sentence will not carry over the required information — it is important to explicitly pass and reconcile the roles of multiple named entities which cannot be meaningfully embedded in a GloVe-like semantic vector space." ], [ "We have established a general Hypothesis Evaluation task with three datasets of various properties, and shown that neural models can exhibit strong performance (with less hand-crafting effort than non-neural classifiers). We propose an evidence weighing model that is never harmful and improves performance on some tasks. We also demonstrate that simple models can outperform or closely match performance of complex architectures; all the models we consider are task-independent and were successfully used in different contexts than Hypothesis Evaluation BIBREF6 . Our results empirically show that a basic RNN text comprehension model well trained on a large dataset (even if the task is unrelated and vocabulary characteristics are very different) outperforms or matches more complex architectures trained only on the dataset of the task at hand.", "Finally, on the MCTest dataset, our best proposed model is better or statistically indistinguishable from the best neural model reported so far BIBREF17 , even though it has a simpler architecture and only a naive attention mechanism.", "We would like to draw several recommendations for future research from our findings: (A) encourage usage of basic neural architectures as evaluation baselines; (B) suggest that future research includes models pretrained on large data as baselines; (C) validate complex architectures on tasks with large datasets if they cannot beat baselines on small datasets; and (D) for randomized machine comprehension models (e.g. neural networks with random weight initialization, batch shuffling or probabilistic dropout), report expected test set performance based on multiple independent training runs.", "As a general advice for solving complex tasks with small datasets, besides the point (B) above our analysis suggests convolutional networks as the best models regarding the tendency to overfit, unless semantic composionality plays a crucial role in the task; in this scenario, simple averaging-based models are a great start as well. Preinitializing a model also helps against overfitting.", "We release our implementation of the Argus task, evidence integration models and processing of all the evaluated datasets as open source.", "We believe the next step towards machine comprehension NLP models (based on deep learning but capable of dealing with real-world, large-vocabulary data) will involve research into a better way to deal with entities without available embeddings. When distinguishing specific entities, simple word-level attention mechanisms will not do. 
A promising approach could extend the flexibility of the final sentence representation, moving from attention mechanism to a memory mechanism by allowing the network to remember a set of “facts” derived from each sentence; related work has been done for example on end-to-end differentiable shift-reduce parsers with LSTM as stack cells BIBREF28 ." ], [ "This work was co-funded by the Augur Project of the Forecast Foundation and financially supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS16/ 084/OHK3/1T/13. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the programme “Projects of Large Research, Development, and Innovations Infrastructures.”", "We'd like to thank Peronet Despeignes of the Augur Project for his support. Carl Burke has provided instructions for searching CK-12 ebooks within the Kaggle challenge." ] ] }
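The attention "focus" change described in the Sentence Embeddings section above can be illustrated with a short sketch. The exact formulas were lost in the INLINEFORM placeholders, so this is only a hedged reconstruction from the prose ("modelling per-token attention more independently"): the original attn1511 softmax focus normalizes scores competitively across all evidence tokens, while the variant preferred here scores each token independently with a sigmoid. The scores themselves are random placeholders standing in for the hypothesis-evidence token scorer.

```python
# Hedged sketch: softmax focus vs. independent per-token sigmoid focus.
import torch

scores = torch.randn(12)                      # one scalar score per evidence token

softmax_focus = torch.softmax(scores, dim=0)  # concentrates on a few key tokens
sigmoid_focus = torch.sigmoid(scores)         # each token attended independently
```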
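The Evidence Weighing scheme from the Evidence Integration section above can be summarized in a small end-to-end sketch. The original model is implemented in Keras on top of the dataset-sts framework; the PyTorch-style code below is only an illustration under assumed layer sizes, a stand-in bidirectional GRU encoder, and assumed [|h - e|, h * e] comparison features, not the authors' code. What it does reproduce is the core idea: per-evidence entailment y_i and relevance r_i from sigmoid MLP point-scorers, aggregated as y = sum_i(r_i * y_i) / sum_i(r_i) and trained end-to-end from the question-level binary label only.

```python
import torch
import torch.nn as nn


class EvidenceWeighingModel(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 50, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Stand-in sentence encoder: bidirectional GRU with summed final states.
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        pair_dim = 2 * hidden
        self.entailment = nn.Sequential(nn.Linear(pair_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 1), nn.Sigmoid())
        self.relevance = nn.Sequential(nn.Linear(pair_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1), nn.Sigmoid())

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        _, h_n = self.gru(self.embed(tokens))   # h_n: (2, batch, hidden)
        return h_n[0] + h_n[1]                  # sum the two directions

    def forward(self, hypothesis: torch.Tensor, evidence: torch.Tensor) -> torch.Tensor:
        # hypothesis: (1, seq_len); evidence: (n_evidence, seq_len) for one question.
        h, e = self.encode(hypothesis), self.encode(evidence)
        pair = torch.cat([(h - e).abs(), h * e], dim=-1)
        y_i = self.entailment(pair).squeeze(-1)   # per-evidence entailment in (0, 1)
        r_i = self.relevance(pair).squeeze(-1)    # per-evidence relevance weight
        # Weighted average: relevance acts as a simple passage-wide attention.
        return (r_i * y_i).sum() / (r_i.sum() + 1e-8)


# Only the final answer is supervised:
# loss = nn.functional.binary_cross_entropy(model(hyp_ids, ev_ids), gold_label)
```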
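The reporting protocol from the Experimental Setup section (expected test accuracy over 16 independent trainings with a 95% confidence interval from the Student's t-distribution) amounts to a few lines; the accuracy values below are made-up placeholders for illustration.

```python
import numpy as np
from scipy import stats

def report_accuracy(run_accuracies):
    # Mean accuracy over independent runs and the two-sided 95% t-interval half-width.
    accs = np.asarray(run_accuracies, dtype=float)
    half_width = stats.t.ppf(0.975, df=len(accs) - 1) * stats.sem(accs)
    return accs.mean(), half_width

mean, hw = report_accuracy([0.71, 0.69, 0.73, 0.70, 0.72, 0.68])
print(f"expected accuracy {mean:.3f} +/- {hw:.3f}")
```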
{ "question": [ "what were the baselines?", "what is the state of the art for ranking mc test answers?", "what is the size of the introduced dataset?", "what datasets did they use?" ], "question_id": [ "225a567eeb2698a9d3f1024a8b270313a6d15f82", "35b10e0dc2cb4a1a31d5692032dc3fbda933bf7d", "f5eac66c08ebec507c582a2445e99317a83e9ebe", "62613aca3d7c7d534c9f6d8cb91ff55626bb8695" ], "nlp_background": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "search_query": [ "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "RNN model", "CNN model ", "RNN-CNN model", "attn1511 model", "Deep Averaging Network model", "avg mean of word embeddings in the sentence with projection matrix" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We refer the reader to BIBREF6 and its references for detailed model descriptions. We evaluate an RNN model which uses bidirectionally summed GRU memory cells BIBREF18 and uses the final states as embeddings; a CNN model which uses sentence-max-pooled convolutional filters as embeddings BIBREF19 ; an RNN-CNN model which puts the CNN on top of per-token GRU outputs rather than the word embeddings BIBREF20 ; and an attn1511 model inspired by BIBREF20 that integrates the RNN-CNN model with per-word attention to build hypothesis-specific evidence embeddings. We also report the baseline results of avg mean of word embeddings in the sentence with projection matrix and DAN Deep Averaging Network model that employs word-level dropout and adds multiple nonlinear transformations on top of the averaged embeddings BIBREF21 ." ], "highlighted_evidence": [ "We evaluate an RNN model which uses bidirectionally summed GRU memory cells BIBREF18 and uses the final states as embeddings; a CNN model which uses sentence-max-pooled convolutional filters as embeddings BIBREF19 ; an RNN-CNN model which puts the CNN on top of per-token GRU outputs rather than the word embeddings BIBREF20 ; and an attn1511 model inspired by BIBREF20 that integrates the RNN-CNN model with per-word attention to build hypothesis-specific evidence embeddings. We also report the baseline results of avg mean of word embeddings in the sentence with projection matrix and DAN Deep Averaging Network model that employs word-level dropout and adds multiple nonlinear transformations on top of the averaged embeddings BIBREF21 ." ] } ], "annotation_id": [ "48b410e2b345e4816884da74f77f564dc47a4f5e" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "ensemble of hand-crafted syntactic and frame-semantic features BIBREF16" ], "yes_no": null, "free_form_answer": "", "evidence": [ "For the MCTest dataset, Fig. FIGREF30 compares our proposed models with the current state-of-art ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 , as well as past neural models from the literature, all using attention mechanisms — the Attentive Reader of BIBREF26 , Neural Reasoner of BIBREF27 and the HABCNN model family of BIBREF17 . We see that averaging-based models are surprisingly effective on this task, and in particular on the MC-500 dataset it can beat even the best so far reported model of HABCNN-TE. 
Our proposed transfer model is statistically equivalent to the best model on both datasets (furthermore, previous work did not include confidence intervals, even though their models should also be stochastically initialized)." ], "highlighted_evidence": [ "For the MCTest dataset, Fig. FIGREF30 compares our proposed models with the current state-of-art ensemble of hand-crafted syntactic and frame-semantic features BIBREF16 , as well as past neural models from the literature, all using attention mechanisms — the Attentive Reader of BIBREF26 , Neural Reasoner of BIBREF27 and the HABCNN model family of BIBREF17 . " ] } ], "annotation_id": [ "0b9d9254766f774d22cbcd8944f0afadc13c946b" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "609d283853cdbd9b8b8c7744c2df798db23b3e2e" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Argus Dataset", "AI2-8grade/CK12 Dataset", "MCTest Dataset" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Argus Dataset", "AI2-8grade/CK12 Dataset", "We consider this dataset as preliminary since it was not reviewed by a human and many hypotheses are apparently unprovable by the evidence we have gathered (i.e. the theoretical top accuracy is much lower than 1.0). However, we released it to the public and still included it in the comparison as these qualities reflect many realistic datasets of unknown qualities, so we find relative performances of models on such datasets instructive.", "MCTest Dataset", "The Machine Comprehension Test BIBREF8 dataset has been introduced to provide a challenge for researchers to come up with models that approach human-level reading comprehension, and serve as a higher-level alternative to semantic parsing tasks that enforce a specific knowledge representation. The dataset consists of a set of 660 stories spanning multiple sentences, written in simple and clean language (but with less restricted vocabulary than e.g. the bAbI dataset BIBREF9 ). Each story is accompanied by four questions and each of these lists four possible answers; the questions are tagged as based on just one in-story sentence, or requiring multiple sentence inference. We use an official extension of the dataset for RTE evaluation that again textually merges questions and answers." ], "highlighted_evidence": [ "Argus Dataset", "AI2-8grade/CK12 Dataset", "We consider this dataset as preliminary since it was not reviewed by a human and many hypotheses are apparently unprovable by the evidence we have gathered (i.e. the theoretical top accuracy is much lower than 1.0). ", "MCTest Dataset", "We use an official extension of the dataset for RTE evaluation that again textually merges questions and answers." ] } ], "annotation_id": [ "36dd4b385dd896529a94fd92b1689e385d0ebcc1" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] } ] }
{ "caption": [ "Figure 51.14 In a pedigree, squares symbolize males, and circles represent females. energy pyramid model is used to show the pattern of traits that are passed from one generation to the next in a family? Energy is passed up a food chain or web from lower to higher trophic levels. Each step of the food chain in the energy pyramid is called a trophic level." ], "file": [ "1-Figure51.14-1.png" ] }
1911.09483
MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning
In sequence to sequence learning, the self-attention mechanism proves to be highly effective, and achieves significant improvements in many tasks. However, the self-attention mechanism is not without its own flaws. Although self-attention can model extremely long dependencies, the attention in deep layers tends to overconcentrate on a single token, leading to insufficient use of local information and difficulty in representing long sequences. In this work, we explore parallel multi-scale representation learning on sequence data, striving to capture both long-range and short-range language structures. To this end, we propose the Parallel MUlti-Scale attEntion (MUSE) and MUSE-simple. MUSE-simple contains the basic idea of parallel multi-scale sequence representation learning, and it encodes the sequence in parallel, in terms of different scales, with help from self-attention and pointwise transformation. MUSE builds on MUSE-simple and explores combining convolution and self-attention for learning sequence representations at more scales. We focus on machine translation, and the proposed approach achieves substantial performance improvements over Transformer, especially on long sequences. More importantly, we find that although conceptually simple, its success in practice requires intricate considerations, and the multi-scale attention must build on a unified semantic space. Under the common setting, the proposed model achieves substantial performance gains and outperforms all previous models on three main machine translation tasks. In addition, MUSE has potential for accelerating inference due to its parallelism. Code will be available at this https URL
{ "section_name": [ "Introduction", "MUSE: Parallel Multi-Scale Attention", "MUSE: Parallel Multi-Scale Attention ::: Attention Mechanism for Global Context Representation", "MUSE: Parallel Multi-Scale Attention ::: Convolution for Local Context Modeling", "MUSE: Parallel Multi-Scale Attention ::: Point-wise Feed-forward Network for Capturing Token Representations", "Experiment", "Experiment ::: Datasets", "Experiment ::: Experimental Settings ::: Model", "Experiment ::: Experimental Settings ::: Training", "Experiment ::: Experimental Settings ::: Evaluation", "Experiment ::: Results", "Experiment ::: How do we propose effective parallel multi-scale attention?", "Experiment ::: Further Analysis ::: Parallel multi-scale attention brings time efficiency on GPUs", "Related Work", "Conclusion and Future work", "Conclusion and Future work ::: Acknowledgments" ], "paragraphs": [ [ "In recent years, Transformer has been remarkably adept at sequence learning tasks like machine translation BIBREF0, BIBREF1, text classification BIBREF2, BIBREF3, language modeling BIBREF4, BIBREF5, etc. It is solely based on an attention mechanism that captures global dependencies between input tokens, dispensing with recurrence and convolutions entirely. The key idea of the self-attention mechanism is updating token representations based on a weighted sum of all input representations.", "However, recent research BIBREF6 has shown that the Transformer has surprising shortcomings in long sequence learning, exactly because of its use of self-attention. As shown in Figure 1 (a), in the task of machine translation, the performance of Transformer drops with the increase of the source sentence length, especially for long sequences. The reason is that the attention can be over-concentrated and disperse, as shown in Figure 1 (b), and only a small number of tokens are represented by attention. It may work fine for shorter sequences, but for longer sequences, it causes insufficient representation of information and brings difficulty for the model to comprehend the source information intactly. In recent work, local attention that constrains the attention to focus on only part of the sequences BIBREF7, BIBREF8 is used to address this problem. However, it costs self-attention the ability to capture long-range dependencies and also does not demonstrate effectiveness in sequence to sequence learning tasks.", "To build a module with both inductive bias of local and global context modelling in sequence to sequence learning, we hybrid self-attention with convolution and present Parallel multi-scale attention called MUSE. It encodes inputs into hidden representations and then applies self-attention and depth-separable convolution transformations in parallel. The convolution compensates for the insufficient use of local information while the self-attention focuses on capturing the dependencies. Moreover, this parallel structure is highly extensible, and new transformations can be easily introduced as new parallel branches, and is also favourable to parallel computation.", "The main contributions are summarized as follows:", "We find that the attention mechanism alone suffers from dispersed weights and is not suitable for long sequence representation learning. 
The proposed method tries to address this problem and achieves much better performance on generating long sequences.", "We propose a parallel multi-scale attention and explore a simple but efficient method to successfully combine convolution with self-attention all in one module.", "MUSE outperforms all previous models with the same training data and a comparable model size, with state-of-the-art BLEU scores on three main machine translation tasks.", "MUSE-simple introduces parallel representation learning and brings extensibility and parallelism. Experiments show that the inference speed can be increased by 31% on GPUs." ], [ "Like other sequence-to-sequence models, MUSE also adopts an encoder-decoder framework. The encoder takes a sequence of word embeddings $(x_1, \\cdots , x_n)$ as input where $n$ is the length of the input. It transforms the word embeddings into a sequence of hidden representations ${z} = (z_1, \\cdots , z_n)$. Given ${z}$, the decoder is responsible for generating a sequence of text $(y_1, \\cdots , y_m)$ token by token.", "The encoder is a stack of $N$ MUSE modules. Residual connections and layer normalization are used to connect two adjacent layers. The decoder is similar to the encoder, except that each MUSE module in the decoder not only captures features from the generated text representations but also performs attention over the output of the encoder stack through additional context attention. Residual connections and layer normalization are also used to connect two modules and two adjacent layers.", "The key part in the proposed model is the MUSE module, which contains three main parts: self-attention for capturing global features, depth-wise separable convolution for capturing local features, and a position-wise feed-forward network for capturing token features. The module takes the output of the $(i-1)$-th layer as input and generates the output representation in a fusion way:", "where “Attention” refers to self-attention, “Conv” refers to dynamic convolution, and “Pointwise” refers to a position-wise feed-forward network. The following lists the details of each part. We also propose MUSE-simple, a simple version of MUSE, which generates the output representation similarly to the MUSE model except that it does not include the convolution operation:" ], [ "Self-attention is responsible for learning representations of global context. For a given input sequence $X$, it first projects $X$ into three representations, key $K$, query $Q$, and value $V$. Then, it uses a self-attention mechanism to get the output representation:", "where $W^O$, $W^Q$, $W^K$, and $W^V$ are projection parameters. The self-attention operation $\\sigma $ is the dot-product attention between the key, query, and value:", "Note that we conduct a projecting operation over the value in our self-attention mechanism $V_1=VW^V$ here." ], [ "We introduce convolution operations into MUSE to capture local context. To learn contextual sequence representations in the same hidden space, we choose depth-wise convolution BIBREF9 (we denote it as DepthConv in the experiments) as the convolution operation because it includes two separate transformations, namely, a point-wise projecting transformation and a contextual transformation. This is because the original convolution operator is not separable, while DepthConv can share the same point-wise projecting transformation with the self-attention mechanism.
We choose dynamic convolution BIBREF10, the best variant of DepthConv, as our implementation.", "Each convolution sub-module contains multiple cells with different kernel sizes. They are used for capturing different-range features. The output of the convolution cell with kernel size $k$ is:", "where $W^{V}$ and $W^{out}$ are parameters, $W^{V}$ is a point-wise projecting transformation matrix. The $Depth\\_conv$ refers to depth convolution in the work of BIBREF10. For an input sequence $X$, the output $O$ is computed as:", "where $d$ is the hidden size. Note that we conduct the same projecting operation over the input in our convolution mechanism $V_2=XW^V$ here with that in self-attention mechanism.", "Shared projection To learn contextual sequence representations in the same hidden space, the projection in the self-attention mechanism $V_1=VW_V$ and that in the convolution mechanism $V_2=XW^V$ is shared. Because the shared projection can project the input feature into the same hidden space. If we conduct two independent projection here: $V_1=VW_1^V$ and $V_2=XW^V_2$, where $W_1^V$ and $W_2^V$ are two parameter matrices, we call it as separate projection. We will analyze the necessity of applying shared projection here instead of separate projection.", "Dynamically Selected Convolution Kernels We introduce a gating mechanism to automatically select the weight of different convolution cells." ], [ "To learn token level representations, MUSE concatenates an self-attention network with a position-wise feed-forward network at each layer. Since the linear transformations are the same across different positions, the position-wise feed-forward network can be seen as a token feature extractor.", "where $W_1$, $b_1$, $W_2$, and $b_2$ are projection parameters." ], [ "We evaluate MUSE on four machine translation tasks. This section describes the datasets, experimental settings, detailed results, and analysis." ], [ "WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model. We use the standard split of development set and test set. We use newstest2014 as the test set and use newstest2012 +newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with the vocabulary size of $40K$. For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$. The test and validation datasets we used are the same as BIBREF0.", "IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size for English is $17.2K$, and the vocabulary size for the Vietnamese is $6.8K$." ], [ "For fair comparisons, we only compare models reported with the comparable model size and the same training data. We do not compare BIBREF12 because it is an ensemble method. We build MUSE-base and MUSE-large with the parameter size comparable to Transformer-base and Transformer-large. 
We adopt multi-head attention BIBREF0 as implementation of self-attention in MUSE module. The number of attention head is set to 4 for MUSE-base and 16 for MUSE-large. We also add the network architecture built by MUSE-simple in the similar way into the comparison.", "MUSE consists of 12 residual blocks for encoder and 12 residual blocks for decoder, the dimension is set to 384 for MUSE-base and 768 for MUSE-large. The hidden dimension of non linear transformation is set to 768 for MUSE-base and 3072 for MUSE-large.", "The MUSE-large is trained on 4 Titan RTX GPUs while the MUSE-base is trained on a single NVIDIA RTX 2080Ti GPU. The batch size is calculated at the token level, which is called dynamic batching BIBREF0. We adopt dynamic convolution as the variant of depth-wise separable convolution. We tune the kernel size on the validation set. For convolution with a single kernel, we use the kernel size of 7 for all layers. In case of dynamic selected kernels, the kernel size is 3 for small kernels and 15 for large kernels for all layers." ], [ "The training hyper-parameters are tuned on the validation set.", "MUSE-large For training MUSE-large, following BIBREF13, parameters are updated every 32 steps. We train the model for $80K$ updates with a batch size of 5120 for En-Fr, and train the model for ${30K}$ updates with a batch size of 3584 for En-De. The dropout rate is set to $0.1$ for En-Fr and ${0.3}$ for En-De. We borrow the setup of optimizer from BIBREF10 and use the cosine learning rate schedule with 10000 warmup steps. The max learning rate is set to $0.001$ on En-De translation and ${0.0007}$ on En-Fr translation. For checkpoint averaging, following BIBREF10, we tune the average checkpoints for En-De translation tasks. For En-Fr translation, we do not average checkpoint but use the final single checkpoint.", "MUSE-base We train and test MUSE-base on two small datasets, IWSLT 2014 De-En translation and IWSLT2015 En-Vi translation. Following BIBREF0, we use Adam optimizer with a learning rate of $0.001$. We use the warmup mechanism and invert the learning rate decay with warmup updates of $4K$. For the De-En dataset, we train the model for $20K$ steps with a batch size of $4K$. The parameters are updated every 4 steps. The dropout rate is set to $0.4$. For the En-Vi dataset, we train the model for $10K$ steps with a batch size of $4K$. The parameters are also updated every 4 steps. The dropout rate is set to $0.3$. We save checkpoints every epoch and average the last 10 checkpoints for inference." ], [ "During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation." ], [ "As shown in Table TABREF24, MUSE outperforms all previously models on En-De and En-Fr translation, including both state-of-the-art models of stand alone self-attention BIBREF0, BIBREF13, and convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that either self-attention or convolution alone is not enough for sequence to sequence learning. 
The proposed parallel multi-scale attention improves over them both on En-De and En-Fr.", "Compared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation.", "Relative position or local attention constraints bring improvements over the original self-attention model, but parallel multi-scale attention outperforms them.", "MUSE can also scale to small models and small datasets: as depicted in Table TABREF25, MUSE-base pushes the state-of-the-art from 35.7 to 36.3 on the IWSLT De-En translation dataset.", "It is shown in Table TABREF24 and Table TABREF25 that MUSE-simple, which contains the basic idea of parallel multi-scale attention, achieves state-of-the-art performance on three major machine translation datasets." ], [ "In this subsection we compare MUSE and its variants on IWSLT 2015 De-En translation to answer this question.", "Does concatenating self-attention with convolution necessarily improve the model? To bridge the gap between the point-wise transformation, which learns token level representations, and self-attention, which learns representations of global context, we introduce convolution to enhance our multi-scale attention. As we can see from the first experiment group of Table TABREF27, convolution is important in the parallel multi-scale attention. However, it is not easy to combine convolution and self-attention in one module to build better representations on sequence to sequence tasks. As shown in the first line of both the second and third groups of Table TABREF27, simply learning local representations by using convolution or depth-wise separable convolution in parallel with self-attention harms the performance. Furthermore, combining depth-wise separable convolution (in this work we choose its best variant, dynamic convolution, as the implementation) is even worse than combining convolution.", "Why do we choose DepthConv, and what is the importance of sharing the projection of DepthConv and self-attention? We conjecture that convolution and self-attention both learn contextual sequence representations and they should share the point-wise transformation and perform the contextual transformation in the same hidden space. We first project the input to a hidden representation and perform a variant of depth-wise convolution and self-attention transformations in parallel. The first two experiments in the third group of Table TABREF27 validate the utility of sharing the projection in parallel multi-scale attention: shared projection gains 1.4 BLEU scores over separate projection, and brings an improvement of 0.5 BLEU scores over MUSE-simple (without DepthConv).", "How large should the kernel size be? Comparative experiments show that too large a kernel harms performance both for DepthConv and convolution. Since self-attention and point-wise transformations are already present, simply applying the growing kernel size schedule proposed in SliceNet BIBREF15 doesn't work. Thus, we propose to use dynamically selected kernel sizes to let the learned network decide the kernel size for each layer." ], [ "The underlying parallel structure (compared to the sequential structure in each block of Transformer) allows MUSE to be efficiently computed on GPUs. For example, we can combine small matrices into large matrices, and while it does not reduce the number of actual operations, it can be better parallelized by GPUs to speed up computation.
Concretely, for each MUSE module, we first concatenate $W^Q,W^K,W^V$ of self-attention and $W_1$ of the point-wise feed-forward transformation into a single encoder matrix $W^{Enc}$, and then perform transformations such as self-attention, depth-wise separable convolution, and nonlinear transformation, in parallel, to learn multi-scale representations in the hidden layer. $W^O,W_2,W^{out}$ can also be combined into a single decoder matrix $W^{Dec}$. The decoder of the sequence to sequence architecture can be implemented similarly.", "In Table TABREF31, we conduct comparisons to show the speed gains with the aforementioned implementation, and the batch size is set to one sample per batch to simulate an online inference environment. Under these settings, where the numbers of parameters are similar for MUSE and Transformer, an increase of about 31% in inference speed can be obtained. The experiments use MUSE with 6 MUSE-simple modules and Transformer with 6 base blocks. The hidden size is set to 512.", "Parallel multi-scale attention generates much better long sequences. As demonstrated in Figure FIGREF32, MUSE generates better sequences of various lengths than self-attention, and it is remarkably adept at generating long sequences; e.g., for sequences longer than 100, MUSE is two times better.", "Lower layers prefer local context and higher layers prefer more contextual representations. MUSE contains multiple dynamic convolution cells, whose streams are fused by a gated mechanism. The weight for each dynamic cell is a scalar. Here we analyze the weight of different dynamic convolution cells in different layers. Figure FIGREF32 shows that as the layer depth increases, the weight of dynamic convolution cells with small kernel sizes gradually decreases. It demonstrates that lower layers prefer local features while higher layers prefer global features. This corresponds to the finding in BIBREF26.", "MUSE not only gains BLEU scores, but also generates more reasonable sentences and increases the translation quality. We conduct the case study on the De-En dataset and the cases are shown in Table TABREF34 in the Appendix. In case 1, although the baseline Transformer translates many correct words according to the source sentence, the translated sentence is not fluent at all. This indicates that Transformer does not capture the relationship between some words and their neighbors, such as “right” and “clap”. By contrast, MUSE captures them well by combining local convolution with global self-attention. In case 2, the causal adverbial clause is correctly translated by MUSE while Transformer misses the word “why” and fails to translate it." ], [ "Sequence to sequence learning is an important task in machine learning. It involves understanding and generating sequences. Machine translation is the touchstone of sequence to sequence learning. Traditional approaches usually adopt long short-term memory networks BIBREF27, BIBREF28 to learn the representation of sequences. However, these models are either built upon auto-regressive structures requiring longer encoding time or perform worse on real-world natural language processing tasks. Recent studies explore convolutional neural networks (CNN) BIBREF11 or self-attention BIBREF0 to support highly parallel sequence modeling that does not require an auto-regressive structure during encoding, thus bringing large efficiency improvements. They are strong at capturing local or global dependencies.", "There are several studies on combining self-attention and convolution.
However, they do not surpass both convolutional and self-attention mechanisms. BIBREF4 propose to augment convolution with self-attention by directly concatenating them in computer vision tasks. However, as demonstrated in Table TABREF27, their method does not work for the sequence to sequence learning task. Indeed, state-of-the-art models on question answering tasks still rely on self-attention and do not adopt the ideas of QANet BIBREF29. Both self-attention BIBREF13 and convolution BIBREF10 outperform the Evolved Transformer by nearly 2 BLEU scores on En-Fr translation. It seems that learning global and local context through stacking self-attention and convolution layers does not beat either self-attention or convolution models. In contrast, the proposed parallel multi-scale attention outperforms previous convolution or self-attention based models on the main translation tasks, showing its effectiveness for sequence to sequence learning." ], [ "Although the self-attention mechanism has been prevalent in sequence modeling, we find that attention suffers from dispersed weights, especially for long sequences, resulting from insufficient local information.", "To address this problem, we present Parallel Multi-scale Attention (MUSE) and MUSE-simple. MUSE-simple introduces the idea of parallel multi-scale attention into sequence to sequence learning. MUSE fuses self-attention, convolution, and point-wise transformation together to explicitly learn global, local and token level sequence representations. In particular, we find from empirical results that the shared projection plays an important part in its success, and is essential for our multi-scale learning.", "Beyond the inspiring new state-of-the-art results on three major machine translation datasets, detailed analysis and model variants also verify the effectiveness of MUSE.", "For future work, the parallel structure is highly extensible and provides many opportunities to improve these models. In addition, given the success of the shared projection, we would like to explore its detailed effects on contextual representation learning. Finally, we are excited about the future of parallel multi-scale attention and plan to apply this simple but effective idea to other tasks including image and speech." ], [ "This work was supported in part by the National Natural Science Foundation of China (No. 61673028)." ] ] }
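The MUSE module described in this record (self-attention for global context, depth-wise convolution for local context, and a point-wise feed-forward network for token features, computed in parallel over a shared value projection) can be sketched as follows. This is a minimal illustration under several assumptions: single-head attention, a plain (non-dynamic) depth-wise convolution with a fixed kernel size, additive fusion, and assumed dimensions; the paper's full model uses multi-head attention and gated dynamic convolution with selectable kernels.

```python
import torch
import torch.nn as nn


class ParallelMultiScaleBlock(nn.Module):
    def __init__(self, d_model: int = 384, d_ff: int = 768, kernel_size: int = 7):
        super().__init__()
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)  # shared projection for both branches
        # Depth-wise convolution over the sequence dimension (groups = channels).
        self.depth_conv = nn.Conv1d(d_model, d_model, kernel_size,
                                    padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                       nn.Linear(d_ff, d_model))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        v = self.w_v(x)                                        # shared hidden space
        q, k = self.w_q(x), self.w_k(x)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
        global_ctx = attn @ v                                  # self-attention branch
        local_ctx = self.depth_conv(v.transpose(1, 2)).transpose(1, 2)  # conv branch
        token_ctx = self.pointwise(x)                          # token-level branch
        # Parallel fusion plus residual connection and layer normalization.
        return self.norm(x + global_ctx + local_ctx + token_ctx)


# A stack of such blocks would play the role of the MUSE encoder.
out = ParallelMultiScaleBlock()(torch.randn(2, 30, 384))   # -> (2, 30, 384)
```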
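The inference-time matrix fusion mentioned in the efficiency analysis (concatenating the input projections of the parallel branches into one encoder matrix so a single large matmul feeds all branches) can be shown in a few lines. Dimensions are illustrative assumptions; the point is that the trick changes how the work is scheduled on the GPU, not the amount of work.

```python
import torch

d_model, d_ff, seq_len = 512, 2048, 30
W_q, W_k, W_v = (torch.randn(d_model, d_model) for _ in range(3))
W_1 = torch.randn(d_model, d_ff)

W_enc = torch.cat([W_q, W_k, W_v, W_1], dim=1)   # (d_model, 3 * d_model + d_ff)

x = torch.randn(1, seq_len, d_model)
fused = x @ W_enc                                # one GPU-friendly matmul
q, k, v, ff_in = fused.split([d_model, d_model, d_model, d_ff], dim=-1)

# The split slices match the separately computed projections.
assert torch.allclose(q, x @ W_q, rtol=1e-4, atol=1e-4)
```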
{ "question": [ "What evaluation metric is used?", "What datasets are used?", "What are three main machine translation tasks?", "How big is improvement in performance over Transformers?" ], "question_id": [ "6e4505609a280acc45b0a821755afb1b3b518ffd", "9bd938859a8b063903314a79f09409af8801c973", "68ba5bf18f351e8c83fae7b444cc50bef7437f13", "f6a1125c5621a2f32c9bcdd188dff14efa096083" ], "nlp_background": [ "two", "two", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "The BLEU metric " ], "yes_no": null, "free_form_answer": "", "evidence": [ "During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation." ], "highlighted_evidence": [ "The BLEU metric is adopted to evaluate the model performance during evaluation." ] } ], "annotation_id": [ "80f7986f576e8d31ae1d31e2b5367edac7b0368d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "WMT14 En-Fr and En-De datasets", "IWSLT De-En and En-Vi datasets" ], "yes_no": null, "free_form_answer": "", "evidence": [ "WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model. We use the standard split of development set and test set. We use newstest2014 as the test set and use newstest2012 +newstest2013 as the development set. Following BIBREF11, we also adopt a joint source and target BPE factorization with the vocabulary size of $40K$. For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$. The test and validation datasets we used are the same as BIBREF0.", "IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs. For the En-Vi task, we build a dictionary including all source and target tokens. The vocabulary size for English is $17.2K$, and the vocabulary size for the Vietnamese is $6.8K$." 
], "highlighted_evidence": [ "WMT14 En-Fr and En-De datasets The WMT 2014 English-French translation dataset, consisting of $36M$ sentence pairs, is adopted as a big dataset to test our model.", "For medium dataset, we borrow the setup of BIBREF0 and adopt the WMT 2014 English-German translation dataset which consists of $4.5M$ sentence pairs, the BPE vocabulary size is set to $32K$.", "IWSLT De-En and En-Vi datasets Besides, we perform experiments on two small IWSLT datasets to test the small version of MUSE with other comparable models. The IWSLT 2014 German-English translation dataset consists of $160k$ sentence pairs. We also adopt a joint source and target BPE factorization with the vocabulary size of $32K$. The IWSLT 2015 English-Vietnamese translation dataset consists of $133K$ training sentence pairs." ] } ], "annotation_id": [ "c12e2988aa912fff95d557673c627c1b1536bc38" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "De-En, En-Fr and En-Vi translation tasks" ], "yes_no": null, "free_form_answer": "", "evidence": [ "During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks. The length penalty is set to 0.8 for En-Fr according to the validation results, 1 for the two small datasets following the default setting of BIBREF14. We do not tune beam width and length penalty but use the setting reported in BIBREF0. The BLEU metric is adopted to evaluate the model performance during evaluation." ], "highlighted_evidence": [ "During inference, we adopt beam search with a beam size of 5 for De-En, En-Fr and En-Vi translation tasks." ] } ], "annotation_id": [ "a6ee673b81a1c88bb9be0d6e87732b611cda1e2f" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "2.2 BLEU gains" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As shown in Table TABREF24, MUSE outperforms all previously models on En-De and En-Fr translation, including both state-of-the-art models of stand alone self-attention BIBREF0, BIBREF13, and convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that either self-attention or convolution alone is not enough for sequence to sequence learning. The proposed parallel multi-scale attention improves over them both on En-De and En-Fr.", "Compared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation." ], "highlighted_evidence": [ "As shown in Table TABREF24, MUSE outperforms all previously models on En-De and En-Fr translation, including both state-of-the-art models of stand alone self-attention BIBREF0, BIBREF13, and convolutional models BIBREF11, BIBREF15, BIBREF10. This result shows that either self-attention or convolution alone is not enough for sequence to sequence learning. The proposed parallel multi-scale attention improves over them both on En-De and En-Fr.\n\nCompared to Evolved Transformer BIBREF19 which is constructed by NAS and also mixes convolutions of different kernel size, MUSE achieves 2.2 BLEU gains in En-Fr translation." ] } ], "annotation_id": [ "0ba22733c6dbfbff8382e85c59fc336137d5cbf9" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: The left figure shows that the performance drops largely with the increase of sentence length on the De-En dataset. The right figure shows the attention map from the 3-th encoder layer. As we can see, the attention map is too dispersed to capture sufficient information. For example, “[EOS]”, contributing little to word alignment, is surprisingly over attended.", "Figure 2: Multi-scale attention hybrids point-wise transformation, convolution, and self-attention to learn multi-scale sequence representations in parallel. We project convolution and self-attention into the same space to learn contextual representations.", "Table 1: MUSE-large outperforms all previous models under the standard training and evaluation setting on WMT14 En-De and WMT14 En-Fr datasets.", "Table 2: MUSE-base outperforms previous state-of-the-art models on IWSLT De-En translation datasets and outperforms previous models without BPE processing on IWSLT En-Vi.", "Table 3: Comparisons between MUSE and its variants on the IWSLT 2015 De-En translation task.", "Table 4: The comparison between the inference speed of MUSE and Transformer.", "Figure 3: BLEU scores of models on different groups with different source sentence lengths. The experiments are conducted on the De-En dataset. MUSE performs better than Transformer, especially on long sentences.", "Figure 4: Dynamically selected kernels at each layer: The blue bars represent the ratio between the percentage of the convolution with smaller kernel sizes and the percentage of the convolution with large kernel sizes.", "Table 5: Case study on the De-En dataset. The red bolded words denote the wrong translation and blue bolded words denote the correct translation. In case 1, transformer fails to capture the relationship between some words and their neighbors, such as “right” and “clap”. In case 2, the cause adverbial clause is correctly translated by MUSE while transformer misses the word “why” and fails to translate it." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "6-Table1-1.png", "6-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "8-Figure3-1.png", "8-Figure4-1.png", "12-Table5-1.png" ] }
1805.00760
Aspect Term Extraction with History Attention and Selective Transformation
Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews. We present a new framework for tackling ATE that exploits two useful clues, namely the opinion summary and the aspect detection history. The opinion summary is distilled from the whole input sentence, conditioned on the current token for aspect prediction, so the tailor-made summary can help the aspect prediction on that token. The aspect detection history is distilled from the previous aspect predictions, so as to leverage coordinate structures and tagging-schema constraints to improve the current prediction. Experimental results over four benchmark datasets clearly demonstrate that our framework outperforms all state-of-the-art methods.
{ "section_name": [ "Introduction", "The ATE Task", "Model Description", "Joint Training", "Datasets", "Comparisons", "Settings", "Main Results", "Ablation Study", "Attention Visualization and Case Study", "Related Work", "Concluding Discussions" ], "paragraphs": [ [ "Aspect-Based Sentiment Analysis (ABSA) involves detecting opinion targets and locating opinion indicators in sentences in product review texts BIBREF0 . The first sub-task, called Aspect Term Extraction (ATE), is to identify the phrases targeted by opinion indicators in review sentences. For example, in the sentence “I love the operating system and preloaded software”, the words “operating system” and “preloaded software” should be extracted as aspect terms, and the sentiment on them is conveyed by the opinion word “love”. According to the task definition, for a term/phrase being regarded as an aspect, it should co-occur with some “opinion words” that indicate a sentiment polarity on it BIBREF1 .", "Many researchers formulated ATE as a sequence labeling problem or a token-level classification problem. Traditional sequence models such as Conditional Random Fields (CRFs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , Long Short-Term Memory Networks (LSTMs) BIBREF6 and classification models such as Support Vector Machine (SVM) BIBREF7 have been applied to tackle the ATE task, and achieved reasonable performance. One drawback of these existing works is that they do not exploit the fact that, according to the task definition, aspect terms should co-occur with opinion-indicating words. Thus, the above methods tend to output false positives on those frequently used aspect terms in non-opinionated sentences, e.g., the word “restaurant” in “the restaurant was packed at first, so we waited for 20 minutes”, which should not be extracted because the sentence does not convey any opinion on it.", "There are a few works that consider opinion terms when tackling the ATE task. BIBREF8 proposed Recursive Neural Conditional Random Fields (RNCRF) to explicitly extract aspects and opinions in a single framework. Aspect-opinion relation is modeled via joint extraction and dependency-based representation learning. One assumption of RNCRF is that dependency parsing will capture the relation between aspect terms and opinion words in the same sentence so that the joint extraction can benefit. Such assumption is usually valid for simple sentences, but rather fragile for some complicated structures, such as clauses and parenthesis. Moreover, RNCRF suffers from errors of dependency parsing because its network construction hinges on the dependency tree of inputs. CMLA BIBREF9 models aspect-opinion relation without using syntactic information. Instead, it enables the two tasks to share information via attention mechanism. For example, it exploits the global opinion information by directly computing the association score between the aspect prototype and individual opinion hidden representations and then performing weighted aggregation. However, such aggregation may introduce noise. To some extent, this drawback is inherited from the attention mechanism, as also observed in machine translation BIBREF10 and image captioning BIBREF11 .", "To make better use of opinion information to assist aspect term extraction, we distill the opinion information of the whole input sentence into opinion summary, and such distillation is conditioned on a particular current token for aspect prediction. 
Then, the opinion summary is employed as part of features for the current aspect prediction. Taking the sentence “the restaurant is cute but not upscale” as an example, when our model performs the prediction for the word “restaurant”, it first generates an opinion summary of the entire sentence conditioned on “restaurant”. Due to the strong correlation between “restaurant' and “upscale” (an opinion word), the opinion summary will convey more information of “upscale” so that it will help predict “restaurant” as an aspect with high probability. Note that the opinion summary is built on the initial opinion features coming from an auxiliary opinion detection task, and such initial features already distinguish opinion words to some extent. Moreover, we propose a novel transformation network that helps strengthen the favorable correlations, e.g. between “restaurant' and “upscale”, so that the produced opinion summary involves less noise.", "Besides the opinion summary, another useful clue we explore is the aspect prediction history due to the inspiration of two observations: (1) In sequential labeling, the predictions at the previous time steps are useful clues for reducing the error space of the current prediction. For example, in the B-I-O tagging (refer to Section SECREF4 ), if the previous prediction is “O”, then the current prediction cannot be “I”; (2) It is observed that some sentences contain multiple aspect terms. For example, “Apple is unmatched in product quality, aesthetics, craftmanship, and customer service” has a coordinate structure of aspects. Under this structure, the previously predicted commonly-used aspect terms (e.g., “product quality”) can guide the model to find the infrequent aspect terms (e.g., “craftmanship”). To capture the above clues, our model distills the information of the previous aspect detection for making a better prediction on the current state.", "Concretely, we propose a framework for more accurate aspect term extraction by exploiting the opinion summary and the aspect detection history. Firstly, we employ two standard Long-Short Term Memory Networks (LSTMs) for building the initial aspect and opinion representations recording the sequential information. To encode the historical information into the initial aspect representations at each time step, we propose truncated history attention to distill useful features from the most recent aspect predictions and generate the history-aware aspect representations. We also design a selective transformation network to obtain the opinion summary at each time step. Specifically, we apply the aspect information to transform the initial opinion representations and apply attention over the transformed representations to generate the opinion summary. Experimental results show that our framework can outperform state-of-the-art methods." ], [ "Given a sequence INLINEFORM0 of INLINEFORM1 words, the ATE task can be formulated as a token/word level sequence labeling problem to predict an aspect label sequence INLINEFORM2 , where each INLINEFORM3 comes from a finite label set INLINEFORM4 which describes the possible aspect labels. As shown in the example below:", " ", " INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 denote beginning of, inside and outside of the aspect span respectively. Note that in commonly-used datasets such as BIBREF12 , the gold standard opinions are usually not annotated." 
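To make the B-I-O formulation above concrete, the following minimal Python sketch (not from the paper) decodes a predicted tag sequence into aspect-term spans; the function name and the example tags are illustrative only.

# Minimal sketch: decoding a B/I/O tag sequence into aspect-term spans.
def decode_bio(tokens, tags):
    """Return the aspect phrases marked by B/I tags."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # a new aspect term starts here
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continue the current aspect term
            current.append(token)
        else:                          # "O" (or an ill-formed "I") closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["I", "love", "the", "operating", "system", "and", "preloaded", "software"]
tags   = ["O", "O",    "O",   "B",         "I",      "O",   "B",         "I"]
print(decode_bio(tokens, tags))  # ['operating system', 'preloaded software']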
], [ "As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step.", "As Recurrent Neural Networks can record the sequential information BIBREF13 , we employ two vanilla LSTMs to build the initial token-level contextualized representations for sequence labeling of the ATE task and the auxiliary opinion word detection task respectively. For simplicity, let INLINEFORM0 denote an LSTM unit where INLINEFORM1 is the task indicator. In the following sections, without specification, the symbols with superscript INLINEFORM2 and INLINEFORM3 are the notations used in the ATE task and the opinion detection task respectively. We use Bi-Directional LSTM to generate the initial token-level representations INLINEFORM4 ( INLINEFORM5 is the dimension of hidden states): DISPLAYFORM0 ", "", "In principle, RNN can memorize the entire history of the predictions BIBREF13 , but there is no mechanism to exploit the relation between previous predictions and the current prediction. As discussed above, such relation could be useful because of two reasons: (1) reducing the model's error space in predicting the current label by considering the definition of B-I-O schema, (2) improving the prediction accuracy for multiple aspects in one coordinate structure.", "We propose a Truncated History-Attention (THA) component (the THA block in Figure FIGREF3 ) to explicitly model the aspect-aspect relation. Specifically, THA caches the most recent INLINEFORM0 hidden states. At the current prediction time step INLINEFORM1 , THA calculates the normalized importance score INLINEFORM2 of each cached state INLINEFORM3 ( INLINEFORM4 ) as follows: DISPLAYFORM0 ", " DISPLAYFORM0 ", "", " INLINEFORM0 denotes the previous history-aware aspect representation (refer to Eq. EQREF12 ). INLINEFORM1 can be learned during training. INLINEFORM2 are parameters associated with previous aspect representations, current aspect representation and previous history-aware aspect representations respectively. Then, the aspect history INLINEFORM3 is obtained as follows: DISPLAYFORM0 ", "", "To benefit from the previous aspect detection, we consolidate the hidden aspect representation with the distilled aspect history to generate features for the current prediction. 
Specifically, we adopt a way similar to the residual block BIBREF14 , which is shown to be useful in refining word-level features in Machine Translation BIBREF15 and Part-Of-Speech tagging BIBREF16 , to calculate the history-aware aspect representations INLINEFORM0 at the time step INLINEFORM1 : DISPLAYFORM0 ", "where ReLU is the relu activation function.", "Previous works show that modeling aspect-opinion association is helpful to improve the accuracy of ATE, as exemplified in employing attention mechanism for calculating the opinion information BIBREF9 , BIBREF17 . MIN BIBREF17 focuses on a few surrounding opinion representations and computes their importance scores according to the proximity and the opinion salience derived from a given opinion lexicon. However, it is unable to capture the long-range association between aspects and opinions. Besides, the association is not strong because only the distance information is modeled. Although CMLA BIBREF9 can exploit global opinion information for aspect extraction, it may suffer from the noise brought in by attention-based feature aggregation. Taking the aspect term “fish” in “Furthermore, while the fish is unquestionably fresh, rolls tend to be inexplicably bland.” as an example, it might be enough to tell “fish” is an aspect given the appearance of the strongly related opinion “fresh”. However, CMLA employs conventional attention and does not have a mechanism to suppress the noise caused by other terms such as “rolls”. Dependency parsing seems to be a good solution for finding the most related opinion and indeed it was utilized in BIBREF8 , but the parser is prone to generating mistakes when processing the informal online reviews, as discussed in BIBREF17 .", "To make use of opinion information and suppress the possible noise, we propose a novel Selective Transformation Network (STN) (the STN block in Figure FIGREF3 ), and insert it before attending to global opinion features so that more important features with respect to a given aspect candidate will be highlighted. Specifically, STN first calculates a new opinion representation INLINEFORM0 given the current aspect feature INLINEFORM1 as follows: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are parameters for history-aware aspect representations and opinion representations respectively. They map INLINEFORM2 and INLINEFORM3 to the same subspace. Here the aspect feature INLINEFORM4 acts as a “filter” to keep more important opinion features. Equation EQREF14 also introduces a residual block to obtain a better opinion representation INLINEFORM5 , which is conditioned on the current aspect feature INLINEFORM6 .", "For distilling the global opinion summary, we introduce a bi-linear term to calculate the association score between INLINEFORM0 and each INLINEFORM1 : DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are parameters of the Bi-Linear Attention layer. The improved opinion summary INLINEFORM2 at the time INLINEFORM3 is obtained via the weighted sum of the opinion representations: DISPLAYFORM0 ", "Finally, we concatenate the opinion summary INLINEFORM0 and the history-aware aspect representation INLINEFORM1 and feed it into the top-most fully-connected (FC) layer for aspect prediction: DISPLAYFORM0 DISPLAYFORM1 ", "Note that our framework actually performs a multi-task learning, i.e. predicting both aspects and opinions. 
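The selective transformation and bi-linear attention steps described above can be summarized in a rough, runnable PyTorch sketch. Since the equations appear only as placeholders in this extraction, the exact parameterization below (weight shapes, where the ReLU and the residual connection are applied) is an assumption rather than the authors' implementation.

# Rough sketch of STN + bi-linear attention for one time step t; the precise
# equations are elided in this text, so this parameterization is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OpinionSummary(nn.Module):
    def __init__(self, dim, num_tags):
        super().__init__()
        self.w_asp = nn.Linear(dim, dim, bias=False)        # maps the aspect feature
        self.w_op = nn.Linear(dim, dim, bias=False)         # maps each opinion feature
        self.bilinear = nn.Parameter(torch.randn(dim, dim) * 0.01)
        self.fc = nn.Linear(2 * dim, num_tags)               # aspect tag prediction

    def forward(self, aspect_t, opinion_h):
        # aspect_t: (dim,) history-aware aspect feature at step t
        # opinion_h: (T, dim) initial opinion representations of the sentence
        # STN: transform every opinion feature conditioned on the aspect feature,
        # with a residual connection and ReLU.
        transformed = opinion_h + F.relu(self.w_asp(aspect_t) + self.w_op(opinion_h))
        # Bi-linear attention scores between the aspect feature and each
        # transformed opinion representation, then a weighted sum (opinion summary).
        scores = transformed @ self.bilinear @ aspect_t       # (T,)
        weights = F.softmax(scores, dim=0)
        summary = weights @ transformed                       # (dim,)
        # Concatenate the opinion summary with the aspect feature for prediction.
        return self.fc(torch.cat([aspect_t, summary], dim=-1))

The truncated history attention that produces aspect_t is analogous in spirit: scores over the most recently cached aspect states, a softmax, a weighted sum, and a residual-style combination with the current hidden state.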
We regard the initial token-level representations INLINEFORM0 as the features for opinion prediction: DISPLAYFORM0 ", " INLINEFORM0 and INLINEFORM1 are parameters of the FC layers." ], [ "All the components in the proposed framework are differentiable. Thus, our framework can be efficiently trained with gradient methods. We use the token-level cross-entropy error between the predicted distribution INLINEFORM0 ( INLINEFORM1 ) and the gold distribution INLINEFORM2 as the loss function: DISPLAYFORM0 ", "Then, the losses from both tasks are combined to form the training objective of the entire model: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 represent the loss functions for aspect and opinion extractions respectively." ], [ "To evaluate the effectiveness of the proposed framework for the ATE task, we conduct experiments over four benchmark datasets from the SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 . Table TABREF24 shows their statistics. INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain. In these datasets, aspect terms have been labeled by the task organizer.", "Gold standard annotations for opinion words are not provided. Thus, we choose words with strong subjectivity from MPQA to provide the distant supervision BIBREF19 . To compare with the best SemEval systems and the current state-of-the-art methods, we use the standard train-test split in SemEval challenge as shown in Table TABREF24 ." ], [ "We compare our framework with the following methods:", "CRF-1: Conditional Random Fields with basic feature templates.", "CRF-2: Conditional Random Fields with basic feature templates and word embeddings.", "Semi-CRF: First-order Semi-Markov Conditional Random Fields BIBREF20 and the feature templates in BIBREF21 are adopted.", "LSTM: Vanilla bi-directional LSTM with pre-trained word embeddings.", "IHS_RD BIBREF2 , DLIREC BIBREF3 , EliXa BIBREF22 , NLANGP BIBREF4 : The winning systems in the ATE subtask in SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 .", "WDEmb BIBREF5 : Enhanced CRF with word embeddings, dependency path embeddings and linear context embeddings.", "MIN BIBREF17 : MIN consists of three LSTMs. Two LSTMs are employed to model the memory interactions between ATE and opinion detection. The last one is a vanilla LSTM used to predict the subjectivity of the sentence as additional guidance.", "RNCRF BIBREF8 : CRF with high-level representations learned from Dependency Tree based Recursive Neural Network.", "CMLA BIBREF9 : CMLA is a multi-layer architecture where each layer consists of two coupled GRUs to model the relation between aspect terms and opinion words.", "To clarify, our framework aims at extracting aspect terms where the opinion information is employed as auxiliary, while RNCRF and CMLA perform joint extraction of aspects and opinions. Nevertheless, the comparison between our framework and RNCRF/CMLA is still fair, because we do not use manually annotated opinions as used by RNCRF and CMLA, instead, we employ an existing opinion lexicon to provide weak opinion supervision." ], [ "We pre-processed each dataset by lowercasing all words and replace all punctuations with PUNCT. We use pre-trained GloVe 840B vectors BIBREF23 to initialize the word embeddings and the dimension (i.e., INLINEFORM0 ) is 300. 
For out-of-vocabulary words, we randomly sample their embeddings from the uniform distribution INLINEFORM1 as done in BIBREF24. All of the weight matrices except those in LSTMs are initialized from the uniform distribution INLINEFORM2. For the initialization of the matrices in LSTMs, we adopt the Glorot uniform strategy BIBREF25. All biases are initialized to zero.", "The model is trained with SGD. We apply dropout over the ultimate aspect/opinion features and the input word embeddings of the LSTMs. The dropout rates are empirically set to 0.5. With 5-fold cross-validation on the training data of INLINEFORM0, the other hyper-parameters are set as follows: INLINEFORM1, INLINEFORM2; the number of cached historical aspect representations INLINEFORM3 is 5; the learning rate of SGD is 0.07." ], [ "As shown in Table TABREF39, the proposed framework consistently obtains the best scores on all four datasets. Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4% and 1.3% absolute gains on INLINEFORM0, INLINEFORM1, INLINEFORM2 and INLINEFORM3 respectively.", "Our framework outperforms RNCRF, a state-of-the-art model based on dependency parsing, on all datasets. We also notice that RNCRF does not perform well on INLINEFORM0 and INLINEFORM1 (3.7% and 3.9% lower than ours). We find that INLINEFORM2 and INLINEFORM3 contain many informal reviews, so RNCRF's performance degradation is probably due to errors from the dependency parser when processing such informal texts.", "CMLA and MIN do not rely on dependency parsing; instead, they employ attention mechanisms to distill opinion information and help aspect extraction. Our framework consistently performs better than them. The gains presumably come from two sources: (1) in our model, the opinion summary is exploited after performing the selective transformation conditioned on the current aspect features, so the summary can to some extent avoid the noise caused by directly applying conventional attention; (2) our model can discover uncommon aspects under the guidance of commonly-used aspects in coordinate structures, thanks to the history attention.", "CRF with basic feature templates is not strong, so we add CRF-2 as another baseline. As shown in Table TABREF39, CRF-2 with word embeddings achieves much better results than CRF-1 on all datasets. WDEmb, which is also an enhanced CRF-based method using additional dependency context embeddings, performs better than CRF-2. Therefore, the above comparison shows that word embeddings are useful and that embeddings incorporating structure information can further improve the performance." ], [ "To further investigate the efficacy of the key components in our framework, namely THA and STN, we perform an ablation study as shown in the second block of Table TABREF39. The results show that each of THA and STN is helpful for improving the performance, and the contribution of STN is slightly larger than that of THA. “OURS w/o THA & STN” only keeps the basic bi-linear attention. Although it performs reasonably well, it is still less competitive than the strongest baseline (i.e., CMLA), suggesting that using an attention mechanism alone to distill the opinion summary is not enough. After inserting the STN component before the bi-linear attention, i.e. “OURS w/o THA”, we get about 1% absolute gains on each dataset, and then the performance is comparable to CMLA. By adding THA, i.e. 
“OURS”, the performance is further improved, and all state-of-the-art methods are surpassed." ], [ "In Figure FIGREF41 , we visualize the opinion attention scores of the words in two example sentences with the candidate aspects “maitre-D” and “bathroom”. The scores in Figures FIGREF41 and FIGREF41 show that our full model captures the related opinion words very accurately with significantly larger scores, i.e. “incredibly”, “unwelcoming” and “arrogant” for “maitre-D”, and “unfriendly” and “filthy” for “bathroom”. “OURS w/o STN” directly applies attention over the opinion hidden states INLINEFORM0 's, similar to what CMLA does. As shown in Figure FIGREF41 , it captures some unrelated opinion words (e.g. “fine”) and even some non-opinionated words. As a result, it brings in some noise into the global opinion summary, and consequently the final prediction accuracy will be affected. This example demonstrates that the proposed STN works pretty well to help attend to more related opinion words given a particular aspect.", "Some predictions of our model and those of LSTM and OURS w/o THA & STN are given in Table TABREF43 . The models incorporating attention-based opinion summary (i.e., OURS and OURS w/o THA & STN) can better determine if the commonly-used nouns are aspect terms or not (e.g. “device” in the first input), since they make decisions based on the global opinion information. Besides, they are able to extract some infrequent or even misspelled aspect terms (e.g. “survice” in the second input) based on the indicative clues provided by opinion words. For the last three cases, having aspects in coordinate structures (i.e. the third and the fourth) or long aspects (i.e. the fifth), our model can give precise predictions owing to the previous detection clues captured by THA. Without using these clues, the baseline models fail." ], [ "Some initial works BIBREF26 developed a bootstrapping framework for tackling Aspect Term Extraction (ATE) based on the observation that opinion words are usually located around the aspects. BIBREF27 and BIBREF28 performed co-extraction of aspect terms and opinion words based on sophisticated syntactic patterns. However, relying on syntactic patterns suffers from parsing errors when processing informal online reviews. To avoid this drawback, BIBREF29 , BIBREF30 employed word-based translation models. Specifically, these models formulated the ATE task as a monolingual word alignment process and aspect-opinion relation is captured by alignment links rather than word dependencies. The ATE task can also be formulated as a token-level sequence labeling problem. The winning systems BIBREF2 , BIBREF22 , BIBREF4 of SemEval ABSA challenges employed traditional sequence models, such as Conditional Random Fields (CRFs) and Maximum Entropy (ME), to detect aspects. Besides heavy feature engineering, they also ignored the consideration of opinions.", "Recently, neural network based models, such as LSTM-based BIBREF6 and CNN-based BIBREF31 methods, become the mainstream approach. Later on, some neural models jointly extracting aspect and opinion were proposed. BIBREF8 performs the two task in a single Tree-Based Recursive Neural Network. Their network structure depends on dependency parsing, which is prone to error on informal reviews. CMLA BIBREF9 consists of multiple attention layers on top of standard GRUs to extract the aspects and opinion words. 
Similarly, MIN BIBREF17 employs multiple LSTMs to interactively perform aspect term extraction and opinion word extraction in a multi-task learning framework. Our framework differs from them in two respects: (1) it filters the opinion summary by incorporating the aspect features at each time step into the original opinion representations; (2) it exploits the history of aspect detection to capture coordinate structures and previous aspect features." ], [ "For more accurate aspect term extraction, we explored two important types of information, namely the aspect detection history and the opinion summary, and designed two corresponding components: truncated history attention and a selective transformation network. Experimental results show that our model outperforms joint extraction methods such as RNCRF and CMLA on ATE. This suggests that joint extraction sacrifices the accuracy of aspect prediction, even though the ground-truth opinion words those methods rely on were manually annotated by their authors. Moreover, those joint extraction methods do not model the correspondence between the extracted aspect terms and opinion words. Therefore, the necessity of such joint extraction should be called into question, given the experimental findings in this paper." ] ] }
{ "question": [ "How do they determine the opinion summary?", "Do they explore how useful is the detection history and opinion summary?", "Which dataset(s) do they use to train the model?", "By how much do they outperform state-of-the-art methods?" ], "question_id": [ "282aa4e160abfa7569de7d99b8d45cabee486ba4", "ecfb2e75eb9a8eba8f640a039484874fa0d2fceb", "a6950c22c7919f86b16384facc97f2cf66e5941d", "54be3541cfff6574dba067f1e581444537a417db" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7", "2cfd959e433f290bb50b55722370f0d22fe090b7" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "the weighted sum of the new opinion representations, according to their associations with the current aspect representation" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step." ], "highlighted_evidence": [ "As shown in Figure FIGREF3 , our model contains two key components, namely Truncated History-Attention (THA) and Selective Transformation Network (STN), for capturing aspect detection history and opinion summary respectively. THA and STN are built on two LSTMs that generate the initial word representations for the primary ATE task and the auxiliary opinion detection task respectively. THA is designed to integrate the information of aspect detection history into the current aspect feature to generate a new history-aware aspect representation. STN first calculates a new opinion representation conditioned on the current aspect candidate. Then, we employ a bi-linear attention network to calculate the opinion summary as the weighted sum of the new opinion representations, according to their associations with the current aspect representation. Finally, the history-aware aspect representation and the opinion summary are concatenated as features for aspect prediction of the current time step." 
] } ], "annotation_id": [ "b3596aa57239b4bc334f9d6dbd92d867f46924fa" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Ablation Study", "To further investigate the efficacy of the key components in our framework, namely, THA and STN, we perform ablation study as shown in the second block of Table TABREF39 . The results show that each of THA and STN is helpful for improving the performance, and the contribution of STN is slightly larger than THA. “OURS w/o THA & STN” only keeps the basic bi-linear attention. Although it performs not bad, it is still less competitive compared with the strongest baseline (i.e., CMLA), suggesting that only using attention mechanism to distill opinion summary is not enough. After inserting the STN component before the bi-linear attention, i.e. “OURS w/o THA”, we get about 1% absolute gains on each dataset, and then the performance is comparable to CMLA. By adding THA, i.e. “OURS”, the performance is further improved, and all state-of-the-art methods are surpassed." ], "highlighted_evidence": [ "Ablation Study\nTo further investigate the efficacy of the key components in our framework, namely, THA and STN, we perform ablation study as shown in the second block of Table TABREF39 . The results show that each of THA and STN is helpful for improving the performance, and the contribution of STN is slightly larger than THA. “OURS w/o THA & STN” only keeps the basic bi-linear attention. Although it performs not bad, it is still less competitive compared with the strongest baseline (i.e., CMLA), suggesting that only using attention mechanism to distill opinion summary is not enough. After inserting the STN component before the bi-linear attention, i.e. “OURS w/o THA”, we get about 1% absolute gains on each dataset, and then the performance is comparable to CMLA. By adding THA, i.e. “OURS”, the performance is further improved, and all state-of-the-art methods are surpassed." ] } ], "annotation_id": [ "0bc1e17a158181ea43988d2b0e9aeb1728e88f84" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain." ], "yes_no": null, "free_form_answer": "", "evidence": [ "To evaluate the effectiveness of the proposed framework for the ATE task, we conduct experiments over four benchmark datasets from the SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 . Table TABREF24 shows their statistics. INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain. In these datasets, aspect terms have been labeled by the task organizer." ], "highlighted_evidence": [ "To evaluate the effectiveness of the proposed framework for the ATE task, we conduct experiments over four benchmark datasets from the SemEval ABSA challenge BIBREF1 , BIBREF18 , BIBREF12 . Table TABREF24 shows their statistics. INLINEFORM0 (SemEval 2014) contains reviews of the laptop domain and those of INLINEFORM1 (SemEval 2014), INLINEFORM2 (SemEval 2015) and INLINEFORM3 (SemEval 2016) are for the restaurant domain. In these datasets, aspect terms have been labeled by the task organizer." 
] } ], "annotation_id": [ "3cc0e411389baa79a3e68f9fcbf063af490a28a8" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively." ], "yes_no": null, "free_form_answer": "", "evidence": [ "As shown in Table TABREF39 , the proposed framework consistently obtains the best scores on all of the four datasets. Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively.", "Our framework can outperform RNCRF, a state-of-the-art model based on dependency parsing, on all datasets. We also notice that RNCRF does not perform well on INLINEFORM0 and INLINEFORM1 (3.7% and 3.9% inferior than ours). We find that INLINEFORM2 and INLINEFORM3 contain many informal reviews, thus RNCRF's performance degradation is probably due to the errors from the dependency parser when processing such informal texts." ], "highlighted_evidence": [ "As shown in Table TABREF39 , the proposed framework consistently obtains the best scores on all of the four datasets. Compared with the winning systems of SemEval ABSA, our framework achieves 5.0%, 1.6%, 1.4%, 1.3% absolute gains on INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 respectively.\n\nOur framework can outperform RNCRF, a state-of-the-art model based on dependency parsing, on all datasets. We also notice that RNCRF does not perform well on INLINEFORM0 and INLINEFORM1 (3.7% and 3.9% inferior than ours). We find that INLINEFORM2 and INLINEFORM3 contain many informal reviews, thus RNCRF's performance degradation is probably due to the errors from the dependency parser when processing such informal texts." ] } ], "annotation_id": [ "dc171738869398a1cfa21a216ca5e2e65487621f" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: Framework architecture. The callouts on both sides describe how THA and STN work at each time step. Color printing is preferred.", "Table 1: Statistics of datasets.", "Table 2: Experimental results (F1 score, %). The first four methods are implemented by us, and other results without markers are copied from their papers. The results with ‘*’ are reproduced by us with the released code by the authors. For RNCRF, the result with ‘\\’ is copied from the paper of CMLA (they have the same authors). ‘-’ indicates the results were not available in their papers.", "Figure 2: Opinion attention scores (i.e. wi,t in Equation 7) with respect to “maitre-D” and “bathroom”.", "Table 3: Case analysis. In the input sentences, the gold standard aspect terms are underlined and in red." ], "file": [ "3-Figure1-1.png", "4-Table1-1.png", "5-Table2-1.png", "6-Figure2-1.png", "6-Table3-1.png" ] }
1909.05358
Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset
A significant barrier to progress in data-driven approaches to building dialog systems is the lack of high-quality, goal-oriented conversational data. To help satisfy this elementary requirement, we introduce the initial release of the Taskmaster-1 dataset, which includes 13,215 task-based dialogs spanning six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers interact to complete the task, while the second is "self-dialog", in which crowdsourced workers write the entire dialog themselves. We do not restrict the workers to detailed scripts or to a small knowledge base, and hence we observe that our dataset contains more realistic and diverse conversations than existing datasets. We offer several baseline models, including state-of-the-art neural seq2seq architectures, with benchmark performance as well as qualitative human evaluations. Dialogs are labeled with API calls and arguments, a simple and cost-effective approach that avoids the need for complex annotation schemas. The layer of abstraction between the dialog model and the service provider API allows a given model to interact with multiple services that provide similar functionality. Finally, the dataset will evoke interest in written vs. spoken language, discourse patterns, error handling and other linguistic phenomena related to dialog system research, development and design.
{ "section_name": [ "Introduction", "Related work ::: Human-machine vs. human-human dialog", "Related work ::: The Wizard of Oz (WOz) Approach and MultiWOZ", "The Taskmaster Corpus ::: Overview", "The Taskmaster Corpus ::: Two-person, spoken dataset", "The Taskmaster Corpus ::: Two-person, spoken dataset ::: WOz platform and data pipeline", "The Taskmaster Corpus ::: Two-person, spoken dataset ::: Agents, workers and training", "The Taskmaster Corpus ::: Self-dialogs (one-person written dataset)", "The Taskmaster Corpus ::: Self-dialogs (one-person written dataset) ::: Task scenarios and instructions", "The Taskmaster Corpus ::: Self-dialogs (one-person written dataset) ::: Pros and cons of self-dialogs", "The Taskmaster Corpus ::: Annotation", "Dataset Analysis ::: Self-dialogs vs MultiWOZ", "Dataset Analysis ::: Self-dialogs vs Two-person", "Dataset Analysis ::: Baseline Experiments: Response Generation", "Dataset Analysis ::: Baseline Experiments: Argument Prediction", "Conclusion" ], "paragraphs": [ [ "Voice-based “personal assistants\" such as Apple's SIRI, Microsoft's Cortana, Amazon Alexa, and the Google Assistant have finally entered the mainstream. This development is generally attributed to major breakthroughs in speech recognition and text-to-speech (TTS) technologies aided by recent progress in deep learning BIBREF0, exponential gains in compute power BIBREF1, BIBREF2, and the ubiquity of powerful mobile devices. The accuracy of machine learned speech recognizers BIBREF3 and speech synthesizers BIBREF4 are good enough to be deployed in real-world products and this progress has been driven by publicly available labeled datasets. However, conspicuously absent from this list is equal progress in machine learned conversational natural language understanding (NLU) and generation (NLG). The NLU and NLG components of dialog systems starting from the early research work BIBREF5 to the present commercially available personal assistants largely rely on rule-based systems. The NLU and NLG systems are often carefully programmed for very narrow and specific cases BIBREF6, BIBREF7. General understanding of natural spoken behaviors across multiple dialog turns, even in single task-oriented situations, is by most accounts still a long way off. In this way, most of these products are very much hand crafted, with inherent constraints on what users can say, how the system responds and the order in which the various subtasks can be completed. They are high precision but relatively low coverage. Not only are such systems unscalable, but they lack the flexibility to engage in truly natural conversation.", "Yet none of this is surprising. Natural language is heavily context dependent and often ambiguous, especially in multi-turn conversations across multiple topics. It is full of subtle discourse cues and pragmatic signals whose patterns have yet to be thoroughly understood. Enabling an automated system to hold a coherent task-based conversation with a human remains one of computer science's most complex and intriguing unsolved problems BIBREF5. In contrast to more traditional NLP efforts, interest in statistical approaches to dialog understanding and generation aided by machine learning has grown considerably in the last couple of years BIBREF8, BIBREF9, BIBREF10. 
However, the dearth of high-quality, goal-oriented dialog data is considered a major hindrance to more significant progress in this area BIBREF9, BIBREF11.", "To help solve the data problem, we present Taskmaster-1, a dataset consisting of 13,215 dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations. For the spoken dialogs, we created a “Wizard of Oz” (WOz) system BIBREF12 to collect two-person, spoken conversations. Crowdsourced workers playing the “user\" interacted with human operators playing the “digital assistant” using a web-based interface. In this way, users were led to believe they were interacting with an automated system while it was in fact a human, allowing them to express their turns in natural ways but in the context of an automated interface. We refer to this spoken dialog type as “two-person dialogs\". For the written dialogs, we engaged crowdsourced workers to write the full conversation themselves based on scenarios outlined for each task, thereby playing the roles of both the user and the assistant. We refer to this written dialog type as “self-dialogs\". In a departure from traditional annotation techniques BIBREF10, BIBREF8, BIBREF13, dialogs are labeled with simple API calls and arguments. This technique is much easier for annotators to learn and simpler to apply. As such it is more cost-effective and, in addition, the same model can be used for multiple service providers.", "Taskmaster-1 has richer and more diverse language than the current popular benchmark in task-oriented dialog, MultiWOZ BIBREF13. Table TABREF2 shows that Taskmaster-1 has more unique words and is more difficult for language models to fit. We also find that Taskmaster-1 is more realistic than MultiWOZ. Specifically, the two-person dialogs in Taskmaster-1 involve more real-world entities than seen in MultiWOZ since we do not restrict conversations to a small knowledge base. Beyond the corpus and the methodologies used to create it, we present several baseline models including state-of-the-art neural seq2seq architectures together with perplexity and BLEU scores. We also provide qualitative human performance evaluations for these models and find that automatic evaluation metrics correlate well with human judgments. We will publicly release our corpus containing conversations, API call and argument annotations, and also the human judgments." ], [ "BIBREF14 discuss the major features and differences among the existing offerings in an exhaustive and detailed survey of available corpora for data-driven learning of dialog systems. One important distinction covered is that of human-human vs. human-machine dialog data, each having its advantages and disadvantages. Many of the existing task-based datasets have been generated from deployed dialog systems such as the Let’s Go Bus Information System BIBREF15 and the various Dialog State Tracking Challenges (DSTCs) BIBREF16. However, it is doubtful that new data-driven systems built with this type of corpus would show much improvement since they would be biased by the existing system and likely mimic its limitations BIBREF17. Since the ultimate goal is to be able to handle complex human language behaviors, it would seem that human-human conversational data is the better choice for spoken dialog system development BIBREF13. 
However, learning from purely human-human based corpora presents challenges of its own. In particular, human conversation has a different distribution of understanding errors and exhibits turn-taking idiosyncrasies which may not be well suited for interaction with a dialog system BIBREF17, BIBREF14." ], [ "The WOz framework, first introduced by BIBREF12 as a methodology for iterative design of natural language interfaces, presents a more effective approach to human-human dialog collection. In this setup, users are led to believe they are interacting with an automated assistant but in fact it is a human behind the scenes that controls the system responses. Given the human-level natural language understanding, users quickly realize they can comfortably and naturally express their intent rather than having to modify behaviors as is normally the case with a fully automated assistant. At the same time, the machine-oriented context of the interaction, i.e. the use of TTS and slower turn taking cadence, prevents the conversation from becoming fully fledged, overly complex human discourse. This creates an idealized spoken environment, revealing how users would openly and candidly express themselves with an automated assistant that provided superior natural language understanding.", "Perhaps the most relevant work to consider here is the recently released MultiWOZ dataset BIBREF13, since it is similar in size, content and collection methodologies. MultiWOZ has roughly 10,000 dialogs which feature several domains and topics. The dialogs are annotated with both dialog states and dialog acts. MultiWOZ is an entirely written corpus and uses crowdsourced workers for both assistant and user roles. In contrast, Taskmaster-1 has roughly 13,000 dialogs spanning six domains and annotated with API arguments. The two-person spoken dialogs in Taskmaster-1 use crowdsourcing for the user role but trained agents for the assistant role. The assistant's speech is played to the user via TTS. The remaining 7,708 conversations in Taskmaster-1 are self-dialogs, in which crowdsourced workers write the entire conversation themselves. As BIBREF18, BIBREF19 show, self dialogs are surprisingly rich in content." ], [ "There are several key attributes that make Taskmaster-1 both unique and effective for data-driven approaches to building dialog systems and for other research.", "", "Spoken and written dialogs: While the spoken sources more closely reflect conversational language BIBREF20, written dialogs are significantly cheaper and easier to gather. This allows for a significant increase in the size of the corpus and in speaker diversity.", "", "Goal-oriented dialogs: All dialogs are based on one of six tasks: ordering pizza, creating auto repair appointments, setting up rides for hire, ordering movie tickets, ordering coffee drinks and making restaurant reservations.", "", "Two collection methods: The two-person dialogs and self-dialogs each have pros and cons, revealing interesting contrasts.", "", "Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors.", "", "API-based annotation: The dataset uses a simple annotation schema providing sufficient grounding for the data while making it easy for workers to apply labels consistently.", "", "Size: The total of 13,215 dialogs in this corpus is on par with similar, recently released datasets such as MultiWOZ BIBREF13." 
], [ "In order to replicate a two-participant, automated digital assistant experience, we built a WOz platform that pairs agents playing the digital assistant with crowdsourced workers playing the user in task-based conversational scenarios. An example dialog from this dataset is given in Figure FIGREF5." ], [ "While it is beyond the scope of this work to describe the entire system in detail, there are several platform features that help illustrate how the process works.", "", "Modality: The agents playing the assistant type their input which is in turn played to the user via text-to-speech (TTS) while the crowdsourced workers playing the user speak aloud to the assistant using their laptop and microphone. We use WebRTC to establish the audio channel. This setup creates a digital assistant-like communication style.", "", "Conversation and user quality control: Once the task is completed, the agents tag each conversation as either successful or problematic depending on whether the session had technical glitches or user behavioral issues. We are also then able to root out problematic users based on this logging.", "", "Agent quality control: Agents are required to login to the system which allows us to monitor performance including the number and length of each session as well as their averages.", "", "User queuing: When there are more users trying to connect to the system than available agents, a queuing mechanism indicates their place in line and connects them automatically once they move to the front of the queue.", "", "Transcription: Once complete, the user's audio-only portion of the dialog is transcribed by a second set of workers and then merged with the assistant's typed input to create a full text version of the dialog. Finally, these conversations are checked for transcription errors and typos and then annotated, as described in Section SECREF48." ], [ "Both agents and crowdsourced workers are given written instructions prior to the session. Examples of each are given in Figure FIGREF6 and Figure FIGREF23. The instructions continue to be displayed on screen to the crowdsourced workers while they interact with the assistant. Instructions are modified at times (for either participant or both) to ensure broader coverage of dialog scenarios that are likely to occur in actual user-assistant interactions. For example, in one case users were asked to change their mind after ordering their first item and in another agents were instructed to tell users that a given item was not available. Finally, in their instructions, crowdsourced workers playing the user are told they will be engaging in conversation with “a digital assistant”. However, it is plausible that some suspect human intervention due to the advanced level of natural language understanding from the assistant side.", "Agents playing the assistant role were hired from a pool of dialog analysts and given two hours of training on the system interface as well as on how to handle specific scenarios such as uncooperative users and technical glitches. Uncooperative users typically involve those who either ignored agent input or who rushed through the conversation with short phrases. Technical issues involved dropped sessions (e.g. WebRTC connections failed) or cases in which the user could not hear the agent or vice-versa. In addition, weekly meetings were held with the agents to answer questions and gather feedback on their experiences. Agents typically work four hours per day with dialog types changing every hour. 
Crowdsourced workers playing the user are accessed using Amazon Mechanical Turk. Payment for a completed dialog session lasting roughly five to seven minutes was typically in the range of $\\$1.00$ to $\\$1.30$. Problematic users are detected either by the agent involved in the specific dialog or by post-session assessment and removed from future requests." ], [ "While the two-person approach to data collection creates a realistic scenario for robust, spoken dialog data collection, this technique is time consuming, complex and expensive, requiring considerable technical implementation as well as administrative procedures to train and manage agents and crowdsourced workers. In order to extend the Taskmaster dataset at minimal cost, we use an alternative self-dialog approach in which crowdsourced workers write the full dialogs themselves (i.e. interpreting the roles of both user and assistant)." ], [ "Targeting the same six tasks used for the two-person dialogs, we again engaged the Amazon Mechanical Turk worker pool to create self-dialogs, this time as a written exercise. In this case, users are asked to pretend they have a personal assistant who can help them take care of various tasks in real time. They are told to imagine a scenario in which they are speaking to their assistant on the phone while the assistant accesses the services for one of the given tasks. They then write down the entire conversation. Figure FIGREF34 shows a sample set of instructions." ], [ "The self-dialog technique renders quality data and avoids some of the challenges seen with the two-person approach. To begin, since the same person is writing both sides of the conversation, we never see misunderstandings that lead to frustration as is sometimes experienced between interlocutors in the two-person approach. In addition, all the self-dialogs follow a reasonable path even when the user is constructing conversations that include understanding errors or other types of dialog glitches such as when a particular choice is not available. As it turns out, crowdsourced workers are quite effective at recreating various types of interactions, both error-free and those containing various forms of linguistic repair. The sample dialog in Figure FIGREF44 shows the result of a self-dialog exercise in which workers were told to write a conversation with various ticket availability issues that is ultimately unsuccessful.", "Two more benefits of the self-dialog approach are its efficiency and cost effectiveness. We were able to gather thousands of dialogs in just days without transcription or trained agents, and spent roughly six times less per dialog. Despite these advantages, the self-dialog written technique cannot recreate the disfluencies and other more complex error patterns that occur in the two-person spoken dialogs which are important for model accuracy and coverage." ], [ "We chose a highly simplified annotation approach for Taskmaster-1 as compared to traditional, detailed strategies which require robust agreement among workers and usually include dialog state and slot information, among other possible labels. Instead we focus solely on API arguments for each type of conversation, meaning just the variables required to execute the transaction. For example, in dialogs about setting up UBER rides, we label the “to\" and “from\" locations along with the car type (UberX, XL, Pool, etc). For movie tickets, we label the movie name, theater, time, number of tickets, and sometimes screening type (e.g. 3D vs. standard). 
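To make the argument labeling concrete, here is a purely hypothetical example of an annotated movie-ticket utterance; the field names, label strings and serialization format are illustrative assumptions rather than the released corpus schema, and the accept/reject suffix discussed just below is omitted.

# Hypothetical illustration of span-level API-argument annotation; the schema and
# label names below are assumptions, not the actual Taskmaster-1 release format.
annotation = {
    "text": "I'd like two tickets for Inception at the AMC Mercado at 7 pm",
    "segments": [
        {"span": "two",         "label": "movie_ticket.num_tickets"},
        {"span": "Inception",   "label": "movie_ticket.name.movie"},
        {"span": "AMC Mercado", "label": "movie_ticket.name.theater"},
        {"span": "7 pm",        "label": "movie_ticket.time.start"},
    ],
}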
A complete list of labels is included with the corpus release.", "As discussed in Section SECREF33, to encourage diversity, at times we explicitly ask users to change their mind in the middle of the conversation, and the agents to tell the user that the requested item is not available. This results in conversations having multiple instances of the same argument type. To handle this ambiguity, in addition to the labels mentioned above, the convention of either “accept” or “reject\" was added to all labels used to execute the transaction, depending on whether or not that transaction was successful.", "In Figure FIGREF49, both the number of people and the time variables in the assistant utterance would have the “.accept\" label indicating the transaction was completed successfully. If the utterance describing a transaction does not include the variables by name, the whole sentence is marked with the dialog type. For example, a statement such as The table has been booked for you would be labeled as reservation.accept." ], [ "We quantitatively compare our self-dialogs (Section SECREF45) with the MultiWOZ dataset in Table TABREF2. Compared to MultiWOZ, we do not ask the users and assistants to stick to detailed scripts and do not restrict them to have conversations surrounding a small knowledge base. Table TABREF2 shows that our dataset has more unique words, and has almost twice the number of utterances per dialog than the MultiWOZ corpus. Finally, when trained with the Transformer BIBREF21 model, we observe significantly higher perplexities and lower BLEU scores for our dataset compared to MultiWOZ suggesting that our dataset conversations are difficult to model. Finally, Table TABREF2 also shows that our dataset contains close to 10 times more real-world named entities than MultiWOZ and thus, could potentially serve as a realistic baseline when designing goal oriented dialog systems. MultiWOZ has only 1338 unique named entities and only 4510 unique values (including date, time etc.) in their datatset." ], [ "In this section, we quantitatively compare 5k conversations each of self-dialogs (Section SECREF45) and two-person (Section SECREF31). From Table TABREF50, we find that self-dialogs exhibit higher perplexity ( almost 3 times) compared to the two-person conversations suggesting that self-dialogs are more diverse and contains more non-conventional conversational flows which is inline with the observations in Section-SECREF47. While the number of unique words are higher in the case of self-dialogs, conversations are longer in the two-person conversations. We also report metrics by training a single model on both the datasets together." ], [ "We evaluate various seq2seq architectures BIBREF22 on our self-dialog corpus using both automatic evaluation metrics and human judgments. Following the recent line of work on generative dialog systems BIBREF23, we treat the problem of response generation given the dialog history as a conditional language modeling problem. Specifically we want to learn a conditional probability distribution $P_{\\theta }(U_{t}|U_{1:t-1})$ where $U_{t}$ is the next response given dialog history $U_{1:t-1}$. Each utterance $U_i$ itself is comprised of a sequence of words $w_{i_1}, w_{i_2} \\ldots w_{i_k}$. 
The overall conditional probability is factorized autoregressively as $P_{\\theta }(U_{t}|U_{1:t-1}) = \\prod _{k=1}^{|U_{t}|} P_{\\theta }(w_{t_{k}}|w_{t_{1}}, \\ldots , w_{t_{k-1}}, U_{1:t-1})$.", "$P_{\\theta }$, in this work, is parameterized by a recurrent, convolutional or Transformer-based seq2seq model.", "n-gram: We consider 3-gram and 4-gram conditional language model baselines with interpolation. We use random grid search to find the best coefficients for the interpolated model.", "Convolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with the ADAM optimizer BIBREF26 with a learning rate of 0.25 and dropout probability set to 0.2.", "LSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128-dimensional hidden vectors.", "Transformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. For both LSTMs and Transformer, we train the model with the ADAM optimizer ($\\beta _{1} = 0.85$, $\\beta _{2} = 0.997$) and dropout probability set to 0.2.", "GPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters.", "We evaluate all the models with perplexity and BLEU scores (Table TABREF55). Additionally, we perform two kinds of human evaluation - Ranking and Rating (Likert scale) - for the top-3 performing models: Convolution, LSTM-attention and Transformer. For the ranking task, we randomly show 500 partial dialogs and the generated responses of the top-3 models from the test set to three different crowdsourced workers and ask them to rank the responses based on their relevance to the dialog history. For the rating task, we show the model responses individually to three different crowdsourced workers and ask them to rate the responses on a 1-5 Likert scale based on their appropriateness to the dialog history. From Table TABREF56, we see that inter-annotator reliability scores (Krippendorff's alpha) are higher for the ranking task compared to the rating task. From Table TABREF55, we see that Transformer is the best performing model on automatic evaluation metrics. It is interesting to note that there is a strong correlation between BLEU score and human ranking judgments." ], [ "Next, we discuss a set of baseline experiments for the task of argument prediction. API arguments are annotated as spans in the dialog (Section SECREF48). We formulate this problem as mapping a text conversation to a sequence of output arguments. Apart from the seq2seq Transformer baseline, we consider an additional model - an enhanced Transformer seq2seq model where the decoder can choose to copy from the input or generate from the vocabulary BIBREF31, BIBREF32. Since all the API arguments are input spans, the copy model, having the correct inductive bias, achieves the best performance." ], [ "To address the lack of quality corpora for data-driven dialog system research and development, this paper introduces Taskmaster-1, a dataset that provides richer and more diverse language than current benchmarks since it is based on unrestricted, task-oriented conversations involving more real-world entities. In addition, we present two data collection methodologies, both spoken and written, that ensure both speaker diversity and conversational accuracy. 
Our straightforward, API-oriented annotation technique is much easier for annotators to learn and simpler to apply. We give several baseline models including state-of-the-art neural seq2seq architectures, provide qualitative human performance evaluations for these models, and find that automatic evaluation metrics correlate well with human judgments." ] ] }
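The simplest response-generation baseline described above is an interpolated n-gram conditional language model over (dialog history, response) pairs. The sketch below estimates next-word probabilities from a toy corpus as a concrete illustration; the add-one smoothing, fixed interpolation weights and toy data are assumptions for this example, not the paper's actual configuration (which tunes the coefficients by random grid search).

```python
from collections import defaultdict

# Toy interpolated trigram conditional language model over (history, response)
# pairs. Smoothing, weights and the toy corpus are illustrative assumptions.
dialogs = [
    (["hi", "i", "want", "two", "tickets"], ["sure", "which", "movie", "?"]),
    (["i", "want", "two", "tickets", "for", "inception"], ["which", "theater", "?"]),
]

unigrams, bigrams, trigrams = defaultdict(int), defaultdict(int), defaultdict(int)
total = 0
for history, response in dialogs:
    # Condition on the history by treating it as the left context of the response.
    tokens = history + response
    for i, w in enumerate(tokens):
        total += 1
        unigrams[w] += 1
        if i >= 1:
            bigrams[(tokens[i - 1], w)] += 1
        if i >= 2:
            trigrams[(tokens[i - 2], tokens[i - 1], w)] += 1

def interpolated_prob(w, u, v, lambdas=(0.2, 0.3, 0.5), vocab_size=1000):
    """P(w | u, v) as a weighted mix of unigram, bigram and trigram estimates."""
    l1, l2, l3 = lambdas
    p1 = (unigrams[w] + 1) / (total + vocab_size)
    p2 = (bigrams[(v, w)] + 1) / (unigrams[v] + vocab_size)
    p3 = (trigrams[(u, v, w)] + 1) / (bigrams[(u, v)] + vocab_size)
    return l1 * p1 + l2 * p2 + l3 * p3

# Score a candidate next word of the response given the last two context tokens.
print(interpolated_prob("movie", "sure", "which"))
```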
{ "question": [ "What is the average number of turns per dialog?", "What baseline models are offered?", "Which six domains are covered in the dataset?" ], "question_id": [ "221e9189a9d2431902d8ea833f486a38a76cbd8e", "a276d5931b989e0a33f2a0bc581456cca25658d9", "c21d26130b521c9596a1edd7b9ef3fe80a499f1e" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "dataset", "dataset", "dataset" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "The average number of utterances per dialog is about 23 " ], "yes_no": null, "free_form_answer": "", "evidence": [ "Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors." ], "highlighted_evidence": [ "Multiple turns: The average number of utterances per dialog is about 23 which ensures context-rich language behaviors." ] } ], "annotation_id": [ "0bcd2807b2ffc25eb96aab13aec426313ee4d1d0" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "3-gram and 4-gram conditional language model", "Convolution", "LSTM models BIBREF27 with and without attention BIBREF28", "Transformer", "GPT-2" ], "yes_no": null, "free_form_answer": "", "evidence": [ "n-gram: We consider 3-gram and 4-gram conditional language model baseline with interpolation. We use random grid search for the best coefficients for the interpolated model.", "Convolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with ADAM optimizer BIBREF26 with learning rate of 0.25 and dropout probability set to 0.2.", "LSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128 dimensional hidden vectors.", "Transformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. For both LSTMs and Transformer, we train the model with ADAM optimizer ($\\beta _{1} = 0.85$, $\\beta _{2} = 0.997$) and dropout probability set to 0.2.", "GPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters." ], "highlighted_evidence": [ "n-gram: We consider 3-gram and 4-gram conditional language model baseline with interpolation. We use random grid search for the best coefficients for the interpolated model.\n\nConvolution: We use the fconv architecture BIBREF24 and default hyperparameters from the fairseq BIBREF25 framework. We train the network with ADAM optimizer BIBREF26 with learning rate of 0.25 and dropout probability set to 0.2.\n\nLSTM: We consider LSTM models BIBREF27 with and without attention BIBREF28 and use the tensor2tensor BIBREF29 framework for the LSTM baselines. We use a two-layer LSTM network for both the encoder and the decoder with 128 dimensional hidden vectors.\n\nTransformer: As with LSTMs, we use the tensor2tensor framework for the Transformer model. Our Transformer BIBREF21 model uses 256 dimensions for both input embedding and hidden state, 2 layers and 4 attention heads. 
For both LSTMs and Transformer, we train the model with ADAM optimizer ($\\beta _{1} = 0.85$, $\\beta _{2} = 0.997$) and dropout probability set to 0.2.\n\nGPT-2: Apart from supervised seq2seq models, we also include results from pre-trained GPT-2 BIBREF30 containing 117M parameters." ] } ], "annotation_id": [ "7cabfc841fa9d530661b66c75e457c883029b067" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations" ], "yes_no": null, "free_form_answer": "", "evidence": [ "To help solve the data problem we present Taskmaster-1, a dataset consisting of 13,215 dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations. For the spoken dialogs, we created a “Wizard of Oz” (WOz) system BIBREF12 to collect two-person, spoken conversations. Crowdsourced workers playing the “user\" interacted with human operators playing the “digital assistant” using a web-based interface. In this way, users were led to believe they were interacting with an automated system while it was in fact a human, allowing them to express their turns in natural ways but in the context of an automated interface. We refer to this spoken dialog type as “two-person dialogs\". For the written dialogs, we engaged crowdsourced workers to write the full conversation themselves based on scenarios outlined for each task, thereby playing roles of both the user and assistant. We refer to this written dialog type as “self-dialogs\". In a departure from traditional annotation techniques BIBREF10, BIBREF8, BIBREF13, dialogs are labeled with simple API calls and arguments. This technique is much easier for annotators to learn and simpler to apply. As such it is more cost effective and, in addition, the same model can be used for multiple service providers." ], "highlighted_evidence": [ "To help solve the data problem we present Taskmaster-1, a dataset consisting of 13,215 dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations. " ] } ], "annotation_id": [ "a2d3c777e3a00aceef1803f37264ebea8bd2adb6" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
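One baseline not covered by the excerpt above is the copy-augmented decoder used for argument prediction, where the decoder can either copy an input token or generate from the vocabulary. A minimal numpy sketch of such a copy/generate mixture (pointer-generator style) is given below; the vocabulary, attention weights and gating value are invented for illustration and do not reflect the paper's trained model.

```python
import numpy as np

# Toy pointer-generator style mixture: the final distribution over words is
# p_gen * P_vocab(w) + (1 - p_gen) * attention mass on source copies of w.
vocab = ["<unk>", "uber", "x", "pool", "home", "airport"]
source_tokens = ["book", "an", "uber", "x", "to", "the", "airport"]

p_vocab = np.array([0.05, 0.30, 0.10, 0.05, 0.10, 0.40])          # decoder softmax
attention = np.array([0.05, 0.05, 0.35, 0.25, 0.05, 0.05, 0.20])  # over the source
p_gen = 0.4                                                        # generation gate

def mixture_distribution(p_vocab, attention, p_gen, vocab, source_tokens):
    """Combine generation and copying into one distribution over the vocabulary."""
    p = p_gen * p_vocab.copy()
    for token, weight in zip(source_tokens, attention):
        # Simplification: copy mass for out-of-vocabulary tokens goes to <unk>;
        # the full model instead extends the vocabulary with source words.
        idx = vocab.index(token) if token in vocab else 0
        p[idx] += (1.0 - p_gen) * weight
    return p / p.sum()   # renormalize to guard against rounding

p_final = mixture_distribution(p_vocab, attention, p_gen, vocab, source_tokens)
print(dict(zip(vocab, np.round(p_final, 3))))
```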
{ "caption": [ "Table 1: Statistics comparison: Self-dialogs vs MultiWOZ corpus both containing approximately 10k dialogues each.", "Figure 1: Sample Taskmaster-1 two-person dialog", "Figure 5: Sample one-person, written dialog", "Figure 6: Indicating transaction status with “accept” or “reject”", "Table 2: Statistics comparison: Self-dialogs vs two person corpus both containing 5k dialogs. Perplexity and BLEU are reported for Transformer baseline. Joint-Perplexity and Joint-BLEU are perplexity/BLEU scores from the joint training of self-dialogs and twoperson but evaluated with their respective test sets.", "Table 3: Evaluation of various seq2seq architectures (Sutskever et al., 2014) on our self-dialog corpus using both automatic evaluation metrics and human judgments. Human evaluation ratings in the 1-5 LIKERT scale (higher the better), and human ranking are averaged over 500 x 3 ratings (3 crowdsourced workers per rating).", "Table 4: Inter-Annotator Reliability scores of seq2seq model responses computed for 500 self-dialogs from the test set, each annotated by 3 crowdsourcedworkers.", "Table 5: API Argument prediction accuracy for Selfdialogs. API arguments are annotated as spans in the utterances." ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "6-Figure5-1.png", "7-Figure6-1.png", "7-Table2-1.png", "8-Table3-1.png", "8-Table4-1.png", "8-Table5-1.png" ] }
2003.06279
Using word embeddings to improve the discriminability of co-occurrence text networks
Word co-occurrence networks have been employed to analyze texts in both practical and theoretical scenarios. Despite their relative success in several applications, traditional co-occurrence networks fail to establish links between similar words whenever they appear distant in the text. Here we investigate whether the use of word embeddings as a tool to create virtual links in co-occurrence networks may improve the quality of classification systems. Our results revealed that the discriminability in the stylometry task is improved when using GloVe, Word2Vec and FastText. In addition, we found that optimized results are obtained when stopwords are not disregarded and a simple global thresholding strategy is used to establish virtual links. Because the proposed approach is able to improve the representation of texts as complex networks, we believe that it could be extended to study other natural language processing tasks. Likewise, theoretical language studies could benefit from the adopted enriched representation of word co-occurrence networks.
{ "section_name": [ "Introduction", "Related works", "Material and Methods", "Results and Discussion", "Results and Discussion ::: Performance analysis", "Results and Discussion ::: Effects of considering stopwords and local thresholding", "Conclusion", "Acknowledgments", "Supplementary Information ::: Stopwords", "Supplementary Information ::: List of books", "Supplementary Information ::: Additional results" ], "paragraphs": [ [ "The ability to construct complex and diverse linguistic structures is one of the main features that set us apart from all other species. Despite its ubiquity, some language aspects remain unknown. Topics such as language origin and evolution have been studied by researchers from diverse disciplines, including Linguistic, Computer Science, Physics and Mathematics BIBREF0, BIBREF1, BIBREF2. In order to better understand the underlying language mechanisms and universal linguistic properties, several models have been developed BIBREF3, BIBREF4. A particular language representation regards texts as complex systems BIBREF5. Written texts can be considered as complex networks (or graphs), where nodes could represent syllables, words, sentences, paragraphs or even larger chunks BIBREF5. In such models, network edges represent the proximity between nodes, e.g. the frequency of the co-occurrence of words. Several interesting results have been obtained from networked models, such as the explanation of Zipf's Law as a consequence of the least effort principle and theories on the nature of syntactical relationships BIBREF6, BIBREF7.", "In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.", "While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges.", "Our main objective here is to evaluate whether such an approach is able to improve the discriminability of word co-occurrence networks in a typical text network classification task. We evaluate the methodology for different embedding techniques, including GloVe, Word2Vec and FastText. We also investigated different thresholding strategies to establish virtual links. 
Our results revealed, as a proof of principle, that the proposed approach is able to improve the discriminability of the classification when compared to the traditional co-occurrence network. While the gain in performance depended upon the text length being considered, we found relevant gains for intermediary text lengths. Additional results also revealed that a simple thresholding strategy combined with the use of stopwords tends to yield the best results.", "We believe that the proposed representation could be applied in other text classification tasks, which could lead to potential gains in performance. Because the inclusion of virtual edges is a simple technique to make the network denser, such an approach can benefit networked representations with a limited number of nodes and edges. This representation could also shed light into language mechanisms in theoretical studies relying on the representation of text as complex networks. Potential novel research lines leveraging the adopted approach to improve the characterization of texts in other applications are presented in the conclusion." ], [ "Complex networks have been used in a wide range of fields, including in Social Sciences BIBREF13, Neuroscience BIBREF14, Biology BIBREF15, Scientometry BIBREF16 and Pattern Recognition BIBREF17, BIBREF18, BIBREF19, BIBREF20. In text analysis, networks are used to uncover language patterns, including the origins of the ever present Zipf's Law BIBREF21 and the analysis of linguistic properties of natural and unknown texts BIBREF22, BIBREF23. Applications of network science in text mining and text classification encompasses applications in semantic analysis BIBREF24, BIBREF25, BIBREF26, BIBREF27, authorship attribution BIBREF28, BIBREF29 and stylometry BIBREF28, BIBREF30, BIBREF31. Here we focus in the stylometric analysis of texts using complex networks.", "In BIBREF28, the authors used a co-occurrence network to study a corpus of English and Polish books. They considered a dataset of 48 novels, which were written by 8 different authors. Differently from traditional co-occurrence networks, some punctuation marks were considered as words when mapping texts as networks. The authors also decided to create a methodology to normalize the obtained network metrics, since they considered documents with variations in length. A similar approach was adopted in a similar study BIBREF32, with a focus on comparing novel measurements and measuring the effect of considering stopwords in the network structure.", "A different approach to analyze co-occurrence networks was devised in BIBREF33. Whilst most approaches only considered traditional network measurements or devised novel topological and dynamical measurements, the authors combined networked and semantic information to improve the performance of network-based classification. Interesting, the combined use of network motifs and node labels (representing the corresponding words) allowed an improvement in performance in the considered task. A similar combination of techniques using a hybrid approach was proposed in BIBREF8. Networked-based approaches has also been applied to the authorship recognition tasks in other languages, including Persian texts BIBREF9.", "Co-occurrence networks have been used in other contexts other than stylometric analysis. The main advantage of this approach is illustrated in the task aimed at diagnosing diseases via text analysis BIBREF11. 
Because the topological analysis of co-occurrence language networks do not require deep semantic analysis, this model is able to model text created by patients suffering from cognitive impairment BIBREF11. Recently, it has been shown that the combination of network and traditional features could be used to improve the diagnosis of patients with cognitive impairment BIBREF11. Interestingly, this was one of the first approaches suggesting the use of embeddings to address the particular problem of lack of statistics to create a co-occurrence network in short documents BIBREF34.", "While many of the works dealing with word co-occurrence networks have been proposed in the last few years, no systematic study of the effects of including information from word embeddings in such networks has been analyzed. This work studies how links created via embeddings information modify the underlying structure of networks and, most importantly, how it can improve the model to provide improved classification performance in the stylometry task." ], [ "To represent texts as networks, we used the so-called word adjacency network representation BIBREF35, BIBREF28, BIBREF32. Typically, before creating the networks, the text is pre-processed. An optional pre-processing step is the removal of stopwords. This step is optional because such words include mostly article and prepositions, which may be artlessly represented by network edges. However, in some applications – including the authorship attribution task – stopwords (or function words) play an important role in the stylistic characterization of texts BIBREF32. A list of stopwords considered in this study is available in the Supplementary Information.", "The pre-processing step may also include a lemmatization procedure. This step aims at mapping words conveying the same meaning into the same node. In the lemmatization process, nouns and verbs are mapped into their singular and infinite forms. Note that, while this step is useful to merge words sharing a lemma into the same node, more complex semantical relationships are overlooked. For example, if “car” and “vehicle” co-occur in the same text, they are considered as distinct nodes, which may result in an inaccurate representation of the text.", "Such a drawback is addressed by including “virtual” edges connecting nodes. In other words, even if two words are not adjacent in the text, we include “virtual” edges to indicate that two distant words are semantically related. The inclusion of such virtual edges is illustrated in Figure FIGREF1. In order to measure the semantical similarity between two concepts, we use the concept of word embeddings BIBREF36, BIBREF37. Thus, each word is represented using a vector representation encoding the semantical and contextual characteristics of the word. Several interesting properties have been obtained from distributed representation of words. One particular property encoded in the embeddings representation is the fact the semantical similarity between concepts is proportional to the similarity of vectors representing the words. Similarly to several other works, here we measure the similarity of the vectors via cosine similarity BIBREF38.", "The following strategies to create word embedding were considered in this paper:", "GloVe: the Global Vectors (GloVe) algorithm is an extension of the Word2vec model BIBREF39 for efficient word vector learning BIBREF40. 
This approach combines global statistics from matrix factorization techniques (such as latent semantic analysis) with context-based and predictive methods like Word2Vec. This method is called as Global Vector method because the global corpus statistics are captured by GloVe. Instead of using a window to define the local context, GloVe constructs an explicit word-context matrix (or co-occurrence matrix) using statistics across the entire corpus. The final result is a learning model that oftentimes yields better word vector representations BIBREF40.", "Word2Vec: this is a predictive model that finds dense vector representations of words using a three-layer neural network with a single hidden layer BIBREF39. It can be defined in a two-fold way: continuous bag-of-words and skip-gram model. In the latter, the model analyzes the words of a set of sentences (or corpus) and attempts to predict the neighbors of such words. For example, taking as reference the word “Robin”, the model decides that “Hood” is more likely to follow the reference word than any other word. The vectors are obtained as follows: given the vocabulary (generated from all corpus words), the model trains a neural network with the sentences of the corpus. Then, for a given word, the probabilities that each word follows the reference word are obtained. Once the neural network is trained, the weights of the hidden layer are used as vectors of each corpus word.", "FastText: this method is another extension of the Word2Vec model BIBREF41. Unlike Word2Vec, FastText represents each word as a bag of character n-grams. Therefore, the neural network not only trains individual words, but also several n-grams of such words. The vector for a word is the sum of vectors obtained for the character n-grams composing the word. For example, the embedding obtained for the word “computer” with $n\\le 3$ is the sum of the embeddings obtained for “co”, “com”, “omp”, “mpu”, “put”, “ute”, “ter” and “er”. In this way, this method obtains improved representations for rare words, since n-grams composing rare words might be present in other words. The FastText representation also allows the model to understand suffixes and prefixes. Another advantage of FastText is its efficiency to be trained in very large corpora.", "Concerning the thresholding process, we considered two main strategies. First, we used a global strategy: in addition to the co-occurrence links (continuous lines in Figure FIGREF1), only “virtual” edges stronger than a given threshold are left in the network. Thus only the most similar concepts are connected via virtual links. This strategy is hereafter referred to as global strategy. Unfortunately, this method may introduce an undesired bias towards hubs BIBREF42.", "To overcome the potential disadvantages of the global thresholding method, we also considered a more refined thresholding approach that takes into account the local structure to decide whether a weighted link is statistically significant BIBREF42. This method relies on the idea that the importance of an edge should be considered in the the context in which it appears. In other words, the relevance of an edge should be evaluated by analyzing the nodes connected to its ending points. Using the concept of disparity filter, the method devised in BIBREF42 defines a null model that quantifies the probability of a node to be connected to an edge with a given weight, based on its other connections. This probability is used to define the significance of the edge. 
The parameter used to measure the significance of an edge $e_{ij}$ is $\\alpha _{ij}$, defined as $\\alpha _{ij} = 1 - (k_i - 1) \\int _{0}^{p_{ij}} (1-x)^{k_i - 2} dx$,", "where $p_{ij} = w_{ij} / \\sum _{l} w_{il}$ is the normalized weight of the edge, $w_{ij}$ is the weight of the edge $e_{ij}$ and $k_i$ is the degree of the $i$-th node. The obtained network corresponds to the set of nodes and edges obtained by removing all edges with $\\alpha $ higher than the considered threshold. Note that while the similarity between co-occurrence links might be considered to compute $\\alpha _{ij}$, only “virtual” edges (i.e. the dashed lines in Figure FIGREF1) are eligible to be removed from the network in the filtering step. This strategy is hereafter referred to as the local strategy.", "After co-occurrence networks are created and virtual edges are included, in the next step we used a characterization based on topological analysis. Because a global topological analysis is prone to variations in network size, we focused our analysis on the local characterization of complex networks. In a local topological analysis, we use as features the values of topological/dynamical measurements obtained for a set of words. In this case, we selected as features the words occurring in all books of the dataset. For each word, we considered the following network measurements: degree, betweenness, clustering coefficient, average shortest path length, PageRank, concentric symmetry (at the second and third hierarchical levels) BIBREF32 and accessibility BIBREF43, BIBREF44 (at the second and third hierarchical levels). We chose these measurements because all of them capture particular linguistic features of texts BIBREF45, BIBREF46, BIBREF47, BIBREF48. After the network measurements are extracted, they are used in machine learning algorithms. In our experiments, we considered Decision Trees (DT), nearest neighbors (kNN), Naive Bayes (NB) and Support Vector Machines (SVM). We used heuristics described in the literature to optimize the classifier parameters BIBREF49. The accuracy of the pattern recognition methods was evaluated using cross-validation BIBREF50.", "In summary, the methodology used in this paper encompasses the following steps:", "Network construction: here texts are mapped into co-occurrence networks. Some variations exist in the literature; here we focused on the most usual one, i.e. the possibility of considering or disregarding stopwords. A network with co-occurrence links is obtained after this step.", "Network enrichment: in this step, the network is enriched with virtual edges established via the similarity of word embeddings. After this step, we are given a complete network with weighted links. Virtually any embedding technique could be used to gauge the similarity between nodes.", "Network filtering: in order to eliminate spurious links included in the last step, the weakest edges are filtered. Two approaches were considered: a simple approach based on a global threshold and a local thresholding strategy that preserves network community structure. The outcome of this network filtering step is a network with two types of links: co-occurrence and virtual links (as shown in Figure FIGREF1).", "Feature extraction: in this step, topological and dynamical network features are extracted. Here, we do not discriminate co-occurrence from virtual edges to compute the network metrics.", "Pattern classification: once features are extracted from complex networks, they are used in pattern classification methods. This might include supervised, unsupervised and semi-supervised classification; a minimal sketch of the construction, enrichment and filtering steps is given below. 
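The sketch below illustrates the construction, enrichment and filtering steps, assuming networkx and numpy, a toy token sequence and a tiny pre-computed embedding dictionary standing in for GloVe/Word2Vec/FastText vectors. The thresholds, the toy vectors and the closed-form expression used for the disparity filter are illustrative choices rather than the authors' implementation.

```python
import numpy as np
import networkx as nx

# Toy pre-computed embeddings standing in for GloVe/Word2Vec/FastText vectors.
embeddings = {
    "car":     np.array([0.9, 0.1, 0.0]),
    "vehicle": np.array([0.8, 0.2, 0.1]),
    "road":    np.array([0.1, 0.9, 0.0]),
    "drive":   np.array([0.2, 0.8, 0.1]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# 1) Network construction: link adjacent words (word adjacency network).
tokens = ["car", "drive", "road", "vehicle", "drive"]
G = nx.Graph()
for a, b in zip(tokens, tokens[1:]):
    G.add_edge(a, b, weight=1.0, kind="co-occurrence")

# 2) Network enrichment: add virtual edges between semantically similar words.
words = list(G.nodes)
for i, a in enumerate(words):
    for b in words[i + 1:]:
        if not G.has_edge(a, b):
            G.add_edge(a, b, weight=cosine(embeddings[a], embeddings[b]), kind="virtual")

# 3a) Global filtering: keep only virtual edges above a similarity threshold.
GLOBAL_THRESHOLD = 0.8   # illustrative value; the paper varies the fraction of edges kept
global_filtered = G.copy()
global_filtered.remove_edges_from([
    (a, b) for a, b, d in G.edges(data=True)
    if d["kind"] == "virtual" and d["weight"] < GLOBAL_THRESHOLD
])

# 3b) Local (disparity-filter) alternative: score each virtual edge by alpha_ij.
def alpha(G, i, j):
    """Disparity-filter significance of edge (i, j) seen from node i."""
    k = G.degree(i)
    if k <= 1:
        return 1.0
    p = G[i][j]["weight"] / sum(d["weight"] for _, _, d in G.edges(i, data=True))
    return (1.0 - p) ** (k - 1)   # closed form of 1 - (k-1) * integral of (1-x)^(k-2)

ALPHA_THRESHOLD = 0.6   # illustrative; virtual edges insignificant for both endpoints are removed
local_filtered = G.copy()
local_filtered.remove_edges_from([
    (a, b) for a, b, d in G.edges(data=True)
    if d["kind"] == "virtual" and min(alpha(G, a, b), alpha(G, b, a)) > ALPHA_THRESHOLD
])

# 4) Feature extraction: per-word topological measurements used as features.
print(nx.degree_centrality(global_filtered))
print(nx.pagerank(global_filtered))
```

In this toy example the virtual edge between "car" and "vehicle" survives the global filter (high cosine similarity) while the weak "car"-"road" edge is discarded, which is exactly the behavior the enrichment step is meant to capture.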
This framework is exemplified in the supervised scenario.", "The above framework is exemplified with the most common technique(s). It should be noted that the methods used, however, can be replaced by similar techniques. For example, the network construction could consider stopwords or even punctuation marks BIBREF51. Another possibility is the use of different strategies of thresholding. While a systematic analysis of techniques and parameters is still required to reveal other potential advantages of the framework based on the addition of virtual edges, in this paper we provide a first analysis showing that virtual edges could be useful to improve the discriminability of texts modeled as complex networks.", "Here we used a dataset compatible with datasets used recently in the literature (see e.g. BIBREF28, BIBREF10, BIBREF52). The objective of the studied stylometric task is to identify the authorship of an unknown document BIBREF53. All data and some statistics of each book are shown in the Supplementary Information." ], [ "In Section SECREF13, we probe whether the inclusion of virtual edges is able to improve the performance of the traditional co-occurrence network-based classification in a usual stylometry task. While the focus of this paper is not to perform a systematic analysis of different methods comprising the adopted network, we consider two variations in the adopted methodology. In Section SECREF19, we consider the use of stopwords and the adoption of a local thresholding process to establish different criteria to create new virtual edges." ], [ "In Figure FIGREF14, we show some of the improvements in performance obtained when including a fixed amount of virtual edges using GloVe as embedding method. In each subpanel, we show the relative improvement in performance obtained as a function of the fraction of additional edges. In this section, we considered the traditional co-occurrence as starting point. In other words, the network construction disregarded stopwords. The list of stopwords considered in this paper is available in the Supplementary Information. We also considered the global approach to filter edges.", "The relative improvement in performance is given by $\\Gamma _+{(p)}/\\Gamma _0$, where $\\Gamma _+{(p)}$ is the accuracy rate obtained when $p\\%$ additional edges are included and $\\Gamma _0 = \\Gamma _+{(p=0)}$, i.e. $\\Gamma _0$ is the accuracy rate measured from the traditional co-occurrence model. We only show the highest relative improvements in performance for each classifier. In our analysis, we considered also samples of text with distinct length, since the performance of network-based methods is sensitive to text length BIBREF34. In this figure, we considered samples comprising $w=\\lbrace 1.0, 2.5, 5.0, 10.0\\rbrace $ thousand words.", "The results obtained for GloVe show that the highest relative improvements in performance occur for decision trees. This is apparent specially for the shortest samples. For $w=1,000$ words, the decision tree accuracy is enhanced by a factor of almost 50% when $p=20\\%$. An excellent gain in performance is also observed for both Naive Bayes and SVM classifiers, when $p=18\\%$ and $p=12\\%$, respectively. When $w=2,500$ words, the highest improvements was observed for the decision tree algorithm. A minor improvement was observed for the kNN method. A similar behavior occurred for $w=5,000$ words. Interestingly, SVM seems to benefit from the use of additional edges when larger documents are considered. 
When only 5% virtual edges are included, the relative gain in performance is about 45%.", "The relative gain in performance obtained for Word2vec is shown in Figure FIGREF15. Overall, once again decision trees obtained the highest gain in performance when short texts are considered. Similar to the analysis based on the GloVe method, the gain for kNN is low when compared to the benefit received by other methods. Here, a considerable gain for SVM in only clear for $w=2,500$ and $p=10\\%$. When large texts are considered, Naive Bayes obtained the largest gain in performance.", "Finally, the relative gain in performance obtained for FastText is shown in Figure FIGREF16. The prominent role of virtual edges in decision tree algorithm in the classification of short texts once again is evident. Conversely, the classification of large documents using virtual edges mostly benefit the classification based on the Naive Bayes classifier. Similarly to the results observed for Glove and Word2vec, the gain in performance obtained for kNN is low compared when compared to other methods.", "While Figures FIGREF14 – FIGREF16 show the relative behavior in the accuracy, it still interesting to observe the absolute accuracy rate obtained with the classifiers. In Table TABREF17, we show the best accuracy rate (i.e. $\\max \\Gamma _+ = \\max _p \\Gamma _+(p)$) for GloVe. We also show the average difference in performance ($\\langle \\Gamma _+ - \\Gamma _0 \\rangle $) and the total number of cases in which an improvement in performance was observed ($N_+$). $N_+$ ranges in the interval $0 \\le N_+ \\le 20$. Table TABREF17 summarizes the results obtained for $w = \\lbrace 1.0, 5.0, 10.0\\rbrace $ thousand words. Additional results for other text length are available in Tables TABREF28–TABREF30 of the Supplementary Information.", "In very short texts, despite the low accuracy rates, an improvement can be observed in all classifiers. The best results was obtained with SVM when virtual edges were included. For $w=5,000$ words, the inclusion of new edges has no positive effect on both kNN and Naive Bayes algorithms. On the other hand, once again SVM could be improved, yielding an optimized performance. For $w=10,000$ words, SVM could not be improved. However, even without improvement it yielded the maximum accuracy rate. The Naive Bayes algorithm, in average, could be improved by a margin of about 10%.", "The results obtained for Word2vec are summarized in Table TABREF29 of the Supplementary Information. Considering short documents ($w=1,000$ words), here the best results occurs only with the decision tree method combined with enriched networks. Differently from the GloVe approach, SVM does not yield the best results. Nonetheless, the highest accuracy across all classifiers and values of $p$ is the same. For larger documents ($w=5,000$ and $w=10,000$ words), no significant difference in performance between Word2vec and GloVe is apparent.", "The results obtained for FastText are shown in Table TABREF18. In short texts, only kNN and Naive Bayes have their performance improved with virtual edges. However, none of the optimized results for these classifiers outperformed SVM applied to the traditional co-occurrence model. Conversely, when $w=5,000$ words, the optimized results are obtained with virtual edges in the SVM classifier. Apart from kNN, the enriched networks improved the traditional approach in all classifiers. 
For large chunks of text ($w=10,000$), once again the approach based on SVM and virtual edges yielded optimized results. All classifiers benefited from the inclusion of additional edges. Remarkably, Naive Bayes improved by a margin of about $13\\%$." ], [ "While in the previous section we focused our analysis on the traditional word co-occurrence model, here we probe whether the idea of considering virtual edges can also yield optimized results in particular modifications of the framework described in the methodology. The first modification of the co-occurrence model is the use of stopwords. While stopwords are disregarded in semantically oriented applications of network language modeling, in other applications they can unravel interesting linguistic patterns BIBREF10. Here we analyzed the effect of using stopwords in enriched networks. We summarize the obtained results in Table TABREF20. We only show the results obtained with SVM, as it yielded the best results in comparison to the other classifiers. The accuracy rates for the other classifiers are shown in the Supplementary Information.", "The results in Table TABREF20 reveal that even when stopwords are considered in the original model, an improvement can be observed with the addition of virtual edges. However, the results show that the degree of improvement depends upon the text length. In very short texts ($w=1,000$), none of the embedding strategies was able to improve the performance of the classification. For $w=1,500$, a minor improvement was observed with FastText: the accuracy increased from $\\Gamma _0 = 37.18\\%$ to $38.46\\%$. A larger improvement could be observed for $w=2,000$: both the Word2vec and FastText approaches allowed an increase of more than 5% in performance. A gain higher than 10% was observed for $w=2,500$ with Word2vec. For larger pieces of text, the gain is less pronounced or absent. All in all, the results show that the use of virtual edges can also benefit the network approach based on stopwords. However, no significant improvement could be observed for very short and very large documents. The comparison of the three embedding methods showed that no method performed better than the others in all cases.", "We also investigated whether more informed thresholding strategies could provide better results. While the simple global thresholding approach might not be able to represent more complex structures, we also tested a more robust approach based on the local method proposed by Serrano et al. BIBREF42. In Table TABREF21, we summarize the results obtained with this thresholding strategy. The table shows $\\max \\Gamma _+^{(L)} / \\max \\Gamma _+^{(G)}$, where $\\Gamma _+^{(L)}$ and $\\Gamma _+^{(G)}$ are the accuracies obtained with the local and global thresholding strategies, respectively. The results were obtained with the SVM classifier, as it turned out to be the most efficient classification method. We found that there is no gain in performance when the local strategy is used. In particular cases, the global strategy is considerably more efficient. This is the case, e.g., when GloVe is employed in texts with $w=1,500$ words: the performance of the global strategy is $12.2\\%$ higher than the one obtained with the local method. A minor difference in performance was found in texts comprising $w=1,000$ words, yet the global strategy is still more efficient than the local one.", "To summarize all results obtained in this study, we show in Table TABREF22 the best results obtained for each text length. 
We also show the relative gain in performance with the proposed approach and the embedding technique yielding the best result. All optimized results were obtained with the use of stopwords, global thresholding strategy and SVM as classification algorithm. A significant gain is more evident for intermediary text lengths." ], [ "Textual classification remains one of the most important facets of the Natural Language Processing area. Here we studied a family of classification methods, the word co-occurrence networks. Despite this apparent simplicity, this model has been useful in several practical and theoretical scenarios. We proposed a modification of the traditional model by establishing virtual edges to connect nodes that are semantically similar via word embeddings. The reasoning behind this strategy is the fact the similar words are not properly linked in the traditional model and, thus, important links might be overlooked if only adjacent words are linked.", "Taking as reference task a stylometric problem, we showed – as a proof of principle – that the use of virtual edges might improve the discriminability of networks. When analyzing the best results for each text length, apart from very short and long texts, the proposed strategy yielded optimized results in all cases. The best classification performance was always obtained with the SVM classifier. In addition, we found an improved performance when stopwords are used in the construction of the enriched co-occurrence networks. Finally, a simple global thresholding strategy was found to be more efficient than a local approach that preserves the community structure of the networks. Because complex networks are usually combined with other strategies BIBREF8, BIBREF11, we believe that the proposed could be used in combination with other methods to improve the classification performance of other text classification tasks.", "Our findings paves the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach for general classification tasks. A systematic comparison of embeddings techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, allowing thus the use of the methodology in other networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embeddings techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, other interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links." ], [ "The authors acknowledge financial support from FAPESP (Grant no. 16/19069-9), CNPq-Brazil (Grant no. 304026/2018-2). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001." 
], [ "The following words were considered as stopwords in our analysis: all, just, don't, being, over, both, through, yourselves, its, before, o, don, hadn, herself, ll, had, should, to, only, won, under, ours,has, should've, haven't, do, them, his, very, you've, they, not, during, now, him, nor, wasn't, d, did, didn, this, she, each, further, won't, where, mustn't, isn't, few, because, you'd, doing, some, hasn, hasn't, are, our, ourselves, out, what, for, needn't, below, re, does, shouldn't, above, between, mustn, t, be, we, who, mightn't, doesn't, were, here, shouldn, hers, aren't, by, on, about, couldn, of, wouldn't, against, s, isn, or, own, into, yourself, down, hadn't, mightn, couldn't, wasn, your, you're, from, her, their, aren, it's, there, been, whom, too, wouldn, themselves, weren, was, until, more, himself, that, didn't, but, that'll, with, than, those, he, me, myself, ma, weren't, these, up, will, while, ain, can, theirs, my, and, ve, then, is, am, it, doesn, an, as, itself, at, have, in, any, if, again, no, when, same, how, other, which, you, shan't, shan, needn, haven, after, most, such, why, a, off i, m, yours, you'll, so, y, she's, the, having, once." ], [ "The list of books is shown in Tables TABREF25 and TABREF26. For each book we show the respective authors (Aut.) and the following quantities: total number of words ($N_W$), total number of sentences ($N_S$), total number of paragraphs ($N_P$) and the average sentence length ($\\langle S_L \\rangle $), measured in number of words. The following authors were considered: Hector Hugh (HH), Thomas Hardy (TH), Daniel Defoe (DD), Allan Poe (AP), Bram Stoker (BS), Mark Twain (MT), Charles Dickens (CD), Pelham Grenville (PG), Charles Darwin (CD), Arthur Doyle (AD), George Eliot (GE), Jane Austen (JA), and Joseph Conrad (JC)." ], [ "In this section we show additional results obtained for different text length. More specifically, we show the results obtained for GloVe, Word2vec and FastText when stopwords are either considered in the text or disregarded from the analysis." ] ] }
{ "question": [ "What other natural processing tasks authors think could be studied by using word embeddings?", "What is the reason that traditional co-occurrence networks fail in establishing links between similar words whenever they appear distant in the text?", "Do the use word embeddings alone or they replace some previous features of the model with word embeddings?", "On what model architectures are previous co-occurence networks based?" ], "question_id": [ "ec8043290356fcb871c2f5d752a9fe93a94c2f71", "728c2fb445173fe117154a2a5482079caa42fe24", "23d32666dfc29ed124f3aa4109e2527efa225fbc", "076928bebde4dffcb404be216846d9d680310622" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "general classification tasks", "use of the methodology in other networked systems", "a network could be enriched with embeddings obtained from graph embeddings techniques" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our findings paves the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach for general classification tasks. A systematic comparison of embeddings techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, allowing thus the use of the methodology in other networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embeddings techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, other interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links." ], "highlighted_evidence": [ "Our findings paves the way for research in several new directions. While we probed the effectiveness of virtual edges in a specific text classification task, we could extend this approach for general classification tasks. A systematic comparison of embeddings techniques could also be performed to include other recent techniques BIBREF54, BIBREF55. We could also identify other relevant techniques to create virtual edges, allowing thus the use of the methodology in other networked systems other than texts. For example, a network could be enriched with embeddings obtained from graph embeddings techniques. A simpler approach could also consider link prediction BIBREF56 to create virtual edges. Finally, other interesting family of studies concerns the discrimination between co-occurrence and virtual edges, possibly by creating novel network measurements considering heterogeneous links." 
] } ], "annotation_id": [ "c98053f61caf0057e9b860a136f79840b47e83ab" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.", "While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges." ], "highlighted_evidence": [ "A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks.\n\nWhile the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12." ] } ], "annotation_id": [ "0bdf5fb318f76cc109cfa8ff324fa6c915bf9c55" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They use it as addition to previous model - they add new edge between words if word embeddings are similar.", "evidence": [ "While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded from a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes. 
In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges." ], "highlighted_evidence": [ "In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if the corresponding word embedding representation is similar." ] } ], "annotation_id": [ "182529ec096a2983f73eb75bd663ceacddf6e26d" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window", "connects only adjacent words in the so called word adjacency networks" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantical information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, it yields competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks." ], "highlighted_evidence": [ "A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so called word adjacency networks." ] } ], "annotation_id": [ "845c82e222206d736d76c979e6b88f5acd7f59b6" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "FIG. 1. Example of a enriched word co-occurrence network created for a text. In this model, after the removal of stopwords, the remaining words are linked whenever they appear in the same context. In the proposed network representation, “virtual” edges are included whenever two nodes (words) are semantically related. In this example, virtual edges are those represented by red dashed lines. Edges are included via embeddings similarity. The quantity of included edges is a parameter to be chosen.", "FIG. 2. Gain in performance when considering additional virtual edges created using GloVe as embedding method. Each sub-panel shows the results obtained for distinct values of text length. In this case, the highest improvements in performance tends to occur in the shortest documents.", "FIG. 3. Gain in performance when considering additional virtual edges created using Word2vec as embedding method. Each sub-panel shows the results obtained for distinct values of text length.", "FIG. 4. Gain in performance when considering additional virtual edges created using FastText as embedding method. Each sub-panel shows the results obtained for distinct value of text length.", "TABLE I. Statistics of performance obtained with GloVe for different text lengths. Additional results considering other text lengths are shown in the Supplementary Information. Γ0 is the the accuracy rate obtained with the traditional co-occurrence model and max Γ+ is the highest accuracy rate considering different number of additional virtual edges. 〈Γ+ − Γ0〉 is the average absolute improvement in performance, 〈Γ+/Γ0〉 is the average relative improvement in performance and N+ is the total number of cases in which an improvement in performance was observed. In total we considered 20 different cases, which corresponds to the addition of p = 1%, 2% . . . 20% additional virtual edges. The best result for each document length is highlighted.", "TABLE II. Statistics of performance obtained with FastText for different text lengths. Additional results considering other text lengths are shown in the Supplementary Information. Γ0 is the the accuracy rate obtained with the traditional co-occurrence model and max Γ+ is the highest accuracy rate considering different number of additional virtual edges. 〈Γ+ − Γ0〉 is the average absolute improvement in performance, 〈Γ+/Γ0〉 is the average relative improvement in performance and N+ is the total number of cases in which an improvement in performance was observed. In total we considered 20 different cases, which corresponds to the addition of p = 1%, 2% . . . 20% additional virtual edges. The best result for each document length is highlighted.", "TABLE III. Performance analysis of the adopted framework when considering stopwords in the construction of the networks. Only the best results obtained across all considered classifiers are shown. In this case, all optimized results were obtained with SVM. Γ0 corresponds to the accuracy obtained with no virtual edges and max Γ+ is the best accuracy rate obtained when including virtual edges. For each text length, the highest accuracy rate is highlighted. A full list of results for each classifier is available in the Supplementary Information.", "TABLE IV. Comparison between the best results obtained via global and local thresholding. For each text length and embedding method, we show max Γ (L)", "TABLE V. Summary of best results obtained in this paper. 
For each document length we show the highest accuracy rate obtained, the relative gain obtained with the proposed approach and the embedding method yielding the highest accuracy rate: GloVe (GL), Word2Vec (W2V) or FastText (FT). All the results below were obtained when stopwords were used and the SVM was used as classification method." ], "file": [ "6-Figure1-1.png", "11-Figure2-1.png", "12-Figure3-1.png", "13-Figure4-1.png", "14-TableI-1.png", "15-TableII-1.png", "16-TableIII-1.png", "17-TableIV-1.png", "18-TableV-1.png" ] }
2004.03744
e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations
The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning. However, the automatic way in which SNLI-VE has been assembled (via combining parts of two related datasets) gives rise to a large number of errors in the labels of this corpus. In this paper, we first present a data collection effort to correct the class with the highest error rate in SNLI-VE. Secondly, we re-evaluate an existing model on the corrected corpus, which we call SNLI-VE-2.0, and provide a quantitative comparison with its performance on the non-corrected corpus. Thirdly, we introduce e-SNLI-VE-2.0, which appends human-written natural language explanations to SNLI-VE-2.0. Finally, we train models that learn from these explanations at training time, and output such explanations at testing time.
{ "section_name": [ "Introduction", "SNLI-VE-2.0", "SNLI-VE-2.0 ::: Re-annotation details", "SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment", "SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Model.", "SNLI-VE-2.0 ::: Re-evaluation of Visual-Textual Entailment ::: Results.", "Visual-Textual Entailment with Natural Language Explanations", "Visual-Textual Entailment with Natural Language Explanations ::: e-SNLI-VE-2.0", "Visual-Textual Entailment with Natural Language Explanations ::: Collecting Explanations", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model.", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Loss.", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Model selection.", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Predict and Explain ::: Results.", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model.", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Model selection.", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Explain Then Predict ::: Results.", "Visual-Textual Entailment with Natural Language Explanations ::: VTE Models with Natural Language Explanations ::: Qualitative Analysis of Generated Explanations", "Conclusion", "Conclusion ::: Acknowledgements.", "Appendix ::: Statistics of e-SNLI-VE-2.0", "Appendix ::: Details of the Mechanical Turk Task", "Appendix ::: Ambiguous Examples from SNLI-VE" ], "paragraphs": [ [ "Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people.", "Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). 
As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\\sim }31\\%$ errors in this class, and ${\\sim }1\\%$ for the contradiction and entailment classes.", "Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs.", "In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0.", "Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time." ], [ "The goal of VTE is to determine if a textual hypothesis $H_{text}$ can be concluded, given the information in a premise image $P_{image}$ BIBREF1. There are three possible labels:", "Entailment: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is true.", "Contradiction: if there is enough evidence in $P_{image}$ to conclude that $H_{text}$ is false.", "Neutral: if neither of the earlier two are true.", "The SNLI-VE dataset proposed by Xie BIBREF1 is the combination of Flickr30k, a popular image dataset for image captioning BIBREF2 and SNLI, an influential dataset for natural language inference BIBREF0. Textual premises from SNLI are replaced with images from Flickr30k, which is possible, as these premises were originally collected as captions of these images (see Figure FIGREF3).", "However, in practice, a sensible proportion of labels are wrong due to the additional information contained in images. This mostly affects neutral pairs, since images may contain the necessary information to ground a hypothesis for which a simple premise caption was not sufficient. An example is shown in Figure FIGREF3. Vu BIBREF3 report that the label is wrong for ${\\sim }31\\%$ of neutral examples, based on a random subset of 171 neutral points from the test set. We also annotated 150 random neutral examples from the test set and found a similar percentage of 30.6% errors.", "Our annotations are available at https://github.com/virginie-do/e-SNLI-VE/tree/master/annotations/gt_labels.csv" ], [ "In this work, we only collect new labels for the neutral pairs in the validation and test sets of SNLI-VE. 
While the procedure of re-annotation is generic, we limit our re-annotation to these splits as a first step to verify the difference in performance that current models have when evaluated on the corrected test set as well as the effect of model selection on the corrected validation set. We leave for future work re-annotation of the training set, which would likely lead to training better VTE models. We also chose not to re-annotate entailment and contradiction classes, as their error rates are much lower ($<$1% as reported by Vu BIBREF3).", "The main question that we want our dataset to answer is: “What is the relationship between the image premise and the sentence hypothesis?”. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure FIGREF8, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. The collected explanations will be presented in more detail in Section SECREF20, as we focus here on the label correction. We point out that it is likely that requiring an explanation at the same time as requiring a label has a positive effect on the correctness of the label, since having to justify in writing the picked label may make workers pay an increased attention. Moreover, we implemented additional quality control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations into the task for verification BIBREF7, and (c) restricting to workers with at least 90% previous approval rate.", "First, we noticed that some instances in SNLI-VE are ambiguous. We show some examples in Figure FIGREF3 and in Appendix SECREF43. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54% of the examples, exactly two authors agreed on 45%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:", "mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., “people enjoy talking”, “angry people”, “sad woman”. Even when the face is seen, it may be subjective to infer an emotion from a static image (see Figure FIGREF44 in Appendix SECREF43).", "personal taste, e.g., “the sign is ugly”.", "lack of consensus on terms such as “many people” or “crowded”.", "To account for the ambiguity that the neutral labels seem to present, we considered that an image-sentence pair is too ambiguous and not suitable for a well-defined visual-textual entailment task when three different labels were assigned by the three workers. Hence, we removed these examples from the validation (5.2%) and test (5.5%) sets.", "To ensure that our workers are correctly performing the task, we randomly inserted trusted pairs, i.e., pairs among the 54% on which all three authors agreed on the label. For each set of 10 pairs presented to a worker, one trusted pair was introduced at a random location, so that the worker, while being told that there is such a test pair, cannot figure out which one it is. Via an in-browser check, we only allow workers to submit their answers for each set of 10 instances only if the trusted pair was correctly labelled. 
Other in-browser checks were done for the collection of explanations, as we will describe in Section SECREF20. More details about the participants and design of the Mechanical Turk task can be found in Appendix SECREF41.", "After collecting new labels for the neutral instances in the validation and testing sets, we randomly selected and annotated 150 instances from the validation set that were neutral in SNLI-VE. Based on this sample, the error rate went down from 31% to 12% in SNLI-VE-2.0. Looking at the 18 instances where we disagreed with the label assigned by MTurk workers, we noticed that 12 were due to ambiguity in the examples, and 6 were due to workers' errors. Further investigation into potentially eliminating ambiguous instances would likely be beneficial. However, we leave it as future work, and we proceed in this work with using our corrected labels, since our error rate is significantly lower than that of the original SNLI-VE.", "Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class." ], [ "Since we decreased the error rate of labels in the validation and test set, we are interested in the performance of a VTE model when using the corrected sets." ], [ "To tackle SNLI-VE, Xie BIBREF1 used EVE (for “Explainable Visual Entailment\"), a modified version of the BUTD architecture, the winner of the Visual Question Answering (VQA) challenge in 2017 BIBREF5. Since the EVE implementation was not available at the time of this work, we used the original BUTD architecture, with the same hyperparameters as reported in BIBREF1.", "BUTD contains an image processing module and a text processing module. The image processing module encodes each image region proposed by FasterRCNN BIBREF8 into a feature vector using a bottom-up attention mechanism. In the text processing module, the text hypothesis is encoded into a fixed-length vector, which is the last output of a recurrent neural network with 512-GRU units BIBREF9. To input each token into the recurrent network, we use the pretrained GloVe vectors BIBREF10. Finally, a top-down attention mechanism is used between the hypothesis vector and each of the image region vectors to obtain an attention weight for each region. The weighted sum of these image region vectors is then fused with the text hypothesis vector. The multimodal fusion is fed to a multilayer perceptron (MLP) with tanh activations and a final softmax layer to classify the image-sentence relation as entailment, contradiction, or neutral.", "Using the implementation from https://github.com/claudiogreco/coling18-gte.", "We use the original training set from SNLI-VE. 
To see the impact of correcting the validation and test sets, we do the following three experiments:", "model selection as well as testing are done on the original uncorrected SNLI-VE.", "model selection is done on the uncorrected SNLI-VE validation set, while testing is done on the corrected SNLI-VE-2.0 test set.", "model selection as well as testing are done on the corrected SNLI-VE-2.0.", "Models are trained with cross-entropy loss optimized by the Adam optimizer BIBREF11 with batch size 64. The maximum number of training epochs is set to 100, with early stopping when no improvement is observed on validation accuracy for 3 epochs. The final model checkpoint selected for testing is the one with the highest validation accuracy." ], [ "The results of the three experiments enumerated above are reported in Table TABREF18. Surprisingly, we obtained an accuracy of 73.02% on SNLI-VE using BUTD, which is better than the 71.16% reported by Xie BIBREF1 for the EVE system which meant to be an improvement over BUTD. It is also better than their reproduction of BUTD, which gave 68.90%.", "The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant.", "Finally, we recall that the training set has not been re-annotated, and hence approximately 31% image-sentence pairs are wrongly labelled as neutral, which likely affects the performance of the model." ], [ "In this work, we also introduce e-SNLI-VE-2.0, a dataset combining SNLI-VE-2.0 with human-written explanations from e-SNLI BIBREF6, which were originally collected to support textual entailment. We replace the explanations for the neutral pairs in the validation and test sets with new ones collected at the same time as the new labels. We extend a current VTE model with an explanation module able to learn from these explanations at training time and generate an explanation for each predicted label at testing time." ], [ "e-SNLI BIBREF6 is an extension of the SNLI corpus with human-annotated natural language explanations for the ground-truth labels. The authors use the explanations to train models to also generate natural language justifications for their predictions. They collected one explanation for each instance in the training set of SNLI and three explanations for each instance in the validation and testing sets.", "We randomly selected 100 image-sentence pairs in the validation set of SNLI-VE and their corresponding explanations in e-SNLI and examined how relevant these explanations are for the VTE task. More precisely, we say that an explanation is relevant if it brings information that justifies the relationship between the image and the sentence. We restricted the count to correctly labelled inputs and found that 57% explanations were relevant. For example, the explanation for entailment in Figure FIGREF21 (“Cooking in his apartment is cooking”) was counted as irrelevant in our statistics, because it would not be the best explanation for an image-sentence pair, even though it is coherent with the textual pair. 
We investigate whether these explanations improve a VTE model when enhanced with a component that can process explanations at train time and output them at test time.", "To form e-SNLI-VE-2.0, we append to SNLI-VE-2.0 the explanations from e-SNLI for all except the neutral pairs in the validation and test sets of SNLI-VE, which we replace with newly crowdsourced explanations collected at the same time as the labels for these splits (see Figure FIGREF21). Statistics of e-SNLI-VE-2.0 are shown in Appendix SECREF39, Table TABREF40." ], [ "As mentioned before, in order to submit the annotation of an image-sentence pair, three steps must be completed: workers must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. The last two steps thus follow the quality control of crowd-sourced explanations introduced by Camburu BIBREF6. We also ensured that workers do not simply use a copy of the given hypothesis as explanation. We ensured all the above via in-browser checks before workers' submission. An example of collected explanations is given in Figure FIGREF21.", "To check the success of our crowdsourcing, we manually assessed the relevance of explanations among a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k$/$n$ when $k$ required attributes were given in an explanation out of $n$. We report an 83.5% relevance of explanations from workers. We note that, since our explanations are VTE-specific, they were phrased differently from the ones in e-SNLI, with more specific mentions to the images (e.g., “There is no labcoat in the picture, just a man wearing a blue shirt.”, “There are no apples or oranges shown in the picture, only bananas.”). Therefore, it would likely be beneficial to collect new explanations for all SNLI-VE-2.0 (not only for the neutral pairs in the validation and test sets) such that models can learn to output convincing explanations for the task at hand. However, we leave this as future work, and we show in this work the results that one obtains when using the explanations from e-SNLI-VE-2.0." ], [ "This section presents two VTE models that generate natural language explanations for their own decisions. We name them PaE-BUTD-VE and EtP-BUTD-VE, where PaE (resp. EtP) is for PredictAndExplain (resp. ExplainThenPredict), two models with similar principles introduced by Camburu BIBREF6. The first system learns to generate an explanation conditioned on the image premise, textual hypothesis, and predicted label. In contrast, the second system learns to first generate an explanation conditioned on the image premise and textual hypothesis, and subsequently makes a prediction solely based on the explanation." ], [ "PaE-BUTD-VE is a system for solving VTE and generating natural language explanations for the predicted labels. The explanations are conditioned on the image premise, the text hypothesis, and the predicted label (ground-truth label at train time), as shown in Figure FIGREF24." ], [ "As described in Section SECREF12, in the BUTD model, the hypothesis vector and the image vector were fused in a fixed-size feature vector f. The vector f was then given as input to an MLP which outputs a probability distribution over the three labels. In PaE-BUTD-VE, in addition to the classification layer, we add a 512-LSTM BIBREF12 decoder to generate an explanation. The decoder takes the feature vector f as initial state. 
Following Camburu BIBREF6, we prepend the label as a token at the beginning of the explanation to condition the explanation on the label. The ground truth label is provided at training time, whereas the predicted label is given at test time.", "At test time, we use beam search with a beam width of 3 to decode explanations. For memory and time reduction, we replaced words that appeared less than 15 times among explanations with “#UNK#”. This strategy reduces the output vocabulary size to approximately 8.6k words." ], [ "The training loss is a weighted combination of the classification loss and the explanation loss, both computed using softmax cross entropy: $\\mathcal {L} = \\alpha \\mathcal {L}_{label} + (1-\\alpha ) \\mathcal {L}_{explanation} \\; \\textrm {;} \\; \\alpha \\in [0,1]$." ], [ "In this experiment, we are first interested in examining if a neural network can generate explanations at no cost for label accuracy. Therefore, only balanced accuracy on label is used for the model selection criterion. However, future work can investigate other selection criteria involving a combination between the label and explanation performances. We performed hyperparameter search on $\\alpha $, considering values between 0.2 and 0.8 with a step of 0.2. We found $\\alpha =0.4$ to produce the best validation balanced accuracy of 72.81%, while BUTD trained without explanations yielded a similar 72.58% validation balanced accuracy." ], [ "As summarised in Table TABREF30, we obtain a test balanced accuracy for PaE-BUTD-VE of 73%, while the same model trained without explanations obtains 72.52%. This is encouraging, since it shows that one can obtain additional natural language explanations without sacrificing performance (and eventually even improving the label performance, however, future work is needed to conclude whether the difference $0.48\\%$ improvement in performance is statistically significant).", "Camburu BIBREF6 mentioned that the BLEU score was not an appropriate measure for the quality of explanations and suggested human evaluation instead. We therefore manually scored the relevance of 100 explanations that were generated when the model predicted correct labels. We found that only 20% of explanations were relevant. We highlight that the relevance of explanations is in terms of whether the explanation reflects ground-truth reasons supporting the correct label. This is not to be confused with whether an explanation is correctly illustrating the inner working of the model, which is left as future work. It is also important to note that on a similar experimental setting, Camburu report as low as 34.68% correct explanations, training with explanations that were actually collected for their task. Lastly, the model selection criterion at validation time was the prediction balanced accuracy, which may contribute to the low quality of explanations. While we show that adding an explanation module does not harm prediction performance, more work is necessary to get models that output trustable explanations." ], [ "When assigning a label, an explanation is naturally part of the decision-making process. This motivates the design of a system that explains itself before deciding on a label, called EtP-BUTD-VE. For this system, a first neural network is trained to generate an explanation given an image-sentence input. Separately, a second neural network, called ExplToLabel-VE, is trained to predict a label from an explanation (see Figure FIGREF32)." 
], [ "For the first network, we set $\alpha =0$ in the training loss of the PaE-BUTD-VE model to obtain a system that only learns to generate an explanation from the image-sentence input, without label prediction. Hence, in this setting, no label is prepended before the explanation.", "For the ExplToLabel-VE model, we use a 512-LSTM followed by an MLP with three 512-layers and ReLU activation, and softmax activation to classify the explanation between entailment, contradiction, and neutral." ], [ "For ExplToLabel-VE, the best model is selected on balanced accuracy at validation time. For EtP-BUTD-VE, perplexity is used to select the best model parameters at validation time. It is computed between the explanations produced by the LSTM and ground truth explanations from the validation set." ], [ "When we train ExplToLabel-VE on e-SNLI-VE-2.0, we obtain a balanced accuracy of 90.55% on the test set.", "As reported in Table TABREF30, the overall EtP-BUTD-VE system achieves 69.40% balanced accuracy on the test set of e-SNLI-VE-2.0, which is a 3% decrease from the non-explanatory BUTD counterpart (72.52%). However, by setting $\alpha $ to zero and selecting the model that gives the best perplexity per word at validation, the quality of explanation significantly increased, with 35% relevance, based on manual evaluation. Thus, in our model, generating better explanations involves a small sacrifice in label prediction accuracy, implying a trade-off between explanation generation and accuracy.", "We note that there is room for improvement in our explanation generation method. For example, one can implement an attention mechanism similar to Xu BIBREF13, so that each generated word relates to a relevant part of the multimodal feature representation." ], [ "We complement our quantitative results with a qualitative analysis of the explanations generated by our enhanced VTE systems. In Figures FIGREF36 and FIGREF37, we present examples of the predicted labels and generated explanations.", "Figure FIGREF36 shows an example where the EtP-BUTD-VE model produces both a correct label and a relevant explanation. The label is contradiction, because in the image, the students are playing with a soccer ball and not a basketball, thus contradicting the text hypothesis. Given the composition of the generated sentence (“Students cannot be playing soccer and baseball at the same time.”), ExplToLabel-VE was able to detect a contradiction in the image-sentence input. In comparison, the explanation from e-SNLI-VE-2.0 is not correct, even if it was valid for e-SNLI when the text premise was given. This emphasizes the difficulty that we are facing with generating proper explanations when training on a noisy dataset.", "Even when the generated explanations are irrelevant, we noticed that they are on-topic and that most of the time the mistakes come from repetitions of certain sub-phrases. For example, in Figure FIGREF37, PaE-BUTD-VE predicts the label neutral, which is correct, but the explanation contains an erroneous repetition of the n-gram “are in a car”. However, it appears that the system learns to generate a sentence in the form “Just because ...doesn't mean ...”, which is frequently found for the justification of neutral pairs in the training set. The explanation generated by EtP-BUTD-VE adopts the same structure, and the ExplToLabel-VE component correctly classifies the instance as neutral. 
However, even if the explanation is semantically correct, it is not relevant for the input and fails to explain the classification." ], [ "In this paper, we first presented SNLI-VE-2.0, which corrects the neutral instances in the validation and test sets of SNLI-VE. Secondly, we re-evaluated an existing model on the corrected sets in order to update the estimate of its performance on this task. Thirdly, we introduced e-SNLI-VE-2.0, a dataset which extends SNLI-VE-2.0 with natural language explanations. Finally, we trained two types of models that learn from these explanations at training time, and output such explanations at test time, as a stepping stone in explainable artificial intelligence. Our work is a jumping-off point for both the identification and correction of SNLI-VE, as well as in the extension to explainable VTE. We hope that the community will build on our findings to create more robust as well as explainable multimodal systems." ], [ "This work was supported by the Oxford Internet Institute, a JP Morgan PhD Fellowship 2019-2020, an Oxford-DeepMind Graduate Scholarship, the Alan Turing Institute under the EPSRC grant EP/N510129/1, and the AXA Research Fund, as well as DFG-EXC-Nummer 2064/1-Projektnummer 390727645 and the ERC under the Horizon 2020 program (grant agreement No. 853489)." ], [ "e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40.", "Including text hypotheses and explanations." ], [ "We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location.", "Each assignment consisted of a set of 10 image-sentence pairs. For each pair, the participant was asked to (a) choose a label, (b) highlight words in the sentence that led to their decision, and (c) explain their decision in a comprehensive and concise manner, using a subset of the words that they highlighted. The instructions are shown in Figure FIGREF42. Workers were also guided with three annotated examples, one for each label.", "For each assignment of 10 questions, one trusted annotation with gold standard label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers. An example of question is shown in Figure FIGREF8 in the core paper." ], [ "Some examples in SNLI-VE were ambiguous and could find correct justifications for incompatible labels, as shown in Figures FIGREF44, FIGREF45, and FIGREF46." ] ] }
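The PaE-BUTD-VE training loss described above, $\mathcal {L} = \alpha \mathcal {L}_{label} + (1-\alpha ) \mathcal {L}_{explanation}$, can be written as a short PyTorch sketch; the default $\alpha =0.4$ follows the reported model selection, while the tensor shapes and the padding index are assumptions for illustration, not the authors' code.

import torch
import torch.nn.functional as F

def pae_loss(label_logits, label_targets, expl_logits, expl_targets,
             alpha=0.4, pad_index=0):
    # Classification term: softmax cross-entropy over {entailment, neutral, contradiction}.
    label_loss = F.cross_entropy(label_logits, label_targets)
    # Explanation term: token-level softmax cross-entropy, ignoring padded positions.
    expl_loss = F.cross_entropy(
        expl_logits.reshape(-1, expl_logits.size(-1)),
        expl_targets.reshape(-1),
        ignore_index=pad_index,
    )
    return alpha * label_loss + (1.0 - alpha) * expl_loss

# Dummy shapes: batch of 2, 3 labels, 5-token explanations over a 100-word vocabulary.
print(pae_loss(torch.randn(2, 3), torch.tensor([0, 2]),
               torch.randn(2, 5, 100), torch.randint(1, 100, (2, 5))))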
{ "question": [ "Is model explanation output evaluated, what metric was used?", "How many annotators are used to write natural language explanations to SNLI-VE-2.0?", "How many natural language explanations are human-written?", "How much is performance difference of existing model between original and corrected corpus?", "What is the class with highest error rate in SNLI-VE?" ], "question_id": [ "f33236ebd6f5a9ccb9b9dbf05ac17c3724f93f91", "66bf0d61ffc321f15e7347aaed191223f4ce4b4a", "5dfa59c116e0ceb428efd99bab19731aa3df4bbd", "0c557b408183630d1c6c325b5fb9ff1573661290", "a08b5018943d4428f067c08077bfff1af3de9703" ], "nlp_background": [ "zero", "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "balanced accuracy, i.e., the average of the three accuracies on each class" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Finally, we note that only about 62% of the originally neutral pairs remain neutral, while 21% become contradiction and 17% entailment pairs. Therefore, we are now facing an imbalance between the neutral, entailment, and contradiction instances in the validation and testing sets of SNLI-VE-2.0. The neutral class becomes underrepresented and the label distributions in the corrected validation and testing sets both become E / N / C: 39% / 20% / 41%. To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class." ], "highlighted_evidence": [ "To account for this, we compute the balanced accuracy, i.e., the average of the three accuracies on each class." ] } ], "annotation_id": [ "94b90e9041b91232b87bfc13b5fa5ff8f7feb0b2" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "2,060 workers" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. We required the workers to have a previous approval rate above 90%. No restriction was put on the workers' location." ], "highlighted_evidence": [ "We used Amazon Mechanical Turk (MTurk) to collect new labels and explanations for SNLI-VE. 2,060 workers participated in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54." ] } ], "annotation_id": [ "7069fb67777a7ce17a963cbbe4809993e8c99322" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Totally 6980 validation and test image-sentence pairs have been corrected.", "evidence": [ "e-SNLI-VE-2.0 is the combination of SNLI-VE-2.0 with explanations from either e-SNLI or our crowdsourced annotations where applicable. The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40.", "FLOAT SELECTED: Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). 
Image-sentence pairs labelled as neutral in the training set have not been corrected." ], "highlighted_evidence": [ "The statistics of e-SNLI-VE-2.0 are shown in Table TABREF40.", "FLOAT SELECTED: Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). Image-sentence pairs labelled as neutral in the training set have not been corrected." ] } ], "annotation_id": [ "a70ac2ea8449767510dc5bb9dfa1caf4a8fa11e2" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness. Finally, when we run the training loop again, this time doing the model selection on the corrected validation set from SNLI-VE-2.0, we obtain a slightly worse performance of 72.52%, although the difference is not clearly significant." ], "highlighted_evidence": [ "The same BUTD model that achieves 73.02% on the uncorrected SNLI-VE test set, achieves 73.18% balanced accuracy when tested on the corrected test set from SNLI-VE-2.0. Hence, for this model, we do not notice a significant difference in performance. This could be due to randomness." ] } ], "annotation_id": [ "bb7949af7c9d62e0feda5bbbaa7283147e88306b" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "neutral class" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\\sim }31\\%$ errors in this class, and ${\\sim }1\\%$ for the contradiction and entailment classes." ], "highlighted_evidence": [ "As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\\sim }31\\%$ errors in this class, and ${\\sim }1\\%$ for the contradiction and entailment classes." ] } ], "annotation_id": [ "0be4666fdfe22ede55d5468e3beb6e478ec60b2f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
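The balanced accuracy used as the evaluation metric in the evidence above (the average of the per-class accuracies over entailment, neutral and contradiction) amounts to a few lines of Python; the function and label names below are illustrative, not the authors' evaluation script.

from collections import defaultdict

def balanced_accuracy(y_true, y_pred, classes=("entailment", "neutral", "contradiction")):
    # Average of per-class recalls, so the under-represented neutral class
    # (E / N / C = 39% / 20% / 41% in SNLI-VE-2.0) is not drowned out by the others.
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    per_class = [correct[c] / total[c] for c in classes if total[c] > 0]
    return sum(per_class) / len(per_class)

# Example: perfect on entailment, half right on neutral -> (1.0 + 0.5) / 2 = 0.75
print(balanced_accuracy(["entailment", "neutral", "neutral"],
                        ["entailment", "neutral", "entailment"]))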
{ "caption": [ "Figure 1. Examples from SNLI-VE-2.0. (a) In red, the neutral label from SNLI-VE is wrong, since the picture clearly shows that the crowd is outdoors. We corrected it to entailment in SNLIVE-2.0. (b) In green, an ambiguous instance. There is indeed an American flag in the background but it is very hard to see, hence the ambiguity between neutral and entailment, and even contradiction if one cannot spot it. Further, it is not clear whether “they” implies the whole group or the people visible in the image.", "Figure 2. MTurk annotation screen. (a) The label contradiction is chosen, (b) the evidence words “man”, “violin”, and “crowd” are highlighted, and (c) an explanation is written with these words.", "Table 1. Accuracies obtained with BUTD on SNLI-VE (valoriginal, test-original) and SNLI-VE-2.0 (val-corrected, testcorrected).", "Figure 3. Two image-sentence pairs from e-SNLI-VE-2.0 with (a) at the top, an uninformative explanation from e-SNLI, (b) at the bottom, an explanation collected from our crowdsourcing. We only collected new explanations for the neutral class (along with new labels). The SNLI premise is not included in e-SNLI-VE-2.0.", "Figure 4. PAE-BUTD-VE. The generation of explanation is conditioned on the image premise, textual hypothesis, and predicted label.", "Table 2. Label balanced accuracies and explanation relevance rates of our two explanatory systems on e-SNLI-VE-2.0. Comparison with their counterparts in e-SNLI [3]. Without the explanation component, the balanced accuracy on SNLI-VE-2.0 is 72.52%", "Figure 5. Architecture of ETP-BUTD-VE. Firstly, an explanation is generated, secondly the label is predicted from the explanation. The two models (in separate dashed rectangles) are not trained jointly.", "Figure 6. Both systems PAE-BUTD-VE and ETP-BUTD-VE predict the correct label, but only ETP-BUTD-VE generates a relevant explanation.", "Figure 7. Both systems PAE-BUTD-VE and ETP-BUTD-VE predict the correct label, but generate irrelevant explanations.", "Figure 8. Instructions given to workers on Mechanical Turk", "Table 3. Summary of e-SNLI-VE-2.0 (= SNLI-VE-2.0 + explanations). Image-sentence pairs labelled as neutral in the training set have not been corrected.", "Figure 9. Ambiguous SNLI-VE instance. Some may argue that the woman’s face betrays sadness, but the image is not quite clear. Secondly, even with better resolution, facial expression may not be a strong enough evidence to support the hypothesis about the woman’s emotional state.", "Figure 10. Ambiguous SNLI-VE instance. The lack of consensus is on whether the man is “leering” at the woman. While it is likely the case, this interpretation in favour of entailment is subjective, and a cautious annotator would prefer to label the instance as neutral.", "Figure 11. Ambiguous SNLI-VE instance. Some may argue that it is impossible to certify from the image that the children are kindergarten students, and label the instance as neutral. On the other hand, the furniture may be considered as typical of kindergarten, which would be sufficient evidence for entailment." ], "file": [ "2-Figure1-1.png", "2-Figure2-1.png", "4-Table1-1.png", "5-Figure3-1.png", "5-Figure4-1.png", "6-Table2-1.png", "6-Figure5-1.png", "7-Figure6-1.png", "7-Figure7-1.png", "8-Figure8-1.png", "8-Table3-1.png", "8-Figure9-1.png", "8-Figure10-1.png", "9-Figure11-1.png" ] }
2001.09332
An Analysis of Word2Vec for the Italian Language
Word representation is fundamental in NLP tasks, because it is precisely the encoding of semantic closeness between words that makes it possible to teach a machine to understand text. Despite the spread of word embedding concepts, achievements in linguistic contexts other than English are still few. In this work, by analysing the semantic capacity of the Word2Vec algorithm, an embedding for the Italian language is produced. Parameter settings such as the number of epochs, the size of the context window and the number of negatively backpropagated samples are explored.
{ "section_name": [ "Introduction", "Word2Vec", "Word2Vec ::: Sampling rate", "Word2Vec ::: Negative sampling", "Implementation details", "Results", "Results ::: Analysis of the various models", "Results ::: Comparison with other models", "Conclusion" ], "paragraphs": [ [ "In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding. The simplest approach is the one-hot encoding, where each word is represented by a sparse vector with dimension equal to the vocabulary size. In addition to the storage need, the main problem of this representation is that any concept of word similarity is completely ignored (each vector is orthogonal and equidistant from each other). On the contrary, the understanding of natural language cannot be separated from the semantic knowledge of words, which conditions a different closeness between them. Indeed, the semantic representation of words is the basic problem of Natural Language Processing (NLP). Therefore, there is a necessary need to code words in a space that is linked to their meaning, in order to facilitate a machine in potential task of “understanding\" it. In particular, starting from the seminal work BIBREF0, words are usually represented as dense distributed vectors that preserve their uniqueness but, at the same time, are able to encode the similarities.", "These word representations are called Word Embeddings since the words (points in a space of vocabulary size) are mapped in an embedding space of lower dimension. Supported by the distributional hypothesis BIBREF1 BIBREF2, which states that a word can be semantically characterized based on its context (i.e. the words that surround it in the sentence), in recent years many word embedding representations have been proposed (a fairly complete and updated review can be found in BIBREF3 and BIBREF4). These methods can be roughly categorized into two main classes: prediction-based models and count-based models. The former is generally linked to work on Neural Network Language Models (NNLM) and use a training algorithm that predicts the word given its local context, the latter leverage word-context statistics and co-occurrence counts in an entire corpus. The main prediction-based and count-based models are respectively Word2Vec BIBREF5 (W2V) and GloVe BIBREF6.", "Despite the widespread use of these concepts BIBREF7 BIBREF8, few contributions exist regarding the development of a W2V that is not in English. In particular, no detailed analysis on an Italian W2V seems to be present in the literature, except for BIBREF9 and BIBREF10. However, both seem to leave out some elements of fundamental interest in the learning of the neural network, in particular relating to the number of epochs performed during learning, reducing the importance that it may have on the final result. In BIBREF9, this for example leads to the simplistic conclusion that (being able to organize with more freedom in space) the more space is given to the vectors, the better the results may be. However, the problem in complex structures is that large embedding spaces can make training too difficult.", "In this work, by setting the size of the embedding to a commonly used average value, various parameters are analysed as the number of learning epochs changes, depending on the window sizes and the negatively backpropagated samples." ], [ "The W2V structure consists of a simple two-level neural network (Figure FIGREF1) with one-hot vectors representing words at the input. 
It can be trained in two different modes, algorithmically similar, but different in concept: Continuous Bag-of-Words (CBOW) model and Skip-Gram model. While CBOW tries to predict the target words from the context, Skip-Gram instead aims to determine the context for a given target word. The two different approaches therefore modify only the way in which the inputs and outputs are to be managed, but in any case, the network does not change, and the training always takes place between single pairs of words (placed as one-hot in input and output).", "The text is in fact divided into sentences, and for each word of a given sentence a window of words is taken from the right and from the left to define the context. The central word is coupled with each of the words forming the set of pairs for training. Depending on the fact that the central word represents the output or the input in training pairs, the CBOW and Skip-gram models are obtained respectively.", "Regardless of whether W2V is trained to predict the context or the target word, it is used as a word embedding in a substantially different manner from the one for which it has been trained. In particular, the second matrix is totally discarded during use, since the only thing relevant to the representation is the space of the vectors generated in the intermediate level (embedding space)." ], [ "The common words (such as “the\", “of\", etc.) carry very little information on the target word with which they are coupled, and through backpropagation they tend to have extremely small representative vectors in the embedding space. To solve both these problems the W2V algorithm implements a particular “subsampling\" BIBREF11, which acts by eliminating some words from certain sentences. Note that the elimination of a word directly from the text means that it no longer appears in the context of any of the words of the sentence and, at the same time, a number of pairs equal to (at most) twice the size of the window relating to the deleted word will also disappear from the training set.", "In practice, each word is associated with a sort of “keeping probability\" and, when you meet that word, if this value is greater than a randomly generated value then the word will not be discarded from the text. The W2V implementation assigns this “probability\" to the generic word $w_i$ through the formula:", "where $f(w_i)$ is the relative frequency of the word $w_i$ (namely $count(w_i)/total$), while $s$ is a sample value, typically set between $10^{-3}$ and $10^{-5}$." ], [ "Working with one-hot pairs of words means that the size of the network must be the same at input and output, and must be equal to the size of the vocabulary. So, although very simple, the network has a considerable number of parameters to train, which lead to an excessive computational cost if we are supposed to backpropagate all the elements of the one-hot vector in output.", "The “negative sampling\" technique BIBREF11 tries to solve this problem by modifying only a small percentage of the net weights every time. In practice, for each pair of words in the training set, the loss function is calculated only for the value 1 and for a few values 0 of the one-hot vector of the desired output. The computational cost is therefore reduced by choosing to backpropagate only $K$ words “negative\" and one positive, instead of the entire vocabulary. 
Typical values for negative sampling (the number of negative samples that will be backpropagated and to which therefore the only positive value will always be added), range from 2 to 20, depending on the size of the dataset.", "The probability of selecting a negative word to backpropagate depends on its frequency, in particular through the formula:", "Negative samples are then selected by choosing a sort of “unigram distribution\", so that the most frequent words are also the most often backpropagated ones." ], [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\,829\,960$ words divided into $17\,305\,401$ sentences.", "The text was previously preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but not wanting to eliminate the concept of “numerical representation\" linked to certain words, it was also decided to replace every number present in the text with the particular $\langle NUM \rangle $ token; which probably also assumes a better representation in the embedding space (not separating into the various possible values). All the words were then transformed to lowercase (to avoid a double presence), finally producing a vocabulary of $618\,224$ words.", "Note that among the special characters are also included punctuation marks, which therefore do not appear within the vocabulary. However, some of them (`.', `?' and `!') are later removed, as they are used to separate the sentences.", "The Python implementation provided by Gensim was used for training the various embeddings, all with size 300 and sampling parameter ($s$ in Equation DISPLAY_FORM3) set at $0.001$." ], [ "To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\,791$ analogies divided into 19 different categories: 6 related to the “semantic\" macro-area (8915 analogies) and 13 to the “syntactic\" one (10876 analogies). All the analogies are composed of two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen\"); where $b^{*}$ is the word to be guessed (“queen\"), $b$ is the word coupled to it (“king\"), $a$ is the word for the components to be eliminated (“man\"), and $a^{*}$ is the word for the components to be added (“woman\").", "The determination of the correct response was obtained both through the classical additive cosine distance (3COSADD) BIBREF5:", "and through the multiplicative cosine distance (3COSMUL) BIBREF12:", "where $\epsilon =10^{-6}$ and $\cos (x, y) = \frac{x \cdot y}{\left\Vert x\right\Vert \left\Vert y\right\Vert }$. The extremely low value chosen for the $\epsilon $ is due to the desire to minimize as much as possible its impact on performance, as during the various testing phases we noticed a strange bound that is still being investigated. As usual, moreover, the representative vectors of the embedding space are previously normalized for the execution of the various tests." ], [ "We first analysed 6 different implementations of the Skip-gram model, each one trained for 20 epochs. 
Table TABREF10 shows the accuracy values (only on possible analogies) at the 20th epoch for the six models both using 3COSADD and 3COSMUL. It is interesting to note that the 3COSADD total metric, respect to 3COSMUL, seems to have slightly better results in the two extreme cases of limited learning (W5N5 and W10N20) and under the semantic profile. However, we should keep in mind that the semantic profile is the one best captured by the network in both cases, which is probably due to the nature of the database (mainly composed of articles and news that principally use an impersonal language). In any case, the improvements that are obtained under the syntactic profile lead to the 3COSMUL metric obtaining better overall results.", "Figure FIGREF11 shows the trends of the total accuracy at different epochs for the various models using 3COSMUL (the trend obtained with 3COSADD is very similar). Here we can see how the use of high negative sampling can worsen performance, even causing the network to oscillate (W5N20) in order to better adapt to all the data. The choice of the negative sampling to be used should therefore be strongly linked to the choice of the window size as well as to the number of training epochs.", "Continuing the training of the two worst models up to the 50th epoch, it is observed (Table TABREF12) that they are still able to reach the performances of the other models. The W10N20 model at the 50th epoch even proves to be better than all the other previous models, becoming the reference model for subsequent comparisons. As the various epochs change (Figure FIGREF13.a) it appears to have the same oscillatory pattern observed previously, albeit with only one oscillation given the greater window size. This model is available at: https://mlunicampania.gitlab.io/italian-word2vec/.", "Various tests were also conducted on CBOW models, which however proved to be in general significantly lower than Skip-gram models. Figure FIGREF13.b shows, for example, the accuracy trend for a CBOW model with a window equal to 10 and negative sampling equal to 20, which on 50 epochs reaches only $37.20\\%$ of total accuracy (with 3COSMUL metric)." ], [ "Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two W2V in Italian present in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies present, and therefore evaluating as an error any analogy that was not executable (as it related to one or more words absent from the vocabulary).", "As it can be seen, regardless of the metric used, our model has significantly better results than the other two models, both overall and within the two macro-areas. Furthermore, the other two models seem to be more subject to the metric used, perhaps due to a stabilization not yet reached for the few training epochs.", "For a complete comparison, both models were also tested considering only the subset of the analogies in common with our model (i.e. eliminating from the test all those analogies that were not executable by one or the other model). Tables TABREF16 and TABREF17 again highlight the marked increase in performance of our model compared to both." ], [ "In this work we have analysed the Word2Vec model for Italian Language obtaining a substantial increase in performance respect to other two models in the literature (and despite the fixed size of the embedding). 
These results, in addition to the number of learning epochs, are probably also due to the data pre-processing phase, carefully executed to perform a complete cleaning of the text and, above all, to substitute the numerical values with a single special token. We have observed that the number of epochs is an important parameter, and increasing it leads our two initially worst models to results that are almost equal to, or even better than, those of the other models.", "Changing the number of epochs, in some configurations, creates an oscillatory trend, which seems to be linked to a particular interaction between the window size and the negative sampling value. In the future, thanks to the collaboration in the Laila project, we intend to expand the dataset by adding more user chats. The objective will be to verify whether the use of less formal language can improve accuracy in the syntactic macro-area." ] ] }
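For reference, the best-performing configuration described above (Skip-gram, window 10, negative sampling 20, 300-dimensional vectors, sub-sampling $s=0.001$, minimum frequency 5, 50 epochs) corresponds roughly to the following Gensim call. This is a sketch under those assumptions; the toy corpus, worker count, and output path are placeholders, not details taken from the paper.

```python
from gensim.models import Word2Vec

# toy stand-in corpus: in practice this is an iterable over the ~17.3M preprocessed sentences,
# lowercased and with every number already replaced by the <NUM> token
sentences = [["questo", "è", "un", "esempio", "con", "<NUM>", "parole"]] * 10

model = Word2Vec(
    sentences=sentences,
    vector_size=300,   # embedding size used throughout ("size" in Gensim < 4)
    window=10,         # best-performing configuration: W10 ...
    negative=20,       # ... N20
    sample=0.001,      # sub-sampling parameter s
    min_count=5,       # words with absolute frequency below 5 were removed
    sg=1,              # Skip-gram (sg=0 would give the CBOW variant also tested)
    epochs=50,         # the W10N20 model was taken at the 50th epoch ("iter" in Gensim < 4)
    workers=4,         # placeholder; not specified in the text
)
model.wv.save("italian_w2v_w10n20.kv")  # hypothetical output path
```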
{ "question": [ "What is the dataset used as input to the Word2Vec algorithm?", "Are the word embeddings tested on a NLP task?", "Are the word embeddings evaluated?", "How big is dataset used to train Word2Vec for the Italian Language?", "How does different parameter settings impact the performance and semantic capacity of resulting model?", "Are the semantic analysis findings for Italian language similar to English language version?", "What dataset is used for training Word2Vec in Italian language?" ], "question_id": [ "9447ec36e397853c04dcb8f67492ca9f944dbd4b", "12c6ca435f4fcd4ad5ea5c0d76d6ebb9d0be9177", "32c149574edf07b1a96d7f6bc49b95081de1abd2", "3de27c81af3030eb2d9de1df5ec9bfacdef281a9", "cc680cb8f45aeece10823a3f8778cf215ccc8af0", "fab4ec639a0ea1e07c547cdef1837c774ee1adb8", "9190c56006ba84bf41246a32a3981d38adaf422c" ], "nlp_background": [ "two", "two", "two", "zero", "zero", "zero", "zero" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "italian", "italian", "italian", "", "", "", "" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Italian Wikipedia and Google News extraction producing final vocabulary of 618224 words", "evidence": [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences.", "The text was previously preprocessed by removing the words whose absolute frequency was less than 5 and eliminating all special characters. Since it is impossible to represent every imaginable numerical value, but not wanting to eliminate the concept of “numerical representation\" linked to certain words, it was also decided to replace every number present in the text with the particular $\\langle NUM \\rangle $ token; which probably also assumes a better representation in the embedding space (not separating into the various possible values). All the words were then transformed to lowercase (to avoid a double presence) finally producing a vocabulary of $618\\,224$ words." ], "highlighted_evidence": [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences.", "All the words were then transformed to lowercase (to avoid a double presence) finally producing a vocabulary of $618\\,224$ words." 
] } ], "annotation_id": [ "707f16cbdcecaaf2438b2eea89bbbde0c2bf24a7" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\\,791$ analogies divided into 19 different categories: 6 related to the “semantic\" macro-area (8915 analogies) and 13 to the “syntactic\" one (10876 analogies). All the analogies are composed by two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen\"); where $b^{*}$ is the word to be guessed (“queen\"), $b$ is the word coupled to it (“king\"), $a$ is the word for the components to be eliminated (“man\"), and $a^{*}$ is the word for the components to be added (“woman\")." ], "highlighted_evidence": [ "To analyse the results we chose to use the test provided by BIBREF10, which consists of $19\\,791$ analogies divided into 19 different categories: 6 related to the “semantic\" macro-area (8915 analogies) and 13 to the “syntactic\" one (10876 analogies). All the analogies are composed by two pairs of words that share a relation, schematized with the equation: $a:a^{*}=b:b^{*}$ (e.g. “man : woman = king : queen\"); where $b^{*}$ is the word to be guessed (“queen\"), $b$ is the word coupled to it (“king\"), $a$ is the word for the components to be eliminated (“man\"), and $a^{*}$ is the word for the components to be added (“woman\")." ] } ], "annotation_id": [ "0c2537b0a6e0a98a8aa8f16f37fe604db25039f0" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two W2V in Italian present in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies present, and therefore evaluating as an error any analogy that was not executable (as it related to one or more words absent from the vocabulary).", "As it can be seen, regardless of the metric used, our model has significantly better results than the other two models, both overall and within the two macro-areas. Furthermore, the other two models seem to be more subject to the metric used, perhaps due to a stabilization not yet reached for the few training epochs." ], "highlighted_evidence": [ "Finally, a comparison was made between the Skip-gram model W10N20 obtained at the 50th epoch and the other two W2V in Italian present in the literature (BIBREF9 and BIBREF10). The first test (Table TABREF15) was performed considering all the analogies present, and therefore evaluating as an error any analogy that was not executable (as it related to one or more words absent from the vocabulary).\n\nAs it can be seen, regardless of the metric used, our model has significantly better results than the other two models, both overall and within the two macro-areas." 
] } ], "annotation_id": [ "c31edf6a48d34aed1af8e1d1ad9c0590e81bf8ae" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "$421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences." ], "highlighted_evidence": [ "The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences." ] } ], "annotation_id": [ "6b0d86450efcf7a1e5c54930fe1a0059721f5fec" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In this work we have analysed the Word2Vec model for Italian Language obtaining a substantial increase in performance respect to other two models in the literature (and despite the fixed size of the embedding). These results, in addition to the number of learning epochs, are probably also due to the different phase of data pre-processing, very carefully excuted in performing a complete cleaning of the text and above all in substituting the numerical values with a single particular token. We have observed that the number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others." ], "highlighted_evidence": [ "We have observed that the number of epochs is an important parameter and its increase leads to results that rank our two worst models almost equal, or even better than others." ] } ], "annotation_id": [ "5e5ade4049facac2ff1b0e51cbb5021f28d0b90f" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "9c3bb13aff045629237781aa1e0cefadf9bc0ae1" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila). The dataset (composed of 2.6 GB of raw text) includes $421\\,829\\,960$ words divided into $17\\,305\\,401$ sentences." 
], "highlighted_evidence": [ "The dataset needed to train the W2V was obtained using the information extracted from a dump of the Italian Wikipedia (dated 2019.04.01), from the main categories of Italian Google News (WORLD, NATION, BUSINESS, TECHNOLOGY, ENTERTAINMENT, SPORTS, SCIENCE, HEALTH) and from some anonymized chats between users and a customer care chatbot (Laila)." ] } ], "annotation_id": [ "26affe9ada758836d0f069da4cb25d48bcee44fb" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1. Representation of Word2Vec model.", "Table 1. Accuracy at the 20th epoch for the 6 Skip-gram models analysed when the W dimension of the window and the N value of negative sampling change.", "Fig. 2. Total accuracy using 3COSMUL at different epochs with negative sampling equal to 5, 10 and 20, where: (a) window is 5 and (b) window is 10.", "Table 2. Accuracy at the 50th epoch for the two worst Skip-gram models.", "Fig. 3. Total accuracy using 3COSMUL up to the 50th epoch for: (a) the two worst Skip-gram models and (b) CBOW model with W = 10 and N = 20", "Table 3. Accuracy evaluated on the total of all the analogies", "Table 5. Accuracy evaluated only on the analogies common to both vocabularies", "Table 4. Accuracy evaluated only on the analogies common to both vocabularies" ], "file": [ "3-Figure1-1.png", "6-Table1-1.png", "6-Figure2-1.png", "7-Table2-1.png", "7-Figure3-1.png", "7-Table3-1.png", "8-Table5-1.png", "8-Table4-1.png" ] }
1804.06506
Improving Character-based Decoding Using Target-Side Morphological Information for Neural Machine Translation
Recently, neural machine translation (NMT) has emerged as a powerful alternative to conventional statistical approaches. However, its performance drops considerably in the presence of morphologically rich languages (MRLs). Neural engines usually fail to tackle the large vocabulary and high out-of-vocabulary (OOV) word rate of MRLs. Therefore, it is not suitable to exploit existing word-based models to translate this set of languages. In this paper, we propose an extension to the state-of-the-art model of Chung et al. (2016), which works at the character level and boosts the decoder with target-side morphological information. In our architecture, an additional morphology table is plugged into the model. Each time the decoder samples from a target vocabulary, the table sends auxiliary signals from the most relevant affixes in order to enrich the decoder's current state and constrain it to provide better predictions. We evaluated our model to translate English into German, Russian, and Turkish as three MRLs and observed significant improvements.
{ "section_name": [ "Introduction", "NMT for MRLs", "Proposed Architecture", "The Embedded Morphology Table", "The Auxiliary Output Channel", "Combining the Extended Output Layer and the Embedded Morphology Table", "Experimental Study", "Experimental Setting", "Experimental Results", "Conclusion and Future Work", "Acknowledgments" ], "paragraphs": [ [ "Morphologically complex words (MCWs) are multi-layer structures which consist of different subunits, each of which carries semantic information and has a specific syntactic role. Table 1 gives a Turkish example to show this type of complexity. This example is a clear indication that word-based models are not suitable to process such complex languages. Accordingly, when translating MRLs, it might not be a good idea to treat words as atomic units as it demands a large vocabulary that imposes extra overhead. Since MCWs can appear in various forms we require a very large vocabulary to $i$ ) cover as many morphological forms and words as we can, and $ii$ ) reduce the number of OOVs. Neural models by their nature are complex, and we do not want to make them more complicated by working with large vocabularies. Furthermore, even if we have quite a large vocabulary set, clearly some words would remain uncovered by that. This means that a large vocabulary not only complicates the entire process, but also does not necessarily mitigate the OOV problem. For these reasons we propose an NMT engine which works at the character level.", "In this paper, we focus on translating into MRLs and issues associated with word formation on the target side. To provide a better translation we do not necessarily need a large target lexicon, as an MCW can be gradually formed during decoding by means of its subunits, similar to the solution proposed in character-based decoding models BIBREF0 . Generating a complex word character-by-character is a better approach compared to word-level sampling, but it has other disadvantages.", "One character can co-occur with another with almost no constraint, but a particular word or morpheme can only collocate with a very limited number of other constituents. Unlike words, characters are not meaning-bearing units and do not preserve syntactic information, so (in the extreme case) the chance of sampling each character by the decoder is almost equal to the others, but this situation is less likely for words. The only constraint that prioritize which character should be sampled is information stored in the decoder, which we believe is insufficient to cope with all ambiguities. Furthermore, when everything is segmented into characters the target sentence with a limited number of words is changed to a very long sequence of characters, which clearly makes it harder for the decoder to remember such a long history. Accordingly, character-based information flows in the decoder may not be as informative as word- or morpheme-based information.", "In the character-based NMT model everything is almost the same as its word-based counterpart except the target vocabulary whose size is considerably reduced from thousands of words to just hundreds of characters. If we consider the decoder as a classifier, it should in principle be able to perform much better over hundreds of classes (characters) rather than thousands (words), but the performance of character-based models is almost the same as or slightly better than their word-based versions. 
This underlines the fact that the character-based decoder is perhaps not fed with sufficient information to provide improved performance compared to word-based models.", "Character-level decoding limits the search space by dramatically reducing the size of the target vocabulary, but at the same time widens the search space by working with characters whose sampling seems to be harder than words. The freedom in selection and sampling of characters can mislead the decoder, which prevents us from taking the maximum advantages of character-level decoding. If we can control the selection process with other constraints, we may obtain further benefit from restricting the vocabulary set, which is the main goal followed in this paper.", "In order to address the aforementioned problems we redesign the neural decoder in three different scenarios. In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models. Section \"Proposed Architecture\" provides more details on our models.", "Together with different findings that will be discussed in the next sections, there are two main contributions in this paper. We redesigned and tuned the NMT framework for translating into MRLs. It is quite challenging to show the impact of external knowledge such as morphological information in neural models especially in the presence of large parallel corpora. However, our models are able to incorporate morphological information into decoding and boost its quality. We inject the decoder with morphological properties of the target language. Furthermore, the novel architecture proposed here is not limited to morphological information alone and is flexible enough to provide other types of information for the decoder." ], [ "There are several models for NMT of MRLs which are designed to deal with morphological complexities. garcia2016factored and sennrich-haddow:2016:WMT adapted the factored machine translation approach to neural models. Morphological annotations can be treated as extra factors in such models. jean-EtAl:2015:ACL-IJCNLP proposed a model to handle very large vocabularies. luong-EtAl:2015:ACL-IJCNLP addressed the problem of rare words and OOVs with the help of a post-translation phase to exchange unknown tokens with their potential translations. sennrich2015neural used subword units for NMT. The model relies on frequent subword units instead of words. costajussa-fonollosa:2016:P16-2 designed a model for translating from MRLs. The model encodes source words with a convolutional module proposed by kim2015character. Each word is represented by a convolutional combination of its characters.", "luong-manning:2016:P16-1 used a hybrid model for representing words. 
In their model, unseen and complex words are encoded with a character-based representation, with other words encoded via the usual surface-form embeddings. DBLP:journals/corr/VylomovaCHH16 compared different representation models (word-, morpheme, and character-level models) which try to capture complexities on the source side, for the task of translating from MRLs.", "chung-cho-bengio proposed an architecture which benefits from different segmentation schemes. On the encoder side, words are segmented into subunits with the byte-pair segmentation model (bpe) BIBREF1 , and on the decoder side, one target character is produced at each time step. Accordingly, the target sequence is treated as a long chain of characters without explicit segmentation. W17-4727 focused on translating from English into Finnish and implicitly incorporated morphological information into NMT through multi-task learning. passbanPhD comprehensively studied the problem of translating MRLs and addressed potential challenges in the field.", "Among all the models reviewed in this section, the network proposed by chung-cho-bengio could be seen as the best alternative for translating into MRLs as it works at the character level on the decoder side and it was evaluated in different settings on different languages. Consequently, we consider it as a baseline model in our experiments." ], [ "We propose a compatible neural architecture for translating into MRLs. The model benefits from subword- and character-level information and improves upon the state-of-the-art model of chung-cho-bengio. We manipulated the model to incorporate morphological information and developed three new extensions, which are discussed in Sections \"The Embedded Morphology Table\" , \"The Auxiliary Output Channel\" , and \"Combining the Extended Output Layer and the Embedded Morphology Table\" ." ], [ "In the first extension an additional table containing the morphological information of the target language is plugged into the decoder to assist with word formation. Each time the decoder samples from the target vocabulary, it searches the morphology table to find the most relevant affixes given its current state. Items selected from the table act as guiding signals to help the decoder sample a better character.", "Our base model is an encoder-decoder model with attention BIBREF2 , implemented using gated recurrent units (GRUs) BIBREF3 . We use a four-layer model in our experiments. Similar to chung-cho-bengio and DBLP:journals/corr/WuSCLNMKCGMKSJL16, we use bidirectional units to encode the source sequence. Bidirectional GRUs are placed only at the input layer. The forward GRU reads the input sequence in its original order and the backward GRU reads the input in the reverse order. Each hidden state of the encoder in one time step is a concatenation of the forward and backward states at the same time step. This type of bidirectional processing provides a richer representation of the input sequence.", "On the decoder side, one target character is sampled from a target vocabulary at each time step. In the original encoder-decoder model, the probability of predicting the next token $y_i$ is estimated based on $i$ ) the current hidden state of the decoder, $ii$ ) the last predicted token, and $iii$ ) the context vector. 
This process can be formulated as $p(y_i|y_1,...,y_{i-1},{\\bf x}) = g(h_i,y_{i-1},{\\bf c}_i)$ , where $g(.)$ is a softmax function, $y_i$ is the target token (to be predicted), $\\textbf {x}$ is the representation of the input sequence, $h_i$ is the decoder's hidden state at the $i$ -th time step, and $i$0 indicates the context vector which is a weighted summary of the input sequence generated by the attention module. $i$1 is generated via the procedure shown in ( 3 ): ", "$$\\begin{aligned}\n{\\bf c}_i &= \\sum _{j=1}^{n} \\alpha _{ij} s_j\\\\\n\\alpha _{ij} &=\\frac{\\exp {(e_{ij})}}{\\sum {_{k=1}^{n}\\exp {(e_{ik})}}}; \\hspace{5.69054pt}e_{ij}=a(s_j, h_{i-1})\n\\end{aligned}$$ (Eq. 3) ", "where $\\alpha _{ij}$ denotes the weight of the $j$ -th hidden state of the encoder ( $s_j$ ) when the decoder predicts the $i$ -th target token, and $a()$ shows a combinatorial function which can be modeled through a simple feed-forward connection. $n$ is the length of the input sequence.", "In our first extension, the prediction probability is conditioned on one more constraint in addition to those three existing ones, as in $p(y_i|y_1,...,y_{i-1},{\\bf x}) = g(h_i,y_{i-1},{\\bf c}_i, {\\bf c}^m_i)$ , where ${\\bf c}^m_i$ is the morphological context vector and carries information from those useful affixes which can enrich the decoder's information. ${\\bf c}^m_i$ is generated via an attention module over the morphology table which works in a similar manner to word-based attention model. The attention procedure for generating ${\\bf c}^m_i$ is formulated as in ( 5 ): ", "$$\\begin{aligned}\n{\\bf c}^m_i &= \\sum _{u=1}^{|\\mathcal {A}|} \\beta _{iu} f_u\\\\\n\\beta _{iu} &= \\frac{\\exp {(e^m_{iu})}}{\\sum {_{v=1}^{|\\mathcal {A}|} \\exp {(e_{iv})}}}; \\hspace{5.69054pt}e^m_{iu}= a^m(f_u, h_{i-1})\n\\end{aligned}$$ (Eq. 5) ", "where $f_u$ represents the embedding of the $u$ -th affix ( $u$ -th column) in the morphology/affix table $\\mathcal {A}$ , $\\beta _{iu}$ is the weight assigned to $f_u$ when predicting the $i$ -th target token, and $a^m$ is a feed-forward connection between the morphology table and the decoder.", "The attention module in general can be considered as a search mechanism, e.g. in the original encoder-decoder architecture the basic attention module finds the most relevant input words to make the prediction. In multi-modal NMT BIBREF4 , BIBREF5 an extra attention module is added to the basic one in order to search the image input to find the most relevant image segments. In our case we have a similar additional attention module which searches the morphology table.", "In this scenario, the morphology table including the target language's affixes can be considered as an external knowledge repository that sends auxiliary signals which accompany the main input sequence at all time steps. Such a table certainly includes useful information for the decoder. As we are not sure which affix preserves those pieces of useful information, we use an attention module to search for the best match. The attention module over the table works as a filter which excludes irrelevant affixes and amplifies the impact of relevant ones by assigning different weights ( $\\beta $ values)." ], [ "In the first scenario, we embedded a morphology table into the decoder in the hope that it can enrich sampling information. Mathematically speaking, such an architecture establishes an extra constraint for sampling and can control the decoder's predictions. 
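A minimal sketch of the additional attention module over the affix table (Eq. 5) is given below. It is not the authors' implementation: module and tensor names are illustrative, and the scoring function $a^m$ is realized as a single feed-forward layer, as suggested in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MorphologyTableAttention(nn.Module):
    """Attention over a table of affix embeddings (a sketch of Eq. 5)."""

    def __init__(self, num_affixes, affix_dim, hidden_dim):
        super().__init__()
        # the affix table A: one learnable embedding per affix, updated during training
        self.affix_table = nn.Parameter(torch.randn(num_affixes, affix_dim))
        # a^m: a simple feed-forward scoring function between an affix and the decoder state
        self.score = nn.Linear(affix_dim + hidden_dim, 1)

    def forward(self, h_prev):
        # h_prev: previous decoder hidden state h_{i-1}, shape (batch, hidden_dim)
        batch = h_prev.size(0)
        num_affixes = self.affix_table.size(0)
        table = self.affix_table.unsqueeze(0).expand(batch, -1, -1)      # (batch, |A|, affix_dim)
        h_exp = h_prev.unsqueeze(1).expand(-1, num_affixes, -1)          # (batch, |A|, hidden_dim)
        e_m = self.score(torch.cat([table, h_exp], dim=-1)).squeeze(-1)  # scores e^m_{iu}
        beta = F.softmax(e_m, dim=-1)                                    # attention weights beta_{iu}
        c_m = torch.bmm(beta.unsqueeze(1), table).squeeze(1)             # c^m_i = sum_u beta_{iu} f_u
        return c_m, beta
```

The resulting morphological context vector would then be concatenated with the decoder state, the last predicted character, and the source context vector before the output softmax, matching the extended prediction probability $g(h_i, y_{i-1}, {\bf c}_i, {\bf c}^m_i)$ described above.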
However, this is not the only way of constraining the decoder. In the second scenario, we provide extra supervision to the network via another predictor (output channel). The first channel is responsible for generating translations and predicts one character at each time step, and the other one tries to capture the morphological status of the decoder by predicting the morphological annotation ( $l_i$ ) of the target character.", "The approach in the second scenario proposes a multi-task learning architecture, by which in one task we learn translations and in the other one morphological annotations. Therefore, all network modules –especially the last hidden layer just before the predictors– should provide information which is useful enough to make correct predictions in both channels, i.e. the decoder should preserve translation as well as morphological knowledge. Since we are translating into MRLs this type of mixed information (morphology+translation) can be quite useful.", "In our setting, the morphological annotation $l_i$ predicted via the second channel shows to which part of the word or morpheme the target character belongs, i.e. the label for the character is the morpheme that includes it. We clarify the prediction procedure via an example from our training set (see Section \"Experimental Study\" ). When the Turkish word `terbiyesizlik' is generated, the first channel is supposed to predict t, e, r, up to k, one after another. For the same word, the second channel is supposed to predict stem-C for the first 7 steps, as the first 7 characters `terbiye' belong to the stem of the word. The C sign indicates that stem-C is a class label. The second channel should also predict siz-C when the first channel predicts s (eighth character), i (ninth character), and z (tenth character), and lik-C when the first channel samples the last three characters. Clearly, the second channel is a classifier which works over the {stem-C, siz-C, lik-C, ...} classes. Figure 1 illustrates a segment of a sentence including this Turkish word and explains which class tags should be predicted by each channel.", "To implement the second scenario we require a single-source double-target training corpus: [source sentence] $\rightarrow $ [sequence of target characters $\&$ sequence of morphological annotations] (see Section \"Experimental Study\" ). The objective function should also be adapted accordingly. Given a training set $\lbrace {\bf x}_t, {\bf y}_t, {\bf m}_t\rbrace _{t=1}^{T}$ the goal is to maximize the joint objective function shown in ( 7 ): ", "$$\lambda \sum _{t=1}^{T}\log {P({\bf y}_t|{\bf x}_t;\theta )} + (1-\lambda ) \sum _{t=1}^{T}\log {P({\bf m}_t|{\bf x}_t;\theta )}$$ (Eq. 7) ", "where $\textbf {x}_t$ is the $t$ -th input sentence whose translation is a sequence of target characters shown by $\textbf {y}_t$ . $\textbf {m}_t$ is the sequence of morphological annotations and $T$ is the size of the training set. $\theta $ is the set of network parameters and $\lambda $ is a scalar to balance the contribution of each cost function. $\lambda $ is adjusted on the development set during training." ], [ "In the first scenario, we aim to provide the decoder with useful information about morphological properties of the target language, but we are not sure whether signals sent from the table are what we really need. They might be helpful or even harmful, so there should be a mechanism to control their quality. 
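The joint objective in Eq. 7 above can be sketched as a weighted sum of two cross-entropy terms (expressed as a loss to minimize); the snippet below is an illustration, with placeholder variable names and an arbitrary $\lambda $ value, whereas the paper tunes $\lambda $ on the development set.

```python
import torch.nn.functional as F

def joint_loss(char_logits, char_targets, morph_logits, morph_targets, lam=0.5):
    """lambda * logP(y|x) + (1 - lambda) * logP(m|x), negated to obtain a loss.

    char_logits:  (batch * time, num_characters)    -- first output channel
    morph_logits: (batch * time, num_morph_classes)  -- second channel (stem-C, siz-C, lik-C, ...)
    """
    translation_loss = F.cross_entropy(char_logits, char_targets)
    morphology_loss = F.cross_entropy(morph_logits, morph_targets)
    # lam balances the two terms; in the paper it is adjusted on the development set
    return lam * translation_loss + (1.0 - lam) * morphology_loss
```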
In the second scenario we also have a similar problem as the last layer requires some information to predict the correct morphological class through the second channel, but there is no guarantee to ensure that information in the decoder is sufficient for this sort of prediction. In order to address these problems, in the third extension we combine both scenarios as they are complementary and can potentially help each other.", "The morphology table acts as an additional useful source of knowledge as it already consists of affixes, but its content should be adapted according to the decoder and its actual needs. Accordingly, we need a trainer to update the table properly. The extra prediction channel plays this role for us as it forces the network to predict the target language's affixes at the output layer. The error computed in the second channel is back-propagated to the network including the morphology table and updates its affix information into what the decoder actually needs for its prediction. Therefore, the second output channel helps us train better affix embeddings.", "The morphology table also helps the second predictor. Without considering the table, the last layer only includes information about the input sequence and previously predicted outputs, which is not directly related to morphological information. The second attention module retrieves useful affixes from the morphology table and concatenates to the last layer, which means the decoder is explicitly fed with morphological information. Therefore, these two modules mutually help each other. The external channel helps update the morphology table with high-quality affixes (backward pass) and the table sends its high-quality signals to the prediction layer (forward pass). The relation between these modules and the NMT architecture is illustrated in Figure 2 ." ], [ "As previously reviewed, different models try to capture complexities on the encoder side, but to the best of our knowledge the only model which proposes a technique to deal with complex constituents on the decoder side is that of chung-cho-bengio, which should be an appropriate baseline for our comparisons. Moreover, it outperforms other existing NMT models, so we prefer to compare our network to the best existing model. This model is referred to as CDNMT in our experiments. In the next sections first we explain our experimental setting, corpora, and how we build the morphology table (Section \"Experimental Setting\" ), and then report experimental results (Section \"Experimental Results\" )." ], [ "In order to make our work comparable we try to follow the same experimental setting used in CDNMT, where the GRU size is 1024, the affix and word embedding size is 512, and the beam width is 20. Our models are trained using stochastic gradient descent with Adam BIBREF6 . chung-cho-bengio and sennrich2015neural demonstrated that bpe boosts NMT, so similar to CDNMT we also preprocess the source side of our corpora using bpe. We use WMT-15 corpora to train the models, newstest-2013 for tuning and newstest-2015 as the test sets. For English–Turkish (En–Tr) we use the OpenSubtitle2016 collection BIBREF7 . The training side of the English–German (En–De), English–Russian (En–Ru), and En–Tr corpora include $4.5$ , $2.1$ , and 4 million parallel sentences, respectively. We randomly select 3K sentences for each of the development and test sets for En–Tr. 
For all language pairs we keep the 400 most frequent characters as the target-side character set and replace the remainder (infrequent characters) with a specific character.", "One of the key modules in our architecture is the morphology table. In order to implement it we use a look-up table whose columns include embeddings for the target language's affixes (each column represents one affix) which are updated during training. As previously mentioned, the table is intended to provide useful, morphological information so it should be initialized properly, for which we use a morphology-aware embedding-learning model. To this end, we use the neural language model of botha2014compositional in which each word is represented via a linear combination of the embeddings of its surface form and subunits, e.g. $\\overrightarrow{terbiyesizlik} = \\overrightarrow{terbiyesizlik} + \\overrightarrow{terbiye} + \\overrightarrow{siz} + \\overrightarrow{lik}$ . Given a sequence of words, the neural language model tries to predict the next word, so it learns sentence-level dependencies as well as intra-word relations. The model trains surface form and subword-level embeddings which provides us with high-quality affix embeddings.", "Our neural language model is a recurrent network with a single 1000-dimensional GRU layer, which is trained on the target sides of our parallel corpora. The embedding size is 512 and we use a batch size of 100 to train the model. Before training the neural language model, we need to manipulate the training corpus to decompose words into morphemes for which we use Morfessor BIBREF8 , an unsupervised morphological analyzer. Using Morfessor each word is segmented into different subunits where we consider the longest part as the stem of each word; what appears before the stem is taken as a member of the set of prefixes (there might be one or more prefixes) and what follows the stem is considered as a member of the set of suffixes.", "Since Morfessor is an unsupervised analyzer, in order to minimize segmentation errors and avoid noisy results we filter its output and exclude subunits which occur fewer than 500 times. After decomposing, filtering, and separating stems from affixes, we extracted several affixes which are reported in Table 2 . We emphasize that there might be wrong segmentations in Morfessor's output, e.g. Turkish is a suffix-based language, so there are no prefixes in this language, but based on what Morfessor generated we extracted 11 different types of prefixes. We do not post-process Morfessor's outputs.", "Using the neural language model we train word, stem, and affix embeddings, and initialize the look-up table (but not other parts) of the decoder using those affixes. The look-up table includes high-quality affixes trained on the target side of the parallel corpus by which we train the translation model. Clearly, such an affix table is an additional knowledge source for the decoder. It preserves information which is very close to what the decoder actually needs. However, there might be some missing pieces of information or some incompatibility between the decoder and the table, so we do not freeze the morphology table during training, but let the decoder update it with respect to its needs in the forward and backward passes." ], [ "Table 3 summarizes our experimental results. We report results for the bpe $\\rightarrow $ char setting, which means the source token is a bpe unit and the decoder samples a character at each time step. CDNMT is the baseline model. 
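As a side note to the affix-extraction procedure described in the experimental setting above (longest Morfessor segment taken as the stem, preceding segments as prefixes, following segments as suffixes, low-frequency subunits filtered out), a rough sketch is shown below; the Morfessor call itself is omitted, the function name is illustrative, and counts are per word type for simplicity, whereas the original filtering is presumably over corpus occurrences.

```python
from collections import Counter

def split_stem_and_affixes(segmentations, min_affix_freq=500):
    """Given Morfessor-style segmentations (word -> list of subunits), take the longest
    subunit of each word as its stem; what precedes it goes to the prefix set and what
    follows it to the suffix set. Affixes seen fewer than `min_affix_freq` times are dropped."""
    prefix_counts, suffix_counts = Counter(), Counter()
    for word, parts in segmentations.items():
        stem_pos = max(range(len(parts)), key=lambda i: len(parts[i]))  # longest part = stem
        prefix_counts.update(parts[:stem_pos])
        suffix_counts.update(parts[stem_pos + 1:])
    prefixes = {p for p, c in prefix_counts.items() if c >= min_affix_freq}
    suffixes = {s for s, c in suffix_counts.items() if c >= min_affix_freq}
    return prefixes, suffixes

# toy example using the Turkish word discussed in the text
example = {"terbiyesizlik": ["terbiye", "siz", "lik"]}
print(split_stem_and_affixes(example, min_affix_freq=1))  # -> (set(), {'siz', 'lik'})
```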
Table 3 includes scores reported from the original CDNMT model BIBREF0 as well as the scores from our reimplementation. To make our work comparable and show the impact of the new architecture, we tried to replicate CDNMT's results in our experimental setting: we kept everything (parameters, iterations, epochs, etc.) unchanged and evaluated the extended model in the same setting. Table 3 reports BLEU scores BIBREF9 of our NMT models.", "Table 3 can be interpreted from different perspectives, but the main findings are summarized as follows:", "The morphology table yields significant improvements for all languages and settings.", "The morphology table boosts the En–Tr engine more than the others, and we think this is because of the nature of the language. Turkish is an agglutinative language in which morphemes are clearly separable from each other, but in German and Russian morphological transformations rely more on fusional operations than on agglutination.", "It seems that there is a direct relation between the size of the morphology table and the gain provided for the decoder, because Russian and Turkish have bigger tables and benefit from the table more than German, which has fewer affixes.", "The auxiliary output channel is even more useful than the morphology table for all settings but En–Ru, and we think this is because of the morpheme-per-word ratio in Russian. The number of morphemes attached to a Russian word is usually higher than for German and Turkish words in our corpora, which makes the prediction harder for the classifier (the more suffixes attached to a word, the harder the classification task).", "The combination of the morphology table and the extra output channel provides the best result for all languages.", "Figure 3 depicts the impact of the morphology table and the extra output channel for each language.", "To further study our models' behaviour and ensure that our extensions do not generate random improvements, we visualized some attention weights when generating `terbiyesizlik'. In Figure 4 , the upper figure shows attention weights for all Turkish affixes, where the y axis shows different time steps and the x axis includes attention weights of all affixes (304 columns) for those time steps, e.g. the entry in the first row and first column represents the attention weight assigned to the first Turkish affix when sampling t in `terbiyesizlik'. While at first glance the figure may appear somewhat confusing, it provides some interesting insights which we elaborate on next.", "In addition to the whole attention matrix we also visualized a subset of weights to show how the morphology table provides useful information. In the second figure we study the behaviour of the morphology table for the first (t $_1$ ), fifth (i $_5$ ), ninth (i $_{9}$ ), and twelfth (i $_{12}$ ) time steps when generating the same Turkish word `t $_1$ erbi $_5$ yesi $_9$ zli $_{12}$ k'. t $_1$ is the first character of the word. We also have three i characters from different morphemes, where the first one is part of the stem, the second one belongs to the suffix `siz', and the third one to `lik'. It is interesting to see how the table reacts to the same character from different parts. For each time step we selected the top-10 affixes which have the highest attention weights. The set of top-10 affixes can be different for each step, so we made a union of those sets which gives us 22 affixes. 
The bottom part of Figure 4 shows the attention weights for those 22 affixes at each time step.", "After analyzing the weights we observed interesting properties about the morphology table and the auxiliary attention module. The main findings about the behaviour of the table are as follows:", "The model assigns high attention weights to stem-C for almost all time steps. However, the weights assigned to this class for t $_1$ and i $_5$ are much higher than those of affix characters (as they are part of the stem). The vertical lines in both figures confirm this feature (bad behaviour).", "For some unknown reason, there are some affixes which have no direct relation to a particular time step but still receive high attention, such as maz in t $_{12}$ (bad behaviour).", "For almost all time steps the highest attention weight belongs to the class which is expected to be selected, e.g. weights for (i $_5$ ,stem-C) or (i $_{9}$ ,siz-C) (good behaviour).", "The morphology table may send bad or good signals, but it is consistent for similar or co-occurring characters, e.g. for the last three time steps l $_{11}$ , i $_{12}$ , and k $_{13}$ , almost the same set of affixes receives the highest attention weights. This consistency is exactly what we are looking for, as it can define a reliable external constraint for the decoder to guide it. Vertical lines on the figure also confirm this fact. They show that for a set of consecutive characters which belong to the same morpheme the attention module sends a signal from a particular affix (good behaviour).", "There are some affixes which might not be directly related to that time step but receive high attention weights. This is because those affixes either include the same character which the decoder tries to predict (e.g. i-C for i $_{4}$ or t-C and tin-C for t $_{1}$ ), or frequently appear with that part of the word which includes the target character (e.g. mi-C has a high weight when predicting t $_1$ because t $_1$ belongs to terbiye, which frequently collocates with mi-C: terbiye+mi) (good behaviour).", "Finally, in order to complete our evaluation study we feed the English-to-German NMT model with the sentence `Terms and conditions for sending contributions to the BBC', to show how the model behaves differently and generates a better target sentence. Translations generated by our models are illustrated in Table 4 .", "The table demonstrates that our architecture is able to control the decoder and limit its selections, e.g. the word `allgemeinen' generated by the baseline model is redundant. There is no constraint to inform the baseline model that this word should not be generated, whereas our proposed architecture controls the decoder in such situations. After analyzing our model, we realized that there are strong attention weights assigned to the w-space (indicating white space characters) and BOS (beginning of the sequence) columns of the affix table while sampling the first character of the word `Geschäft', which shows that the decoder is informed about the start point of the sequence. Similar to the baseline model's decoder, our decoder can sample any character including `a' of `allgemeinen' or `G' of `Geschäft'. Translation information stored in the baseline decoder is not sufficient for selecting the right character `G', so the decoder wrongly starts with `i' and continues along a wrong path until it generates the whole word. 
However, our decoder's information is accompanied with signals from the affix table which force it to start with a better initial character, whose sampling leads to generating the correct target word.", "Another interesting feature about the table is the new structure `Geschäft s bedingungen' generated by the improved model. As the reference translation shows, in the correct form these two structures should be glued together via `s', which can be considered as an infix. As our model is supposed to detect this sort of intra-word relation, it treats the whole structure as two compounds which are connected to one another via an infix. Although this is not a correct translation and it would be trivial to post-edit into the correct output form, it is interesting to see how our mechanism forces the decoder to pay attention to intra-word relations.", "Apart from these two interesting findings, the number of wrong character selections in the baseline model is considerably reduced in the improved model because of our enhanced architecture." ], [ "In this paper we proposed a new architecture to incorporate morphological information into the NMT pipeline. We extended the state-of-the-art NMT model BIBREF0 with a morphology table. The table could be considered as an external knowledge source which is helpful as it increases the capacity of the model by increasing the number of network parameters. We tried to benefit from this advantage. Moreover, we managed to fill the table with morphological information to further boost the NMT model when translating into MRLs. Apart from the table we also designed an additional output channel which forces the decoder to predict morphological annotations. The error signals coming from the second channel during training inform the decoder with morphological properties of the target language. Experimental results show that our techniques were useful for NMT of MRLs.", "As our future work we follow three main ideas. $i$ ) We try to find more efficient ways to supply morphological information for both the encoder and decoder. $ii$ ) We plan to benefit from other types of information such as syntactic and semantic annotations to boost the decoder, as the table is not limited to morphological information alone and can preserve other sorts of information. $iii$ ) Finally, we target sequence generation for fusional languages. Although our model showed significant improvements for both German and Russian, the proposed model is more suitable for generating sequences in agglutinative languages." ], [ "We thank our anonymous reviewers for their valuable feedback, as well as the Irish centre for high-end computing (www.ichec.ie) for providing computational infrastructures. This work has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. " ] ] }
{ "question": [ "How are the auxiliary signals from the morphology table incorporated in the decoder?", "What type of morphological information is contained in the \"morphology table\"?" ], "question_id": [ "7aab78e90ba1336950a2b0534cc0cb214b96b4fd", "b7fe91e71da8f4dc11e799b3bd408d253230e8c6" ], "nlp_background": [ "infinity", "infinity" ], "topic_background": [ "familiar", "familiar" ], "paper_read": [ "no", "no" ], "search_query": [ "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "an additional morphology table including target-side affixes.", "We inject the decoder with morphological properties of the target language." ], "yes_no": null, "free_form_answer": "", "evidence": [ "In order to address the aforementioned problems we redesign the neural decoder in three different scenarios. In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models. Section \"Proposed Architecture\" provides more details on our models." ], "highlighted_evidence": [ "In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models. " ] } ], "annotation_id": [ "0c30047a09c8ae76d8d19cfbdd5e99373cca653b" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "target-side affixes" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In order to address the aforementioned problems we redesign the neural decoder in three different scenarios. In the first scenario we equip the decoder with an additional morphology table including target-side affixes. We place an attention module on top of the table which is controlled by the decoder. At each step, as the decoder samples a character, it searches the table to find the most relevant information which can enrich its state. Signals sent from the table can be interpreted as additional constraints. In the second scenario we share the decoder between two output channels. 
The first one samples the target character and the other one predicts the morphological annotation of the character. This multi-tasking approach forces the decoder to send morphology-aware information to the final layer which results in better predictions. In the third scenario we combine these two models. Section \"Proposed Architecture\" provides more details on our models." ], "highlighted_evidence": [ "In the first scenario we equip the decoder with an additional morphology table including target-side affixes." ] } ], "annotation_id": [ "ff4d2624ba02347ad9c6d7f4d6a0b1eb73435788" ], "worker_id": [ "1ba1b5b562aef9cd264cace5b7bdd46a7c065c0a" ] } ] }
{ "caption": [ "Table 1: Illustrating subword units in MCWs. The boldfaced part indicates the stem.", "Figure 1: The target label that each output channel is supposed to predict when generating the Turkish sequence ‘bu1 terbiyesizlik2 için3’ meaning ‘because3 of3 this1 rudeness2’.", "Figure 2: The architecture of the NMT model with an auxiliary prediction channel and an extra morphology table. This network includes only one decoder layer and one encoder layer. ⊕ shows the attention modules.", "Table 2: The number of affixes extracted for each language.", "Table 3: CDNMT∗ is our implementation of CDNMT. m and o indicates that the base model is extended with the morphology table and the additional output channel, respectively. mo is the combination of both the extensions. The improvement provided by the boldfaced number compared to CDNMT∗ is statistically significant according to paired bootstrap re-sampling (Koehn, 2004) with p = 0.05.", "Figure 3: The y axis shows the difference between the BLEU score of CDNMT∗ and the extended model. The first, second, and third bars show the m, o, and mo extensions, respectively.", "Figure 4: Visualizing the attention weights between the morphology table and the decoder when generating ‘terbiyesizlik." ], "file": [ "1-Table1-1.png", "4-Figure1-1.png", "5-Figure2-1.png", "6-Table2-1.png", "7-Table3-1.png", "7-Figure3-1.png", "8-Figure4-1.png" ] }
1904.07342
Learning Twitter User Sentiments on Climate Change with Limited Labeled Data
While it is well-documented that climate change accepters and deniers have become increasingly polarized in the United States over time, there has been no large-scale examination of whether these individuals are prone to changing their opinions as a result of natural external occurrences. On the sub-population of Twitter users, we examine whether climate change sentiment changes in response to five separate natural disasters occurring in the U.S. in 2018. We begin by showing that relevant tweets can be classified with over 75% accuracy as either accepting or denying climate change when using our methodology to compensate for limited labeled data; results are robust across several machine learning models and yield geographic-level results in line with prior research. We then apply RNNs to conduct a cohort-level analysis showing that the 2018 hurricanes yielded a statistically significant increase in average tweet sentiment affirming climate change. However, this effect does not hold for the 2018 blizzard and wildfires studied, implying that Twitter users' opinions on climate change are fairly ingrained on this subset of natural disasters.
{ "section_name": [ "Background", "Data", "Labeling Methodology", "Outcome Analysis", "Results & Discussion" ], "paragraphs": [ [ "Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regards to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows.", "First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section SECREF2 ). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section SECREF4 ). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster." ], [ "We henceforth refer to a tweet affirming climate change as a “positive\" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative\" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint\" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change\" or “global warming\", and further included disaster-specific search terms (e.g., “bomb cyclone,\" “blizzard,\" “snowstorm,\" etc.). We refer to the first data batch as “influential\" tweets, and the second data batch as “event-related\" tweets.", "The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential\" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets.", "The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). 
For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets.", "To create a model for predicting sentiments of event-related tweets, we divide the first data batch of influential tweets into training and validation datasets with a 90%/10% split. The training set contains 49.2% positive samples, and the validation set contains 49.0% positive samples. We form our test set by manually labeling a subset of 500 tweets from the event-related tweets (randomly chosen across all five natural disasters), of which 50.0% are positive samples." ], [ "Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and td-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 .", "The RNN pre-trained using GloVe word embeddings BIBREF6 achieved the highest test accuracy. We pass tokenized features into the embedding layer, followed by an LSTM BIBREF7 with dropout and ReLU activation, and a dense layer with sigmoid activation. We apply an Adam optimizer on the binary crossentropy loss. Implementing this simple, one-layer LSTM allows us to surpass the other traditional machine learning classification methods. Note the 13-point spread between validation and test accuracies achieved. Ideally, the training, validation, and test datasets have the same underlying distribution of tweet sentiments; the assumption made with our labeling technique is that the influential accounts chosen are representative of all Twitter accounts. Critically, when choosing the influential Twitter users who believe in climate change, we highlighted primarily politicians or news sources (i.e., verifiably affirming or denying climate change); these tweets rarely contain spelling errors or sarcasm. Due to this skew, the model yields a high rate of false negatives. It is likely that we could lessen the gap between validation and test accuracies by finding more “real\" Twitter users who are climate change believers, e.g. by using the methodology found in BIBREF4 ." ], [ "Our second goal is to compare the mean values of users' binary sentiments both pre- and post- each natural disaster event.
Applying our highest-performing RNN to event-related tweets yields the following breakdown of positive tweets: Bomb Cyclone (34.7%), Mendocino Wildfire (80.4%), Hurricane Florence (57.2%), Hurricane Michael (57.6%), and Camp Fire (70.1%). As sanity checks, we examine the predicted sentiments on a subset with geographic user information and compare results to the prior literature.", "In Figure FIGREF3 , we map 4-clustering results on three dimensions: predicted sentiments, latitude, and longitude. The clusters correspond to four major regions of the U.S.: the Northeast (green), Southeast (yellow), Midwest (blue), and West Coast (purple); centroids are designated by crosses. Average sentiments within each cluster confirm prior knowledge BIBREF1 : the Southeast and Midwest have lower average sentiments ( INLINEFORM0 and INLINEFORM1 , respectively) than the West Coast and Northeast (0.22 and 0.09, respectively). In Figure FIGREF5 , we plot predicted sentiment averaged by U.S. city of event-related tweeters. The majority of positive tweets emanate from traditionally liberal hubs (e.g. San Francisco, Los Angeles, Austin), while most negative tweets come from the Philadelphia metropolitan area. These regions aside, rural areas tended to see more negative sentiment tweeters post-event, whereas urban regions saw more positive sentiment tweeters; however, overall average climate change sentiment pre- and post-event was relatively stable geographically. This map further confirms findings that coastal cities tend to be more aware of climate change BIBREF8 .", "From these mapping exercises, we claim that our “influential tweet\" labeling is reasonable. We now discuss our final method on outcomes: comparing average Twitter sentiment pre-event to post-event. In Figure FIGREF8 , we display these metrics in two ways: first, as an overall average of tweet binary sentiment, and second, as a within-cohort average of tweet sentiment for the subset of tweets by users who tweeted both before and after the event (hence minimizing awareness bias). We use Student's t-test to calculate the significance of mean sentiment differences pre- and post-event (see Section SECREF4 ). Note that we perform these mean comparisons on all event-related data, since the low number of geo-tagged samples would produce an underpowered study." ], [ "In Figure FIGREF8 , we see that overall sentiment averages rarely show movement post-event: that is, only Hurricane Florence shows a significant difference in average tweet sentiment pre- and post-event at the 1% level, corresponding to a 0.12 point decrease in positive climate change sentiment. However, controlling for the same group of users tells a different story: both Hurricane Florence and Hurricane Michael have significant tweet sentiment average differences pre- and post-event at the 1% level. Within-cohort, Hurricane Florence sees an increase in positive climate change sentiment by 0.21 points, which is contrary to the overall average change (the latter being likely biased since an influx of climate change deniers are likely to tweet about hurricanes only after the event). Hurricane Michael sees an increase in average tweet sentiment of 0.11 points, which reverses the direction of tweets from mostly negative pre-event to mostly positive post-event. Likely due to similar bias reasons, the Mendocino wildfires in California see a 0.06 point decrease in overall sentiment post-event, but a 0.09 point increase in within-cohort sentiment. 
Methodologically, we assert that overall averages are not robust results to use in sentiment analyses.", "We now comment on the two events yielding similar results between overall and within-cohort comparisons. Most tweets regarding the Bomb Cyclone have negative sentiment, though sentiment increases by 0.02 and 0.04 points post-event for overall and within-cohort averages, respectively. Meanwhile, the California Camp Fires yield a 0.11 and 0.27 point sentiment decline in overall and within-cohort averages, respectively. This large difference in sentiment change can be attributed to two factors: first, the number of tweets made regarding wildfires prior to the (usually unexpected) event is quite low, so within-cohort users tend to have more polarized climate change beliefs. Second, the root cause of the Camp Fires was quickly linked to PG&E, bolstering claims that climate change had nothing to do with the rapid spread of fire; hence within-cohort users were less vocally positive regarding climate change post-event.", "There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting\" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters." ] ] }
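To make the classifier described in the Labeling Methodology section above more concrete, the following is a minimal sketch of a GloVe-initialized embedding layer feeding a single LSTM layer with dropout and a sigmoid output, trained with Adam on binary cross-entropy. It assumes TensorFlow/Keras; the vocabulary size, sequence length, unit count, LSTM activation, and the randomly filled embedding matrix are illustrative placeholders, not the authors' actual settings.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative shapes; in practice the matrix would hold pretrained GloVe vectors.
vocab_size, emb_dim, max_len = 20000, 100, 50
glove_matrix = np.random.normal(size=(vocab_size, emb_dim)).astype("float32")

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, emb_dim,
                     embeddings_initializer=tf.keras.initializers.Constant(glove_matrix)),
    layers.LSTM(64, dropout=0.5),           # single LSTM layer with dropout (unit count assumed)
    layers.Dense(1, activation="sigmoid"),  # accept (1) vs. deny (0) climate change
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x_train: integer token ids with shape (n, max_len); y_train: 0/1 labels mapped from the -1/1 scheme.
# model.fit(x_train, y_train, validation_split=0.1, epochs=3, batch_size=64)
```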
{ "question": [ "Do they report results only on English data?", "Do the authors mention any confounds to their study?", "Which machine learning models are used?", "What methodology is used to compensate for limited labelled data?", "Which five natural disasters were examined?" ], "question_id": [ "16fa6896cf4597154363a6c9a98deb49fffef15f", "0f60864503ecfd5b048258e21d548ab5e5e81772", "fe578842021ccfc295209a28cf2275ca18f8d155", "00ef9cc1d1d60f875969094bb246be529373cb1d", "279b633b90fa2fd69e84726090fadb42ebdf4c02" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "no" ], "search_query": [ "twitter", "twitter", "twitter", "twitter", "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We henceforth refer to a tweet affirming climate change as a “positive\" sample (labeled as 1 in the data), and a tweet denying climate change as a “negative\" sample (labeled as -1 in the data). All data were downloaded from Twitter in two separate batches using the “twint\" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change\" or “global warming\", and further included disaster-specific search terms (e.g., “bomb cyclone,\" “blizzard,\" “snowstorm,\" etc.). We refer to the first data batch as “influential\" tweets, and the second data batch as “event-related\" tweets." ], "highlighted_evidence": [ "All data were downloaded from Twitter in two separate batches using the “twint\" scraping tool BIBREF5 to sample historical tweets for several different search terms; queries always included either “climate change\" or “global warming\", and further included disaster-specific search terms (e.g., “bomb cyclone,\" “blizzard,\" “snowstorm,\" etc.). " ] } ], "annotation_id": [ "344fc2c81c2b0173e51bafa2f8a8edbca4e1be14" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting\" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters." ], "highlighted_evidence": [ "There are several caveats in our work: first, tweet sentiment is rarely binary (this work could be extended to a multinomial or continuous model). Second, our results are constrained to Twitter users, who are known to be more negative than the general U.S. population BIBREF9 . 
Third, we do not take into account the aggregate effects of continued natural disasters over time. Going forward, there is clear demand in discovering whether social networks can indicate environmental metrics in a “nowcasting\" fashion. As climate change becomes more extreme, it remains to be seen what degree of predictive power exists in our current model regarding climate change sentiments with regards to natural disasters." ] } ], "annotation_id": [ "0c3efc4450d194483719636dbab54fb1730333cb" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "RNNs", "CNNs", "Naive Bayes with Laplace Smoothing", "k-clustering", "SVM with linear kernel" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Our first goal is to train a sentiment analysis model (on training and validation datasets) in order to perform classification inference on event-based tweets. We experimented with different feature extraction methods and classification models. Feature extractions examined include Tokenizer, Unigram, Bigram, 5-char-gram, and td-idf methods. Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). Model accuracies are reported in Table FIGREF3 ." ], "highlighted_evidence": [ " Models include both neural nets (e.g. RNNs, CNNs) and standard machine learning tools (e.g. Naive Bayes with Laplace Smoothing, k-clustering, SVM with linear kernel). " ] } ], "annotation_id": [ "a146205ea460d7b1fdd248ced2a5504d3f06a708" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Influential tweeters ( who they define as individuals certain to have a classifiable sentiment regarding the topic at hand) is used to label tweets in bulk in the absence of manually-labeled tweets.", "evidence": [ "The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential\" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. Here, we enforce binary labels for all tweets composed by each of the 133 influential tweeters that we identified on Twitter (87 of whom accept climate change), yielding a total of 16,360 influential tweets." ], "highlighted_evidence": [ "The first data batch consists of tweets relevant to blizzards, hurricanes, and wildfires, under the constraint that they are tweeted by “influential\" tweeters, who we define as individuals certain to have a classifiable sentiment regarding the topic at hand. For example, we assume that any tweet composed by Al Gore regarding climate change is a positive sample, whereas any tweet from conspiracy account @ClimateHiJinx is a negative sample. The assumption we make in ensuing methods (confirmed as reasonable in Section SECREF2 ) is that influential tweeters can be used to label tweets in bulk in the absence of manually-labeled tweets. 
" ] } ], "annotation_id": [ "7444fcf3eb94af572135d50d73c7ab6e1ff84c3c" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "the East Coast Bomb Cyclone", " the Mendocino, California wildfires", "Hurricane Florence", "Hurricane Michael", "the California Camp Fires" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). For each disaster, we scraped tweets starting from two weeks prior to the beginning of the event, and continuing through two weeks after the end of the event. Summary statistics on the downloaded event-specific tweets are provided in Table TABREF1 . Note that the number of tweets occurring prior to the two 2018 sets of California fires are relatively small. This is because the magnitudes of these wildfires were relatively unpredictable, whereas blizzards and hurricanes are often forecast weeks in advance alongside public warnings. The first (influential tweet data) and second (event-related tweet data) batches are de-duplicated to be mutually exclusive. In Section SECREF2 , we perform geographic analysis on the event-related tweets from which we can scrape self-reported user city from Twitter user profile header cards; overall this includes 840 pre-event and 5,984 post-event tweets." ], "highlighted_evidence": [ "The second data batch consists of event-related tweets for five natural disasters occurring in the U.S. in 2018. These are: the East Coast Bomb Cyclone (Jan. 2 - 6); the Mendocino, California wildfires (Jul. 27 - Sept. 18); Hurricane Florence (Aug. 31 - Sept. 19); Hurricane Michael (Oct. 7 - 16); and the California Camp Fires (Nov. 8 - 25). " ] } ], "annotation_id": [ "602dcef9005c4c448d3d33589fb21b705d9eb2b2" ], "worker_id": [ "34c35a1877e453ecaebcf625df3ef788e1953cc4" ] } ] }
{ "caption": [ "Table 1: Tweets collected for each U.S. 2018 natural disaster", "Figure 1: Four-clustering on sentiment, latitude, and longitude", "Table 2: Selected binary sentiment analysis accuracies", "Figure 2: Pre-event (left) and post-event (right) average climate sentiment aggregated over five U.S. natural disasters in 2018", "Figure 3: Comparisons of overall (left) and within-cohort (right) average sentiments for tweets occurring two weeks before or after U.S. natural disasters occurring in 2018" ], "file": [ "2-Table1-1.png", "3-Figure1-1.png", "3-Table2-1.png", "4-Figure2-1.png", "5-Figure3-1.png" ] }
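As a companion to the pre-/post-event comparison reported above, here is a minimal sketch of the within-cohort analysis: keep only users with tweets in both periods, average each user's predicted sentiment per period, and test the mean difference with a Student's t-test. The DataFrame layout and the choice of a paired test for the cohort comparison are assumptions for illustration, not the authors' exact implementation.

```python
import pandas as pd
from scipy.stats import ttest_rel, ttest_ind

# Hypothetical predictions: one row per tweet with user id, period, and predicted sentiment (-1 or 1).
df = pd.DataFrame({
    "user":      ["a", "a", "b", "b", "c", "d", "d"],
    "period":    ["pre", "post", "pre", "post", "post", "pre", "post"],
    "sentiment": [1, 1, -1, 1, 1, -1, -1],
})

# Overall comparison: all pre-event tweets vs. all post-event tweets (prone to awareness bias).
overall = ttest_ind(df.loc[df.period == "pre", "sentiment"],
                    df.loc[df.period == "post", "sentiment"])

# Within-cohort comparison: only users with tweets in both periods, paired on per-user means.
per_user = df.groupby(["user", "period"])["sentiment"].mean().unstack()
cohort = per_user.dropna(subset=["pre", "post"])
within = ttest_rel(cohort["pre"], cohort["post"])

print("overall:", overall)
print("within-cohort:", within)
```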
2001.06888
A multimodal deep learning approach for named entity recognition from social media
Named Entity Recognition (NER) from social media posts is a challenging task. User generated content, which forms the nature of social media, is noisy and contains grammatical and linguistic errors. This noisy content makes tasks such as named entity recognition much harder. However, some applications, like automatic journalism or information retrieval from social media, require more information about the entities mentioned in groups of social media posts. Conventional methods applied to structured and well-typed documents provide acceptable results, but on new user generated media these methods are not satisfactory. One valuable piece of information about an entity is the image related to the text. Combining this multimodal data reduces ambiguity and provides wider information about the entities mentioned. In order to address this issue, we propose a novel approach utilizing multimodal deep learning. Our solution is able to provide more accurate results on the named entity recognition task. Experimental results, namely the precision, recall and F1 score metrics, show the superiority of our work compared to other state-of-the-art NER solutions.
{ "section_name": [ "Introduction", "Related Work", "Related Work ::: Unimodal Named Entity Recognition", "Related Work ::: Multimodal Named Entity Recognition", "The Proposed Approach", "Experimental Evaluation", "Experimental Evaluation ::: Dataset", "Experimental Evaluation ::: Experimental Setup", "Experimental Evaluation ::: Evaluation Results", "Conclusion" ], "paragraphs": [ [ "A common social media delivery system such as Twitter supports various media types like video, image and text. This medium allows users to share their short posts called Tweets. Users are able to share their tweets with other users that are usually following the source user. However, there are rules to protect the privacy of users from unauthorized access to their timeline BIBREF0. The very nature of user interactions in Twitter micro-blogging social media is oriented towards their daily life, first witness news-reporting and engaging in various events (sports, political stands etc.). According to studies, news in Twitter is propagated and reported faster than in conventional news media BIBREF1. Thus, extracting first hand news and entities occurring in this fast and versatile online media gives valuable information. However, the abridged and noisy content of Tweets makes it even more difficult and challenging for tasks such as named entity recognition and information retrieval BIBREF2.", "The task of tracking and recovering information from social media posts is a concise definition of information retrieval in social media BIBREF3, BIBREF4. However, many challenges are blocking useful solutions to this issue, namely, the noisy nature of user generated content and the perplexity of words used in short posts. Sometimes different entities are called exactly the same, for example \"Michael Jordan\" refers to a basketball player and also a computer scientist in the field of artificial intelligence. The only thing that divides both of these is the context in which the entity appeared. If the context refers to something related to AI, the reader can conclude \"Michael Jordan\" is the scientist, and if the context refers to sports and basketball then he is the basketball player. The task of distinguishing between different named entities that appear to have the same textual appearance is called named entity disambiguation. There is more useful data on the subject than plain text alone. For example, images and visual data are more descriptive than just text for tasks such as named entity recognition and disambiguation BIBREF5 while some methods only use the textual data BIBREF6.", "The provided extra information is closely related to the textual data. As a clear example, figure FIGREF1 shows a tweet containing an image. The combination of these multimodal data in order to achieve better performance in NLP related tasks is a promising alternative explored recently.", "An NLP task such as named entity recognition in social media is among the most challenging tasks because users tend to invent, mistype and epitomize words. Sometimes these words correspond to named entities which makes the recognition task even more difficult BIBREF7. In some cases, the context that carries the entity (surrounding words and related image) is more descriptive than the entity word presentation BIBREF8.", "To find a solution to the issues at hand, and keeping multimodal data in mind, recognition of named entities from social media has become a research interest that utilizes images, in contrast to the NER task on conventional text.
Researchers in this field have tried to propose multimodal architectures based on deep neural networks with multimodal input that are capable of combining text and image BIBREF9, BIBREF8, BIBREF10.", "In this paper we draw a better solution in terms of performance by proposing a new novel method called CWI (Character-Word-Image model). We used multimodal deep neural network to overcome the NER task in micro-blogging social media.", "The rest of the paper is organized as follows: section SECREF2 provides an insight view of previous methods; section SECREF3 describes the method we propose; section SECREF4 shows experimental evaluation and test results; finally, section SECREF5 concludes the whole article." ], [ "Many algorithms and methods have been proposed to detect, classify or extract information from single type of data such as audio, text, image etc. However, in the case of social media, data comes in a variety of types such as text, image, video or audio in a bounded style. Most of the time, it is very common to caption a video or image with textual information. This information about the video or image can refer to a person, location etc. From a multimodal learning perspective, jointly computing such data is considered to be more valuable in terms of representation and evaluation. Named entity recognition task, on the other hand, is the task of recognizing named entities from a sentence or group of sentences in a document format.", "Named entity is formally defined as a word or phrase that clearly identifies an item from set of other similar items BIBREF11, BIBREF12. Equation DISPLAY_FORM2 expresses a sequence of tokens.", "From this equation, the NER task is defined as recognition of tokens that correspond to interesting items. These items from natural language processing perspective are known as named entity categories; BIO2 proposes four major categories, namely, organization, person, location and miscellaneous BIBREF13. From the biomedical domain, gene, protein, drug and disease names are known as named entities BIBREF14, BIBREF15. Output of NER task is formulated in . $I_s\\in [1,N]$ and $I_e\\in [1,N]$ is the start and end indices of each named entity and $t$ is named entity type BIBREF16.", "BIO2 tagging for named entity recognition is defined in equation . Table TABREF3 shows BIO2 tags and their respective meanings; B and I indicate beginning and inside of the entity respectively, while O shows the outside of it. Even though many tagging standards have been proposed for NER task, BIO is the foremost accepted by many real world applications BIBREF17.", "A named entity recognizer gets $s$ as input and provides entity tags for each token. This sequential process requires information from the whole sentence rather than only tokens and for that reason, it is also considered to be a sequence tagging problem. Another analogous problem to this issue is part of speech tagging and some methods are capable of doing both. However, in cases where noise is present and input sequence has linguistic typos, many methods fail to overcome the problem. As an example, consider a sequence of tokens where a new token invented by social media users gets trended. This trending new word is misspelled and is used in a sequence along with other tokens in which the whole sequence does not follow known linguistic grammar. 
For this special case, classical methods and those which use engineered features do not perform well.", "Using the sequence $s$ itself or adding more information to it divides the approaches to overcome this problem into two: unimodal and multimodal.", "Although many approaches for NER have been proposed and reviewing them is not in the scope of this article, we focus on the foremost analogous classical and deep learning approaches for named entity recognition in two subsections. In subsection SECREF4 unimodal approaches for named entity recognition are presented while in subsection SECREF7 emerging multimodal solutions are described." ], [ "The recognition of named entities from only textual data (unimodal learning approach) is a well studied and explored research area. For a prominent example of this category, the Stanford NER is a widely used baseline for many applications BIBREF18. The incorporation of non-local information in information extraction is proposed by the authors using Gibbs sampling. The conditional random field (CRF) approach used in this article creates a chain of cliques, where each clique represents the probabilistic relationship between two adjacent states. Also, the Viterbi algorithm has been used to infer the most likely state in the CRF output sequence. Equation DISPLAY_FORM5 shows the proposed CRF method.", "where $\phi $ is the potential function.", "CRF finds the most probable likelihood by modeling the input sequence of tokens $s$ as a normalized product of feature functions. In a simpler explanation, CRF outputs the most probable tags that follow each other. For example, it is more likely to have an I-PER, O or any other tag that starts with B- after B-PER rather than encountering tags that start with I-.", "T-NER is another approach that is specifically aimed to answer the NER task in Twitter BIBREF19. A set of algorithms in their original work have been published to answer tasks such as POS (part of speech tagging), named entity segmentation and NER. Labeled LDA has been used by the authors in order to outperform the baseline in BIBREF20 for the NER task. Their approach strongly relies on dictionary, contextual and orthographic features.", "Deep learning techniques use distributed word or character representation rather than raw one-hot vectors. Most of this research in the NLP field uses pretrained word embeddings such as Word2Vec BIBREF21, GloVe BIBREF22 or fastText BIBREF23. These low dimensional real valued dense vectors have proved to provide better representation for words compared to one-hot vectors or other vector space models.", "The combination of word embedding along with bidirectional long-short term memory (LSTM) neural networks is examined in BIBREF24. The authors also propose to add a CRF layer at the end of their neural network architecture in order to preserve output tag relativity. Utilization of recurrent neural networks (RNN) provides better sequential modeling over data. However, only using sequential information does not result in major improvements because these networks tend to rely on the most recent tokens. Instead of using RNN, the authors used LSTM. The long and short term memory capability of these networks helps them to keep in memory what is important and forget what is not necessary to remember. Equation DISPLAY_FORM6 formulates the forget-gate of an LSTM neural network, eq. shows the input-gate, eq. notes the output-gate and eq. presents the memory-cell. Finally, eq.
shows the hidden part of an LSTM unit BIBREF25, BIBREF26.", "for all these equations, $\\sigma $ is activation function (sigmoid or tanh are commonly used for LSTM) and $\\circ $ is concatenation operation. $W$ and $U$ are weights and $b$ is the bias which should be learned over training process.", "LSTM is useful for capturing the relation of tokens in a forward sequential form, however in natural language processing tasks, it is required to know the upcoming token. To overcome this problem, the authors have used a backward and forward LSTM combining output of both.", "In a different approach, character embedding followed by a convolution layer is proposed in BIBREF27 for sequence labeling. The utilized architecture is followed by a bidirectional LSTM layer that ends in a CRF layer. Character embedding is a useful technique that the authors tried to use it in a combination with word embedding. Character embedding with the use of convolution as feature extractor from character level, captures relations between characters that form a word and reduces spelling noise. It also helps the model to have an embedding when pretrained word embedding is empty or initialized as random for new words. These words are encountered when they were not present in the training set, thus, in the test phase, model fails to provide a useful embedding." ], [ "Multimodal learning has become an emerging research interest and with the rise of deep learning techniques, it has become more visible in different research areas ranging from medical imaging to image segmentation and natural language processing BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF9, BIBREF37, BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, BIBREF44, BIBREF45. On the other hand, very little research has been focused on the extraction of named entities with joint image and textual data concerning short and noisy content BIBREF46, BIBREF47, BIBREF9, BIBREF8 while several studies have been explored in textual named entity recognition using neural models BIBREF48, BIBREF49, BIBREF24, BIBREF50, BIBREF27, BIBREF51, BIBREF10, BIBREF52.", "State-of-the-art methods have shown acceptable evaluation on structured and well formatted short texts. Techniques based on deep learning such as utilization of convolutional neural networks BIBREF52, BIBREF49, recurrent neural networks BIBREF50 and long short term memory neural networks BIBREF27, BIBREF24 are aimed to solve NER problem.", "The multimodal named entity recognizers can be categorized in two categories based on the tasks at hand, one tries to improve NER task with utilization of visual data BIBREF46, BIBREF8, BIBREF47, and the other tries to give further information about the task at hand such as disambiguation of named entities BIBREF9. We will refer to both of these tasks as MNER. To have a better understanding of MNER, equation DISPLAY_FORM9 formulates the available multimodal data while equations and are true for this task.", "$i$ refers to image and the rest goes same as equation DISPLAY_FORM2 for word token sequence.", "In BIBREF47 pioneering research was conducted using feature extraction from both image and textual data. The extracted features were fed to decision trees in order to output the named entity classes. 
Researchers have used multiple datasets ranging from buildings to human face images to train their image feature extractor (object detector and k-means clustering) and a text classifier has been trained on texts acquired from DBPedia.", "Researchers in BIBREF46 proposed a MNER model with regards to triplet embedding of words, characters and image. Modality attention applied to this triplet indicates the importance of each embedding and their impact on the output while reducing the impact of irrelevant modals. Modality attention layer is applied to all embedding vectors for each modal, however the investigation of fine-grained attention mechanism is still unclear BIBREF53. The proposed method with Inception feature extraction BIBREF54 and pretrained GloVe word vectors shows good results on the dataset that the authors aggregated from Snapchat. This method shows around 0.5 for precision and F-measure for four entity types (person, location, organization and misc) while for segmentation tasks (distinguishing between a named entity and a non-named entity) it shows around 0.7 for the metrics mentioned.", "An adaptive co-attention neural network with four generations are proposed in BIBREF8. The adaptive co-attention part is similar to the multimodal attention proposed in BIBREF46 that enabled the authors to have better results over the dataset they collected from Twitter. In their main proposal, convolutional layers are used for word representation, BiLSTM is utilized to combine word and character embeddings and an attention layer combines the best of the triplet (word, character and image features). VGG-Net16 BIBREF55 is used as a feature extractor for image while the impact of other deep image feature extractors on the proposed solution is unclear, however the results show its superiority over related unimodal methods." ], [ "In the present work, we propose a new multimodal deep approach (CWI) that is able to handle noise by co-learning semantics from three modalities, character, word and image. Our method is composed of three parts, convolutional character embedding, joint word embedding (fastText-GloVe) and InceptionV3 image feature extraction BIBREF54, BIBREF23, BIBREF22. Figure FIGREF11 shows CWI architecture in more detail.", "Character Feature Extraction shown in the left part of figure FIGREF11 is a composition of six layers. Each sequence of words from a single tweet, $\\langle w_1, w_2, \\dots , w_n \\rangle $ is converted to a sequence of character representation $\\langle [c_{(0,0)}, c_{(0,1)}, \\dots , c_{(0,k)}], \\dots , [c_{(n,0)}, c_{(n,1)}, \\dots , c_{(n,k)}] \\rangle $ and in order to apply one dimensional convolution, it is required to be in a fixed length. $k$ shows the fixed length of the character sequence representing each word. Rather than using the one-hot representation of characters, a randomly initialized (uniform distribution) embedding layer is used. The first three convolution layers are followed by a one dimensional pooling layer. In each layer, kernel size is increased incrementally from 2 to 4 while the number of kernels are doubled starting from 16. Just like the first part, the second segment of this feature extractor uses three layers but with slight changes. Kernel size is reduced starting from 4 to 2 and the number of kernels is halved starting from 64. In this part, $\\otimes $ sign shows concatenation operation. TD + GN + SineRelu note targeted dropout, group normalization and sine-relu BIBREF56, BIBREF57, BIBREF58. 
These layers prevent the character feature extractor from overfitting. Equation DISPLAY_FORM12 defines the SineRelu activation function, which is slightly different from Relu.", "Instead of using zero in the second part of this equation, $\epsilon (\sin {x}-\cos {x})$ has been used for negative inputs; $\epsilon $ is a hyperparameter that controls the amplitude of the $\sin {x}-\cos {x}$ wave. This slight change prevents the network from having dead neurons and, unlike Relu, it is differentiable everywhere. On the other hand, it has been proven that using GroupNormalization provides better results than BatchNormalization on various tasks BIBREF57.", "Although dropout provides a major improvement to the neural network as an overfitting prevention technique BIBREF59, in our setup TargetedDropout provides better results. TargetedDropout randomly drops neurons whose output is over a threshold.", "Word Feature Extraction is presented in the middle part of figure FIGREF11. Joint embedding of the pretrained word vectors of GloVe BIBREF22 and fastText BIBREF23 through a concatenation operation results in a 500 dimensional word embedding. In order to have forward and backward information for each hidden layer, we used a bidirectional long-short term memory BIBREF25, BIBREF26. For the words which were not in the pretrained tokens, we used a random initialization (uniform initialization) between -0.25 and 0.25 at each embedding. The result of this phase is the extracted features for each word.", "Image Feature Extraction is shown in the right part of figure FIGREF11. For this part, we have used InceptionV3 pretrained on ImageNet BIBREF60. Many models were available as the first part of image feature extraction; however, the main reason we used InceptionV3 as the feature extractor backbone is its better performance on ImageNet, and the results obtained by this particular model were slightly better compared to others.", "Instead of using the headless version of InceptionV3 for image feature extraction, we have used the full model which outputs the 1000 classes of ImageNet. Each of these classes resembles an item, and the set of these items can present a person, location or anything that is identified as a whole. To have better features extracted from the image, we have used an embedding layer. In other words, we looked at the top 5 extracted probabilities as words, as shown in eq. DISPLAY_FORM16; based on our assumption, these five words present textual keywords related to the image, and the combination of these words should provide useful information about the objects in visual data. An LSTM unit has been used to output the final image features. These combined embeddings from the most probable items in the image are the key to having extra information from a social media post.", "where $IW$ is the image-word vector, $x$ is the output of InceptionV3 and $i$ is the image. $x$ is in the domain of [0,1] and $\sum \limits _{\forall k\in x}k=1$ holds true, while $\sum \limits _{\forall k\in IW}k\le 1$.", "Multimodal Fusion in our work is presented as the concatenation of three feature sets extracted from words, characters and images. Unlike previous methods, our original work does not include an attention layer to remove noisy features. Instead, we stacked LSTM units from the word and image feature extractors to have better results. The last layer presented at the top right side of figure FIGREF11 shows this part. In our second proposed method, we have used an attention layer applied to this triplet.
Our proposed attention mechanism is able to detect on which modality to increase or decrease focus. Equations DISPLAY_FORM17, and show attention mechanism related to second proposed model.", "Conditional Random Field is the last layer in our setup which forms the final output. The same implementation explained in eq. DISPLAY_FORM5 is used for our method." ], [ "The present section provides evaluation results of our model against baselines. Before diving into our results, a brief description of dataset and its statistics are provided." ], [ "In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets." ], [ "In order to obtain the best results in tab. TABREF20 for our first model (CWI), we have used the following setup in tables TABREF22, TABREF23, TABREF24 and TABREF25. For the second proposed method, the same parameter settings have been used with an additional attention layer. This additional layer has been added after layer 31 in table TABREF25 and before the final CRF layer, indexed as 32. $Adam$ optimizer with $8\\times 10^{-5}$ has been used in training phase with 10 epochs." ], [ "Table TABREF20 presents evaluation results of our proposed models. Compared to other state of the art methods, our first proposed model shows $1\\%$ improvement on f1 score. The effect of different word embedding sizes on our proposed method is presented in TABREF26. Sensitivity to TD+SineRelu+GN is presented in tab. TABREF28." ], [ "In this article we have proposed a novel named entity recognizer based on multimodal deep learning. In our proposed model, we have used a new architecture in character feature extraction that has helped our model to overcome the issue of noise. Instead of using direct image features from near last layers of image feature extractors such as Inception, we have used the direct output of the last layer. This last layer which is 1000 classes of diverse objects that is result of InceptionV3 trained on ImageNet dataset. We used top 5 classes out of these and converted them to one-hot vectors. The resulting image feature embedding out of these high probability one-hot vectors helped our model to overcome the issue of noise in images posted by social media users. Evaluation results of our proposed model compared to other state of the art methods show its superiority to these methods in overall while in two categories (Person and Miscellaneous) our model outperformed others." ] ] }
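As a small illustration of the SineRelu activation used in the character feature extractor described above (the identity for positive inputs and ε(sin x − cos x) for negative ones), a NumPy sketch might look like the following; the ε value shown is an arbitrary placeholder, since the paper treats it as a tunable hyperparameter controlling the amplitude of the negative branch.

```python
import numpy as np

def sine_relu(x, epsilon=0.0025):
    """SineRelu: x for x > 0, otherwise epsilon * (sin(x) - cos(x)).

    Unlike ReLU, the negative branch is non-zero, which is the property
    the authors rely on to avoid dead neurons.
    """
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, epsilon * (np.sin(x) - np.cos(x)))

print(sine_relu([-2.0, -0.5, 0.0, 1.5]))
```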
{ "question": [ "Which social media platform is explored?", "What datasets did they use?", "What are the baseline state of the art models?" ], "question_id": [ "0106bd9d54e2f343cc5f30bb09a5dbdd171e964b", "e015d033d4ee1c83fe6f192d3310fb820354a553", "8a871b136ccef78391922377f89491c923a77730" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "social media", "social media", "social media" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "twitter " ], "yes_no": null, "free_form_answer": "", "evidence": [ "In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets." ], "highlighted_evidence": [ "In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets.\n\n" ] } ], "annotation_id": [ "0c5be00c50cc9fa7c1921c32aca6b2cb254dd249" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BIBREF8 a refined collection of tweets gathered from twitter" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets." ], "highlighted_evidence": [ "In BIBREF8 a refined collection of tweets gathered from twitter is presented. Their dataset which is labeled for named entity recognition task contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in training, development and test sets.\n\n" ] } ], "annotation_id": [ "d8f8f58e892ccf7370b6a3224007cc8240468fdf" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention", "evidence": [ "FLOAT SELECTED: Table 3: Evaluation results of different approaches compared to ours" ], "highlighted_evidence": [ "FLOAT SELECTED: Table 3: Evaluation results of different approaches compared to ours" ] } ], "annotation_id": [ "97c19183567ea4de915809602b70217ba8fb19bb" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Figure 1: A Tweet containing Image and Text: Geoffrey Hinton and Demis Hassabis are referred in text and respective images are provided with Tweet", "Table 1: BIO Tags and their respective meaning", "Figure 2: Proposed CWI Model: Character (left), Word (middle) and Image (right) feature extractors combined by bidirectional long-short term memory and the conditional random field at the end", "Table 2: Statistics of named entity types in train, development and test sets [9]", "Table 3: Evaluation results of different approaches compared to ours", "Table 6: Implementation details of our model (CWI): Image Feature Extractor", "Table 8: Effect of different word embedding sizes on our proposed model" ], "file": [ "2-Figure1-1.png", "3-Table1-1.png", "6-Figure2-1.png", "8-Table2-1.png", "8-Table3-1.png", "9-Table6-1.png", "10-Table8-1.png" ] }
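The image branch of the CWI model described above keeps the full, headed InceptionV3 and treats its top-5 ImageNet classes as pseudo-words that are then embedded. A minimal sketch of that extraction step, assuming TensorFlow/Keras and a hypothetical image path (the downstream embedding and LSTM layers are omitted), could be:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)

# Full (headed) InceptionV3, so the output is the 1000-way ImageNet class distribution.
model = InceptionV3(weights="imagenet")

def top5_class_words(image_path):
    """Return the top-5 ImageNet class names for an image, to be used as pseudo-words."""
    img = tf.keras.utils.load_img(image_path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
    preds = model.predict(x)
    return [name for _, name, _ in decode_predictions(preds, top=5)[0]]

# Hypothetical usage: top5_class_words("tweet_image.jpg") -> e.g. ["park_bench", "lakeside", ...]
```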
1911.00547
Uncover Sexual Harassment Patterns from Personal Stories by Joint Key Element Extraction and Categorization
The number of personal stories about sexual harassment shared online has increased exponentially in recent years. This is in part inspired by the #MeToo and #TimesUp movements. Safecity is an online forum for people who experienced or witnessed sexual harassment to share their personal experiences. It has collected >10,000 stories so far. Sexual harassment occurred in a variety of situations, and categorization of the stories and extraction of their key elements will provide great help for the related parties to understand and address sexual harassment. In this study, we manually annotated those stories with labels in the dimensions of location, time, and harassers' characteristics, and marked the key elements related to these dimensions. Furthermore, we applied natural language processing technologies with joint learning schemes to automatically categorize these stories in those dimensions and extract key elements at the same time. We also uncovered significant patterns from the categorized sexual harassment stories. We believe our annotated data set, proposed algorithms, and analysis will help people who have been harassed, authorities, researchers and other related parties in various ways, such as automatically filling reports, enlightening the public in order to prevent future harassment, and enabling more effective, faster action to be taken.
{ "section_name": [ "Introduction", "Related Work", "Data Collection and Annotation", "Proposed Models", "Proposed Models ::: CNN Based Joint Learning Models", "Proposed Models ::: BiLSTM Based Joint Learning Models", "Experiments and Results ::: Experimental Settings", "Experiments and Results ::: Results and Discussions", "Patterns of Sexual Harassment", "Conclusions", "Acknowledgments" ], "paragraphs": [ [ "Sexual violence, including harassment, is a pervasive, worldwide problem with a long history. This global problem has finally become a mainstream issue thanks to the efforts of survivors and advocates. Statistics show that girls and women are put at high risk of experiencing harassment. Women have about a 3 in 5 chance of experiencing sexual harassment, whereas men have slightly less than a 1 in 5 chance BIBREF0, BIBREF1, BIBREF2. While women in developing countries are facing distinct challenges with sexual violence BIBREF3, sexual violence is ubiquitous. In the United States, for example, there are on average >300,000 people who are sexually assaulted every year BIBREF4. Additionally, these numbers could be underestimated, due to reasons like guilt, blame, doubt and fear, which stopped many survivors from reporting BIBREF5. Social media can be a more open and accessible channel for those who have experienced harassment to be empowered to freely share their traumatic experiences and to raise awareness of the vast scale of sexual harassment, which then allows us to understand and actively address abusive behavior as part of larger efforts to prevent future sexual harassment. The deadly gang rape of a medical student on a Delhi bus in 2012 was a catalyst for protest and action, including the development of Safecity, which uses online and mobile technology to work towards ending sexual harassment and assault. More recently, the #MeToo and #TimesUp movements further demonstrate how reporting personal stories on social media can raise awareness and empower women. Millions of people around the world have come forward and shared their stories. Instead of being bystanders, more and more people become up-standers, who take action to protest against sexual harassment online. The stories of people who experienced harassment can be studied to identify different patterns of sexual harassment, which can enable solutions to be developed to make streets safer and to keep women and girls more secure when navigating city spaces BIBREF6. In this paper, we demonstrated the application of natural language processing (NLP) technologies to uncover harassment patterns from social media data. We made three key contributions:", "1. Safecity is the largest publicly-available online forum for reporting sexual harassment BIBREF6. We annotated about 10,000 personal stories from Safecity with the key elements, including information about the harasser (i.e. the words describing the harasser), time, location and the trigger words (i.e. the phrases that indicate the harassment that occurred). The key elements are important for studying the patterns of harassment and victimology BIBREF5, BIBREF7. Furthermore, we also associated each story with five labels that characterize the story in multiple dimensions (i.e. age of harasser, single/multiple harasser(s), type of harasser, type of location and time of day). The annotation data are available online.", "2.
We proposed joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) BIBREF9, BIBREF10 as basic units. Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. The proposed models outperformed the single task models, and achieved higher than previously reported accuracy in classifications of harassment forms BIBREF6.", "3. We uncovered significant patterns from the categorized sexual harassment stories." ], [ "Conventional surveys and reports are often used to study sexual harassment, but harassment on these is usually under-reported BIBREF2, BIBREF5. The high volume of social media data available online can provide us a much larger collection of firsthand stories of sexual harassment. Social media data has already been used to analyze and predict distinct societal and health issues, in order to improve the understanding of wide-reaching societal concerns, including mental health, detecting domestic abuse, and cyberbullying BIBREF11, BIBREF12, BIBREF13, BIBREF14.", "There are a very limited number of studies on sexual harassment stories shared online. Karlekar and Bansal karlekar2018safecity were the first group to our knowledge that applied NLP to analyze large amount ( $\\sim $10,000) of sexual harassment stories. Although their CNN-RNN classification models demonstrated high performance on classifying the forms of harassment, only the top 3 majority forms were studied. In order to study the details of the sexual harassment, the trigger words are crucial. Additionally, research indicated that both situational factors and person (or individual difference) factors contribute to sexual harassment BIBREF15. Therefore, the information about perpetrators needs to be extracted as well as the location and time of events. Karlekar and Bansal karlekar2018safecity applied several visualization techniques in order to capture such information, but it was not obtained explicitly. Our preliminary research demonstrated automatic extraction of key element and story classification in separate steps BIBREF16. In this paper, we proposed joint learning NLP models to directly extract the information of the harasser, time, location and trigger word as key elements and categorize the harassment stories in five dimensions as well. Our approach can provide an avenue to automatically uncover nuanced circumstances informing sexual harassment from online stories." ], [ "We obtained 9,892 stories of sexual harassment incidents that was reported on Safecity. Those stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser\", “time\", “location\", “trigger\"), because they are essential to uncover the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of classifications in all dimensions are explained below.", "Age of Harasser: Individual difference such as age can affect harassment behaviors. Therefore, we studied the harassers in two age groups, young and adult. 
Young people in this paper refer to people in their early 20s or younger.", "Single/Multiple Harasser(s): Harassers may behave differently in groups than they do alone.", "Type of Harasser: Person factors in harassment include the common relationships or titles of the harassers. Additionally, the reactions of people who experience harassment may vary with the harassers' relations to themselves BIBREF5. We defined 10 groups with respect to the harassers' relationships or titles. We put conductors and drivers in one group, as they both work on public transportation. Police and guards are put in the same category, because they are employed to provide security. Managers, supervisors, and colleagues are in the work-related group. The others are described by their names.", "Type of Location: It will be helpful to reveal the places where harassment most frequently occurs BIBREF7, BIBREF6. We defined 14 types of locations. “Station/stop” refers to places where people wait for public transportation or buy tickets. Private places include survivors' or harassers' home, places of parties, etc. The others are described by their names.", "Time of Day: The time of an incident may be reported as “in evening” or at a specific time, e.g. “10 pm”. We considered 5 am to 6 pm as day time, and the rest of the day as the night.", "Because many of the stories collected are short, many do not contain all of the key elements. For example, “A man came near to her tried to be physical with her .”. The time and location are unknown from the story. In addition, the harassers were strangers to those they harassed in many cases. For instance, “My friend was standing in the queue to pay bill and was ogled by a group of boys.”, we can only learn that there were multiple young harassers, but the type of harasser is unclear. The missing information is hence marked as “unspecified”. It is different from the label “other\", which means the information is provided but the number of them is too small to be represented by a group, for example, a “trader”.", "All the data were labeled by two annotators with training. Inter-rater agreement was measured by Cohen's kappa coefficient, ranging from 0.71 to 0.91 for classifications in different dimensions and 0.75 for key element extraction (details can be found in Table 1 in the supplementary file). The disagreements were reviewed by a third annotator and a final decision was made." ], [ "The key elements can be very informative when categorizing the incidents. For instance, in Figure 1, with identified key elements, one can easily categorize the incident in dimensions of “age of harasser” (adult), “single/multiple harasser(s)” (single), “type of harasser” (unspecified), “type of location” (park), “time of day” (day time). Therefore, we proposed two joint learning schemes to extract the key elements and categorize the incidents together. In the models' names, “J”, “A”, “SA” stand for joint learning, attention, and supervised attention, respectively." ], [ "In Figure FIGREF6, the first proposed structure consists of two layers of CNN modules.", "J-CNN: To predict the type of key element, it is essential for the CNN model to capture the context information around each word. Therefore, the word along with its surrounding context of a fixed window size was converted into a context sequence.
Assuming a window size of $2l + 1$ around the target word $w_0$, the context sequence is $(w_{-l}, w_{-l+1}, \ldots, w_0, \ldots, w_{l-1}, w_l)$, where $w_i$ ($i \in [-l,l]$) stands for the $i$th word relative to $w_0$.", "Because the contexts of two consecutive words in the original text are offset by only one position, it will be difficult for the CNN model to detect the difference. Therefore, the position of each word in this context sequence is crucial information for the CNN model to make the correct predictions BIBREF17. That position was embedded as a $p$-dimensional vector, where $p$ is a hyperparameter. The position embeddings were learned at the training stage. Each word in the original text was then converted into a sequence of the concatenation of word and position embeddings. This sequence was fed into the CNN modules in the first layer of the model, which output the high-level word representations ($h_i$, $i\in [0,n-1]$, where $n$ is the number of input words). Each high-level word representation was then passed into a fully connected layer to predict the key element type of the word. The CNN modules in this layer share the same parameters.", "We input the sequence of high-level word representations ($h_i$) from the first layer into another layer of multiple CNN modules to categorize the harassment incident in each dimension (Figure FIGREF6). Inside each CNN module, the sequence of word representations was first passed through a convolution layer to generate a sequence of new feature vectors ($C = [c_0, c_1, \ldots, c_q]$). This vector sequence ($C$) was then fed into a max pooling layer. This is followed by a fully connected layer. Modules in this layer do not share parameters across classification tasks.", "J-ACNN: We also experimented with attentive pooling, by replacing the max pooling layer. The attention layer aggregates the sequence of feature vectors ($C$) by measuring the contribution of each vector to form the high-level representation of the harassment story. Specifically, $u_{i} = \tanh (W_{\omega } c_{i} + b_{\omega })$, $\alpha _{i} = \mathrm{softmax}(u_{i}^{\top } u_{w})$, and $v = \sum _{i} \alpha _{i} c_{i}$.", "That is, a fully connected layer with non-linear activation was applied to each vector $c_{i}$ to get its hidden representation $u_{i}$. The similarity of $u_{i}$ with a context vector $u_{w}$ was measured and normalized through a softmax function to give the importance weight $\alpha _{i}$. The final representation of the incident story $v$ was an aggregation of all the feature vectors weighted by $\alpha _{i}$. $W_{\omega }$, $b_{\omega }$ and $u_{w}$ were learned during training.", "The final representation ($v$) was passed into one fully connected layer for each classification task. We also applied different attention layers for different classification tasks: because the modules categorize the incident along different dimensions, their focuses vary. For example, to classify “time of day” one needs to focus on time phrases, whereas classifying “age of harasser” requires more attention on the harasser.", "J-SACNN: To further exploit the information of the key elements, we applied supervision BIBREF18 to the attentive pooling layer, with the annotated key element types of the words as ground truth. For instance, in the classification of “age of harasser”, the ground truth attention labels for words with key element type “harasser” are 1 and others are 0. To conform to the CNN structure, we applied convolution to the sequence of ground truth attention labels, with the same window size ($w$) that was applied to the word sequence (Eq. DISPLAY_FORM11).",
"where $\circ $ is element-wise multiplication, $e_t$ is the ground truth attention label, and $W \in R^{w\times 1}$ is a constant matrix with all elements equal to 1. $\alpha ^{*}$ was normalized through a softmax function and used as the ground truth weight values of the vector sequence ($C$) output from the convolution layer. The loss was calculated between the learned attention $\alpha $ and $\alpha ^{*}$ (Eq. DISPLAY_FORM12), and added to the total loss." ], [ "J-BiLSTM: The model fed the sequence of word embeddings into the BiLSTM layer. To extract key elements, the hidden states from the forward and backward LSTM cells were concatenated and used as word representations to predict the key element types.", "To classify the harassment story in different dimensions, the concatenation of the forward and backward final states of the BiLSTM layer was used as the document-level representation of the story.", "J-ABiLSTM: We also experimented with a BiLSTM model with an attention layer to aggregate the outputs of the BiLSTM layer (Figure FIGREF7). The aggregation of the outputs was used as the document-level representation.", "J-SABiLSTM: Similarly, we experimented with the supervised attention.", "In all the models, the softmax function was used to calculate the probabilities at the prediction step, and the cross-entropy losses from the extraction and classification tasks were added together. In the case of supervised attention, the loss defined in Eq. DISPLAY_FORM12 was added to the total loss as well. We applied the stochastic gradient descent algorithm with mini-batches and the AdaDelta update rule (rho=0.95 and epsilon=1e-6) BIBREF19, BIBREF20. The gradients were computed using back-propagation. During training, we also optimized the word and position embeddings." ], [ "Data Splits: We used the same train, development, and test splits used by Karlekar and Bansal BIBREF6, with 7201, 990, and 1701 stories, respectively. In this study, we only considered single-label classifications.", "Baseline Models: CNN and BiLSTM models that perform classification and extraction separately were used as baseline models. In classification, we also experimented with a BiLSTM with an attention layer. To demonstrate that the improvement came from the joint learning structure rather than the two-layer structure of J-CNN, we investigated the same model structure without training on key element extraction. We use J-CNN* to denote it.", "Preprocessing: All the texts were converted to lowercase and preprocessed by removing non-alphanumeric characters except “.”, “!”, and “?”. The word embeddings were pre-trained using fastText BIBREF21 with a dimension of 100.", "Hyperparameters: For the CNN model, the filter sizes were chosen to be (1,2,3,4), with 50 filters per filter size. The batch size was set to 50 and the dropout rate was 0.5. The BiLSTM model comprises two one-directional LSTM layers. Every LSTM cell has 50 hidden units. The dropout rate was 0.25. The attention size was 50." ], [ "We compared the joint learning models with the single task models. Results are averages over five experiments. Although not much improvement was achieved in key element extraction (Table TABREF16), classification performance improved significantly with the joint learning schemes (Table TABREF17). Significance t-test results are shown in Table 2 in the supplementary file.", "BiLSTM Based Models: Joint learning BiLSTM with attention outperformed the single task BiLSTM models. One reason is that it directed the attention of the model to the correct part of the text.
For example,", "S1: “ foogreen!1.7003483371809125 foowhen foogreen!3.4324652515351772 fooi foogreen!10.76661329716444 foowas foogreen!20.388443022966385 fooreturning foogreen!9.704475291073322 foomy foogreen!6.052316632121801 foohome foogreen!2.477810252457857 fooafter foogreen!3.5612427163869143 foofinishing foogreen!4.7736018896102905 foomy foogreen!4.634172189980745 fooclass foogreen!0.6899426807649434 foo. foogreen!0.35572052001953125 fooi foogreen!0.3427551419008523 foowas foogreen!0.293194578262046 fooin foogreen!0.2028885210165754 fooqueue foogreen!0.10553237370913848 footo foogreen!0.19472737039905041 fooget foogreen!0.44946340494789183 fooon foogreen!0.5511227645911276 foothe foogreen!2.056689700111747 foomicro foogreen!2.597035141661763 foobus foogreen!2.5683704297989607 fooand foogreen!4.6382867731153965 foothere foogreen!9.827975183725357 foowas foogreen!21.346069872379303 fooa foogreen!22.295180708169937 foogirl foogreen!11.672522872686386 fooopposite foogreen!8.892465382814407 footo foogreen!18.20233091711998 foome foogreen!13.192926533520222 foojust foogreen!26.24184638261795 foothen foogreen!40.2555949985981 fooa foogreen!30.108729377388954 fooyoung foogreen!115.02625793218613 fooman foogreen!93.40204298496246 footried foogreen!58.68498980998993 footo foogreen!144.01434361934662 footouch foogreen!108.82275551557541 fooher foogreen!80.9452086687088 fooon foogreen!47.26015031337738 foothe foogreen!47.71501570940018 foobreast foogreen!19.392695277929306 foo.”", "S2: “ foogreen!0.2212507533840835 foowhen foogreen!0.26129744946956635 fooi foogreen!0.3014186804648489 foowas foogreen!0.314583390718326 fooreturning foogreen!0.23829322890378535 foomy foogreen!0.018542312318459153 foohome foogreen!0.06052045864635147 fooafter foogreen!0.3865368489641696 foofinishing foogreen!0.5127551266923547 foomy foogreen!0.569560332223773 fooclass foogreen!0.037081812479300424 foo. foogreen!0.061129467212595046 fooi foogreen!0.12043083552271128 foowas foogreen!0.2053432835964486 fooin foogreen!0.038308095099637285 fooqueue foogreen!0.05270353358355351 footo foogreen!0.07939991337480024 fooget foogreen!0.14962266141083091 fooon foogreen!0.11444976553320885 foothe foogreen!0.013002995729038958 foomicro foogreen!0.016201976904994808 foobus foogreen!0.14046543219592422 fooand foogreen!0.12413455988280475 foothere foogreen!0.18423641449771821 foowas foogreen!0.3394613158889115 fooa foogreen!1.0372470133006573 foogirl foogreen!0.20553644571918994 fooopposite foogreen!0.2821453963406384 footo foogreen!0.5574009846895933 foome foogreen!0.2709480468183756 foojust foogreen!0.2582515007816255 foothen foogreen!0.9223996312357485 fooa foogreen!788.9420390129089 fooyoung foogreen!199.1765946149826 fooman foogreen!0.39259070763364434 footried foogreen!0.27069455245509744 footo foogreen!0.5092779756523669 footouch foogreen!0.7033208385109901 fooher foogreen!0.6793316570110619 fooon foogreen!0.5892394692637026 foothe foogreen!0.4084075626451522 foobreast foogreen!0.14951340563129634 foo.”", "S3: “ foogreen!0.23944019631017 foowhen foogreen!0.16698541003279388 fooi foogreen!0.3381385176908225 foowas foogreen!0.21315943740773946 fooreturning foogreen!0.3222442464902997 foomy foogreen!0.8483575657010078 foohome foogreen!0.10339960863348097 fooafter foogreen!0.2440519310766831 foofinishing foogreen!0.39699181797914207 foomy foogreen!1.2218113988637924 fooclass foogreen!0.1232976937899366 foo. 
foogreen!0.10928708070423454 fooi foogreen!0.2562549489084631 foowas foogreen!0.8099888218566775 fooin foogreen!2.9650430660694838 fooqueue foogreen!0.507337914314121 footo foogreen!0.727736041881144 fooget foogreen!0.7367140497080982 fooon foogreen!0.711284636054188 foothe foogreen!194.2763775587082 foomicro foogreen!786.8869304656982 foobus foogreen!0.4422159108798951 fooand foogreen!0.43104542419314384 foothere foogreen!0.4694198723882437 foowas foogreen!0.5085613229312003 fooa foogreen!0.4430979897733778 foogirl foogreen!0.36199347232468426 fooopposite foogreen!0.31067250529304147 footo foogreen!0.2927705936599523 foome foogreen!0.24646619567647576 foojust foogreen!0.23911069729365408 foothen foogreen!0.11775700113503262 fooa foogreen!0.002219072712250636 fooyoung foogreen!0.0019248132048232947 fooman foogreen!0.32698659924790263 footried foogreen!0.3118939639534801 footo foogreen!0.5727249081246555 footouch foogreen!0.5670131067745388 fooher foogreen!0.7104063988663256 fooon foogreen!0.6698771030642092 foothe foogreen!0.4756081907544285 foobreast foogreen!0.26600153069011867 foo.”", "In S1, the regular BiLSTM with attention model for classification on “age of harasser” put some attention on phrases other than the harasser, and hence aggregated noise. This could explain why the regular BiLSTM model got lower performance than the CNN model. However, when training with key element extractions, it put almost all attention on the harasser “young man” (S2), which helped the model make correct prediction of “young harasser”. When predicting the “type of location” (S3), the joint learning model directed its attention to “micro bus”.", "CNN Based Models: Since CNN is efficient for capturing the most useful information BIBREF22, it is quite suitable for the classification tasks in this study. It achieved better performance than the BiLSTM model. The joint learning method boosted the performance even higher. This is because the classifications are related to the extracted key elements, and the word representation learned by the first layer of CNNs (Figure FIGREF6) is more informative than word embedding. By plotting of t-SNEs BIBREF23 of the two kinds of word vectors, we can see the word representations in the joint learning model made the words more separable (Figure 1 in supplementary file). In addition, no improvement was found with the J-CNN* model, which demonstrated the joint learning with extraction is essential for the improvement.", "With supervised attentive pooling, the model can get additional knowledge from key element labels. It helped the model in cases when certain location phrases were mentioned but the incidents did not happen at those locations. For instance, “I was followed on my way home .”, max pooling will very likely to predict it as “private places”. But, it is actually unknown. In other cases, with supervised attentive pooling, the model can distinguish “metro” and “metro station”, which are “transportation” and “stop/station” respectively. Therefore, the model further improved on classifications on “type of location” with supervised attention in terms of macro F1. For some tasks, like “time of day”, there are fewer cases with such disambiguation and hence max pooling worked well. Supervised attention improved macro F1 in location and harasser classifications, because it made more correct predictions in cases that mentioned location and harasser. But the majority did not mention them. 
Therefore, the accuracy of J-SACNN did not increase compared with the other models.", "Classification on Harassment Forms: In Table TABREF18, we also compared the performance of the binary classifications on harassment forms with the results reported by Karlekar and Bansal karlekar2018safecity. The joint learning models achieved higher accuracy. In some harassment stories, the whole text or a span of the text consists of trigger words of multiple forms, such as “stare, whistles, start to sing, commenting”. The supervised attention mechanism forces the model to look at all such words rather than just the one related to the harassment form being classified, and hence it can introduce noise. This can explain why J-SACNN got lower accuracy in two of the harassment form classifications, compared to J-ACNN. In addition, the J-CNN model did best on the “ogling” classification." ], [ "We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently at night than during the day. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers top the list of identified types of harassers, followed by friends and relatives.", "Furthermore, we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). The significance of the correlations was tested with chi-square tests of independence (p value less than 0.05). Identifying these patterns will enable interventions to be differentiated for and targeted at specific populations. For instance, young harassers often engage in harassment activities in groups. This points to the influence of peer pressure and masculine behavioral norms for men and boys on these activities. We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. It also points to the locations where such efforts could be made, including both in schools and on the streets. In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations mean that interventions should be responsive to these factors, for example, by increasing security measures on transit at key times and locations.", "In addition, we found correlations between the forms of harassment and the age, single/multiple harasser(s), type of harasser, and location (Figure FIGREF21). For example, young harassers are more likely than adults to engage in verbal rather than physical harassment. Touching or groping was more often carried out by a single perpetrator than by a group of perpetrators. In contrast, commenting happened more frequently when harassers were in groups.
Last but not least, public transportation is where people got indecently touched most frequently, both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed or who witness the harassment to respond to and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a closed, shared-space setting, while other strategies might be more effective in the open space of the street.", "These results can provide valuable information for all members of the public. Sharing stories of harassment has been found by researchers to shift people’s cognitive and emotional orientation towards their traumatic experiences BIBREF24. Greater awareness of the patterns and scale of harassment experiences promises to assure those who have been subjected to this violence that they are not alone, to empower others to report incidents, and to reassure them that efforts are being made to prevent others from experiencing the same harassment. These results also provide various authorities with tools to identify potential harassment patterns and to make more effective interventions to prevent further harassment incidents. For instance, authorities can increase targeted educational efforts aimed at youth and adults, and be guided to use limited resources most effectively to offer more safety measures, including policing and community-based responses, for example by focusing efforts on highly populated public transportation at night, when harassment is found to be most likely to occur." ], [ "We provided a large number of annotated personal stories of sexual harassment. Analyzing and identifying the social patterns of harassment behavior is essential to changing these patterns and the social tolerance for them. We demonstrated joint learning NLP models with strong performance that automatically extract key elements and categorize the stories. Potentially, the approaches and models proposed in this study can be applied to sexual harassment stories from other sources, to process and summarize the harassment stories and help those who have experienced harassment, as well as authorities, to work faster, such as by automatically filing reports BIBREF6. Furthermore, we discovered meaningful patterns in the situations where harassment commonly occurred. The volume of social media data is huge, and the more we can extract from these data, the more powerful we can be as part of the efforts to build safer and more inclusive communities. Our work can increase the understanding of sexual harassment in society, ease the processing of such incidents by advocates and officials, and most importantly, raise awareness of this urgent problem." ], [ "We thank Safecity for granting permission to use the data." ] ] }
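As an illustration of the joint CNN scheme described in the sections above (context windows with position embeddings, a shared first layer that tags key-element types, and task-specific attentive-pooling heads with an optional supervised-attention loss), a minimal sketch in Python/PyTorch follows. This is not the authors' code: the layer sizes, class counts, activation choices, and the simplified form of the supervised-attention loss are assumptions made only for illustration.

```python
# Minimal PyTorch sketch of a joint CNN model in the J-CNN / J-ACNN style described above.
# Shapes, sizes, and names are illustrative, not the authors' exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, pos_dim=10, window=5,
                 n_element_types=5, task_n_classes=(3, 3, 12, 15, 3), n_filters=50):
        super().__init__()
        self.window = window                                  # context words on each side
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(2 * window + 1, pos_dim)  # relative position in the window
        in_dim = emb_dim + pos_dim
        # First layer: convolution over each word's context window -> word representation h_i
        self.word_conv = nn.Conv1d(in_dim, n_filters, kernel_size=3, padding=1)
        self.element_out = nn.Linear(n_filters, n_element_types)  # key-element tag per word
        # Second layer: one attentive-pooling head per classification dimension
        self.heads = nn.ModuleList()
        for n_cls in task_n_classes:
            self.heads.append(nn.ModuleDict({
                "conv": nn.Conv1d(n_filters, n_filters, kernel_size=3, padding=1),
                "att_proj": nn.Linear(n_filters, n_filters),
                "att_ctx": nn.Linear(n_filters, 1, bias=False),    # context vector u_w
                "out": nn.Linear(n_filters, n_cls),
            }))

    def forward(self, ctx_words, ctx_positions):
        # ctx_words, ctx_positions: (batch, n_words, 2*window+1)
        b, n, w = ctx_words.shape
        x = torch.cat([self.word_emb(ctx_words), self.pos_emb(ctx_positions)], dim=-1)
        x = x.view(b * n, w, -1).transpose(1, 2)                   # (b*n, in_dim, w)
        h = F.relu(self.word_conv(x)).max(dim=-1).values           # (b*n, n_filters)
        element_logits = self.element_out(h).view(b, n, -1)        # per-word key-element logits
        h_seq = h.view(b, n, -1).transpose(1, 2)                   # (b, n_filters, n)
        task_logits, task_attn = [], []
        for head in self.heads:
            c = F.relu(head["conv"](h_seq)).transpose(1, 2)        # (b, n, n_filters)
            u = torch.tanh(head["att_proj"](c))
            alpha = F.softmax(head["att_ctx"](u).squeeze(-1), dim=-1)  # attention weights
            v = torch.bmm(alpha.unsqueeze(1), c).squeeze(1)        # attentive pooling
            task_logits.append(head["out"](v))
            task_attn.append(alpha)
        return element_logits, task_logits, task_attn

def supervised_attention_loss(alpha, element_labels_binary):
    # Simplified stand-in for the paper's supervised-attention objective:
    # push the learned attention toward words annotated with the relevant key-element type.
    target = F.softmax(element_labels_binary.float(), dim=-1)
    return F.mse_loss(alpha, target)
```

In training, the cross-entropy losses of the per-word key-element predictions and of each classification head would be summed, with the supervised-attention term added when used, mirroring the joint objective described in the text.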
{ "question": [ "What is the size of the dataset?", "What model did they use?", "What patterns were discovered from the stories?", "Did they use a crowdsourcing platform?" ], "question_id": [ "acd05f31e25856b9986daa1651843b8dc92c2d99", "8c78b21ec966a5e8405e8b9d3d6e7099e95ea5fb", "af60462881b2d723adeb4acb5fbc07ea27b6bde2", "879bec20c0fdfda952444018e9435f91e34d8788" ], "nlp_background": [ "", "", "", "" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ " 9,892 stories of sexual harassment incidents" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We obtained 9,892 stories of sexual harassment incidents that was reported on Safecity. Those stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser\", “time\", “location\", “trigger\"), because they are essential to uncover the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of classifications in all dimensions are explained below." ], "highlighted_evidence": [ "We obtained 9,892 stories of sexual harassment incidents that was reported on Safecity. Those stories include a text description, along with tags of the forms of harassment, e.g. commenting, ogling and groping. A dataset of these stories was published by Karlekar and Bansal karlekar2018safecity. In addition to the forms of harassment, we manually annotated each story with the key elements (i.e. “harasser\", “time\", “location\", “trigger\"), because they are essential to uncover the harassment patterns. An example is shown in Figure FIGREF3. Furthermore, we also assigned each story classification labels in five dimensions (Table TABREF4). The detailed definitions of classifications in all dimensions are explained below." ] } ], "annotation_id": [ "faec3a145f93e8ac3b3fb7d2ec34955e32bad505" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "2. We proposed joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) BIBREF9, BIBREF10 as basic units. Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. The proposed models outperformed the single task models, and achieved higher than previously reported accuracy in classifications of harassment forms BIBREF6." ], "highlighted_evidence": [ "We proposed joint learning NLP models that use convolutional neural network (CNN) BIBREF8 and bi-directional long short-term memory (BiLSTM) BIBREF9, BIBREF10 as basic units. 
Our models can automatically extract the key elements from the sexual harassment stories and at the same time categorize the stories in different dimensions. The proposed models outperformed the single task models, and achieved higher than previously reported accuracy in classifications of harassment forms BIBREF6." ] } ], "annotation_id": [ "0c9bc7918d08fc4a6fc6e260b17c5ece27fed2c5" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "we demonstrate that harassment occurred more frequently during the night time than the day time", "it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives", "we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between the single/multiple harasser(s) and location, and between age and single/multiple harasser(s) ", "We also found that the majority of young perpetrators engaged in harassment behaviors on the streets", "we found that adult perpetrators of sexual harassment are more likely to act alone", "we also found that the correlations between the forms of harassment with the age, single/multiple harasser, type of harasser, and location ", "commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently both by fellow passengers and by conductors and drivers." ], "yes_no": null, "free_form_answer": "", "evidence": [ "We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently during the night time than the day time. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives.", "Furthermore, we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between the single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). The significance of the correlation is tested by chi-square independence with p value less than 0.05. Identifying these patterns will enable interventions to be differentiated for and targeted at specific populations. For instance, the young harassers often engage in harassment activities as groups. This points to the influence of peer pressure and masculine behavioral norms for men and boys on these activities. We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. It also points to the locations where such efforts could be made, including both in schools and on the streets. In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. 
Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations, mean that interventions should be responsive to these factors. For example, increasing the security measures on transit at key times and locations.", "In addition, we also found that the correlations between the forms of harassment with the age, single/multiple harasser, type of harasser, and location (Figure FIGREF21). For example, young harassers are more likely to engage in behaviors of verbal harassment, rather than physical harassment as compared to adults. It was a single perpetrator that engaged in touching or groping more often, rather than groups of perpetrators. In contrast, commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed or who witness the harassment to respond and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a particular closed, shared space setting, while other strategies might be more effective on the open space of the street." ], "highlighted_evidence": [ "We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently during the night time than the day time. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives.", "Furthermore, we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between the single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). ", "We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. It also points to the locations where such efforts could be made, including both in schools and on the streets. ", "In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations, mean that interventions should be responsive to these factors. For example, increasing the security measures on transit at key times and locations.", "In addition, we also found that the correlations between the forms of harassment with the age, single/multiple harasser, type of harasser, and location (Figure FIGREF21). For example, young harassers are more likely to engage in behaviors of verbal harassment, rather than physical harassment as compared to adults. 
It was a single perpetrator that engaged in touching or groping more often, rather than groups of perpetrators.", "In contrast, commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed or who witness the harassment to respond and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a particular closed, shared space setting, while other strategies might be more effective on the open space of the street." ] } ], "annotation_id": [ "eb7816072b64c4eb4279f8d6a8329315e86c2c1d" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [ " foogreen!0.11444976553320885 foothe foogreen!0.013002995729038958 foomicro foogreen!0.016201976904994808 foobus foogreen!0.14046543219592422 fooand foogreen!0.12413455988280475 foothere foogreen!0.18423641449771821 foowas foogreen!0.3394613158889115 fooa foogreen!1.0372470133006573 foogirl foogreen!0.20553644571918994 fooopposite foogreen!0.2821453963406384 footo foogreen!0.5574009846895933 foome foogreen!0.2709480468183756 foojust foogreen!0.2582515007816255 foothen foogreen!0.9223996312357485 fooa", " foogreen!0.11444976553320885 foothe foogreen!0.013002995729038958 foomicro foogreen!0.016201976904994808 foobus foogreen!0.14046543219592422 fooand foogreen!0.12413455988280475 foothere foogreen!0.18423641449771821 foowas foogreen!0.3394613158889115 fooa foogreen!1.0372470133006573 foogirl foogreen!0.20553644571918994 fooopposite foogreen!0.2821453963406384 footo foogreen!0.5574009846895933 foome foogreen!0.2709480468183756 foojust foogreen!0.2582515007816255 foothen foogreen!0.9223996312357485 fooa" ] } ], "annotation_id": [ "73927dfb541dd4d183c474dbc8a960b683bda0e8" ], "worker_id": [ "a0b403873302db7cada39008f04d01155ef68f4f" ] } ] }
{ "caption": [ "Table 1: Definition of classes in different dimensions about sexual harassment.", "Figure 2: CNN based Joint learning Model. WL and WR are the left and right context around each word.", "Figure 3: BiLSM based Joint Learning Model. Here we use an input of five words as an example.", "Table 2: Key element extraction results.", "Table 3: Classification accuracy and macro F1 of the models. The best scores are in bold.", "Table 4: Harassment form classification accuracy of models. * Reported by Karlekar and Bansal (2018)", "Figure 4: Distributions of incidents. A) Distributions over age of harasser, B) over single/multiple harasser(s), C) over time of day, D) over type of harasser. E) over type of location.", "Figure 5: Distributions of incidents over two dimensions. A) Distributions of incidents A) with young/adult harassers at each location, B) with single/multiple harasser(s) at each location, C) across young/adult harassers and single/multiple harasser(s)", "Figure 6: Distributions of incidents with harassment forms and different dimensions. Distributions of harassment forms A) within each age group, B) within single/multiple harasser(s), C) over locations, D) within each harasser type." ], "file": [ "3-Table1-1.png", "4-Figure2-1.png", "4-Figure3-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Table4-1.png", "7-Figure4-1.png", "8-Figure5-1.png", "8-Figure6-1.png" ] }
1604.00117
Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding
The goal of this paper is to use multi-task learning to efficiently scale slot filling models for natural language understanding to handle multiple target tasks or domains. The key to scalability is reducing the amount of training data needed to learn a model for a new task. The proposed multi-task model delivers better performance with less data by leveraging patterns that it learns from the other tasks. The approach supports an open vocabulary, which allows the models to generalize to unseen words; this is particularly important when very little training data is used. A newly collected crowd-sourced data set, covering four different domains, is used to demonstrate the effectiveness of the domain adaptation and open vocabulary techniques.
{ "section_name": [ "Introduction", "Model", "Data", "Experiments", "Training and Model Configuration Details", "Multi-task Model Experiments", "Open Vocabulary Model Experiments", "Conclusions" ], "paragraphs": [ [ "Slot filling models are a useful method for simple natural language understanding tasks, where information can be extracted from a sentence and used to perform some structured action. For example, dates, departure cities and destinations represent slots to fill in a flight booking task. This information is extracted from natural language queries leveraging typical context associated with each slot type. Researchers have been exploring data-driven approaches to learning models for automatic identification of slot information since the 90's, and significant advances have been made BIBREF0 . Our paper builds on recent work on slot-filling using recurrent neural networks (RNNs) with a focus on the problem of training from minimal annotated data, taking an approach of sharing data from multiple tasks to reduce the amount of data for developing a new task.", "As candidate tasks, we consider the actions that a user might perform via apps on their phone. Typically, a separate slot-filling model would be trained for each app. For example, one model understands queries about classified ads for cars BIBREF1 and another model handles queries about the weather BIBREF2 . As the number of apps increases, this approach becomes impractical due to the burden of collecting and labeling the training data for each model. In addition, using independent models for each task has high storage costs for mobile devices.", "Alternatively, a single model can be learned to handle all of the apps. This type of approach is known as multi-task learning and can lead to improved performance on all of the tasks due to information sharing between the different apps BIBREF3 . Multi-task learning in combination with neural networks has been shown to be effective for natural language processing tasks BIBREF4 . When using RNNs for slot filling, almost all of the model parameters can be shared between tasks. In our study, only the relatively small output layer, which consists of slot embeddings, is individual to each app. More sharing means that less training data per app can be used and there will still be enough data to effectively train the network. The multi-task approach has lower data requirements, which leads to a large cost savings and makes this approach scalable to large numbers of applications.", "The shared representation that we build on leverages recent work on slot filling models that use neural network based approaches. Early neural network based papers propose feedforward BIBREF5 or RNN architectures BIBREF6 , BIBREF7 . The focus shifted to RNN's with long-short term memory cells (LSTMs) BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 after LSTMs were shown to be effective for other tasks BIBREF12 . The most recent papers use variations on LSTM sequence models, including encoder-decoder, external memory, or attention architectures BIBREF13 , BIBREF14 , BIBREF15 . The particular variant that we build on is a bidirectional LSTM, similar to BIBREF16 , BIBREF11 .", "One highly desirable property of a good slot filling model is to generalize to previously unseen slot values. For instance, we should not expect that the model will see the names of all the cities during training time, especially when only a small amount of training data is used. 
We address the generalizability issue by incorporating the open vocabulary embeddings from Ling et al. into our model BIBREF17 . These embeddings work by using a character RNN to process a word one letter at a time. This way the model can learn to share parameters between different words that use the same morphemes. For example BBQ restaurants frequently use words like “smokehouse”, “steakhouse”, and “roadhouse” in their names and “Bayside”,“Bayview”, and “Baywood” are all streets in San Francisco. Recognizing these patterns would be helpful in detecting a restaurant or street name slot, respectively.", "The two main contributions of this work are the multi-task model and the use of the open vocabulary character-based embeddings, which together allow for scalable slot filling models. Our work on multi-task learning in slot filling differs from its previous use in BIBREF18 in that we allow for soft sharing between tasks instead of explicitly matching slots to each other across different tasks. A limitation of explicit slot matching is that two slots that appear to have the same underlying type, such as location-based slots, may actually use the slot information in different ways depending on the overall intent of the task. In our model, the sharing between tasks is done implicitly by the neural network. Our approach to handling words unseen in training data is different from the delexicalization proposed in BIBREF19 in that we do not require the vocabulary items associated with slots and values to be prespecified. It is complementary to work on extending domain coverage BIBREF20 , BIBREF21 .", "The proposed model is described in more detail in Section \"Model\" . The approach is assessed on a new data collection based on four apps, described in Section \"Data\" . The experiments described in Section \"Training and Model Configuration Details\" investigate how much data is necessary for the $n$ -th app using a multi-task model that leverages the data from the previous $n-1$ apps, with results compared against the single-task model that only utilizes the data from the $n$ -th app. We conclude in Section \"Conclusions\" with a summary of the key findings and discussion of opportunities for future work." ], [ "Our model has a word embedding layer, followed by a bi-directional LSTM (bi-LSTM), and a softmax output layer. The bi-LSTM allows the model to use information from both the right and left contexts of each word when making predictions. We choose this architecture because similar models have been used in prior work on slot filling and have achieved good results BIBREF16 , BIBREF11 . The LSTM gates are used as defined by Sak et al. including the use of the linear projection layer on the output of the LSTM BIBREF22 . The purpose of the projection layer is to produce a model with fewer parameters without reducing the number of LSTM memory cells. For the multi-task model, the word embeddings and the bi-LSTM parameters are shared across tasks but each task has its own softmax layer. This means that if the multi-task model has half a million parameters, only a couple thousand of them are unique to each task and the other 99.5% are shared between all of the tasks.", "The slot labels are encoded in BIO format BIBREF23 indicating if a word is the beginning, inside or outside any particular slot. Decoding is done greedily. If a label does not follow the BIO syntax rules, i.e. an inside tag must follow the appropriate begin tag, then it is replaced with the outside label. 
Evaluation is done using the CoNLL evaluation script BIBREF24 to calculate the F1 score. This is the standard way of evaluating slot-filling models in the literature.", "In recent work on language modeling, a neural architecture that combined fixed word embeddings with character-based embeddings was found to be useful for handling previously unseen words BIBREF25. Based on that result, the embeddings in the open vocabulary model are a concatenation of the character-based embeddings with fixed word embeddings. When an out-of-vocabulary word is encountered, its character-based embedding is concatenated with the embedding for the unknown word token. The character-based embeddings are generated from a two-layer bi-LSTM that processes each word one character at a time. The character-based word embedding is produced by concatenating the last states from each of the directional LSTMs in the second layer and passing them through a linear layer for dimensionality reduction." ], [ "Crowd-sourced data was collected simulating common use cases for four different apps: United Airlines, Airbnb, Greyhound bus service and OpenTable. The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant. In order to elicit natural language, crowd workers were instructed to simulate a conversation with a friend planning an activity as opposed to giving a command to the computer. Workers were prompted with a slot type/value pair and asked to form a reply to their friend using that information. The instructions were not to include any other potential slots in the sentence, but this instruction was not always followed by the workers.", "Slot types were chosen to roughly correspond to form fields and UI elements, such as check boxes or dropdown menus, on the respective apps. The amount of data collected per app and the number of slot types are listed in Table 1. The slot types for each app are described in Table 2, and an example labeled sentence from each app is given in Table 3. One thing to notice is that the number of slot types is relatively small when compared to the popular ATIS dataset, which has over one hundred slot types BIBREF0. In ATIS, separate slot types would be used for names of cities, states, or countries, whereas in this data all of those would fall under a single slot for locations.", "Slot values were pulled from manually created lists of locations, dates and times, restaurants, etc. Values for prompting each rater were sampled from these lists. Workers were instructed to use different re-phrasings of the prompted values, but most people used the prompted value verbatim. Occasionally, workers used an unprompted slot value not in the list.", "For the word-level LSTM, the data was lower-cased and tokenized using a standard tokenizer. Spelling mistakes were not corrected. All digits were replaced by the '#' character. Words that appear only once in the training data are replaced with an unknown word token. For the character-based word embeddings used in the open vocabulary model, no lower casing or digit replacement is done.", "Due to the way the OpenTable data was collected, some slot values were over-represented, leading to overfitting to those particular values. To correct this problem, sentences that used the over-represented slot values had their values replaced by sampling from a larger list of potential values. The affected slot types are the ones for cuisine, restaurant names, and locations.
This substitution made the OpenTable data more realistic as well as more similar to the other data that was collected.", "The data we collected for the United Airlines app is an exception in a few ways: we collected four times as much data for this app as for the other ones; workers were occasionally prompted with up to four slot type/value pairs; and workers were instructed to give commands to their device instead of simulating a conversation with a friend. For all of the other apps, workers were prompted to use a single slot type per sentence. We argue that having varying amounts of data for different apps is a realistic scenario.", "Another possible source of data is the Air Travel Information Service (ATIS) data set collected in the early 1990s BIBREF0. However, this data is sufficiently similar to the United collection that it is not likely to add enough variety to improve the target domains. Further, it suffers from artifacts of data collected at a time when speech recognition systems had much higher error rates. The new data collected for this work fills a need raised in BIBREF26, which concluded that lack of data was an impediment to progress in slot filling." ], [ "This section describes two sets of experiments: the first is designed to test the effectiveness of the multi-task model and the second is designed to test the generalizability of the open vocabulary model. The scenario is that we already have $n-1$ models in place and we wish to discover how much data will be necessary to build a model for an additional application." ], [ "The data is split to use 30% for training and 70% for testing. The reason that a majority of the data is used for testing is that in the second experiment the results are reported separately for sentences containing out-of-vocabulary tokens, and a large amount of data is needed to get a sufficient sample size. Hyperparameter tuning presents a challenge when operating in a low resource scenario. When there is barely enough data to train the model, none can be spared for a validation set. We used data from the United app for hyperparameter tuning, since it is the largest, and assumed that the hyperparameter settings generalized to the other apps.", "Training is done using stochastic gradient descent with minibatches of 25 sentences. The initial learning rate is 0.3 and is set to decay to 98% of its value every 100 minibatches. For the multi-task model, training proceeds by alternating between each of the tasks when selecting the next minibatch. All the parameters are initialized uniformly in the range [-0.1, 0.1]. Dropout is used for regularization on the word embeddings and on the outputs from each LSTM layer, with the dropout probability set to 60% BIBREF27.", "For the single-task model, the word embeddings are 60-dimensional and the LSTM has 100 dimensions with a 70-dimensional projection layer on the LSTM. For the multi-task model, word embeddings are 200-dimensional, and the LSTM has 250 dimensions with a 170-dimensional projection layer. For the open vocabulary version of the model, the 200-dimensional input is a concatenation of 160-dimensional traditional word embeddings with 40-dimensional character-based word embeddings. The character embedding layer is 15 dimensions, the first LSTM layer is 40 dimensions with a 20-dimensional projection layer, and the second LSTM layer is 130 dimensions.", "We compare a single-task model against the multi-task model for varying amounts of training data.
In the multi-task model, the full amount of data is used for $n-1$ apps and the amount of data is allowed to vary only for the $n$ -th application. These experiments use the traditional word embeddings with a closed vocabulary. Since the data for the United app is bigger than the other three apps combined, it is used as an anchor for the multi-task model. The other three apps alternate in the position of the $n$ -th app. The data usage for the $n$ -th app is varied while the other $n-1$ apps in each experiment use the full amount of available training data. The full amount of training data is different for each app. The data used for the $n$ -th app is 200, 400, or 800 sentences or all available training data depending on the experiment. The test set remains fixed for all of the experiments even as part of the training data is discarded to simulate the low resource scenario.", "In Figure 1 we show the single-task vs. multi-task model performance for each of three different applications. The multi-task model outperforms the single-task model at all data sizes, and the relative performance increases as the size of the training data decreases. When only 200 sentences of training data are used, the performance of the multi-task model is about 60% better than the single-task model for both the Airbnb and Greyhound apps. The relative gain for the OpenTable app is 26%. Because the performance of the multi-task model decays much more slowly as the amount of training data is reduced, the multi-task model can deliver the same performance with a considerable reduction in the amount of labeled data." ], [ "The open vocabulary model experiments test the ability of the model to handle unseen words in test time, which are particularly likely to occur when using a reduced amount of training data. In these experiments the open vocabulary model is compared against the fixed embedding model. The results are reported separately for the sentences that contain out of vocabulary tokens, since these are where the open vocabulary system is expected to have an advantage.", "Figure 2 gives the OOV rate for each app for varying amounts of training data plotted on a log-log scale. The OOV words tend to be task-specific terminology. For example, the OpenTable task is the only one that has names of restaurants but names of cities are present in all four tasks so they tend to be covered better. The OOV rate dramatically increases when the size of the training data is less than 500 sentences. Since our goal is to operate in the regime of less than 500 sentences per task, handling OOVs is a priority. The multi-task model is used in these experiments. The only difference between the closed vocabulary and open vocabulary systems is that the closed vocabulary system uses the traditional word embeddings and the open vocabulary system uses the traditional word embeddings concatenated with character-based embeddings.", "Table 4 reports F1 scores on the test set for both the closed and open vocabulary systems. The results differ between the tasks, but none have an overall benefit from the open vocabulary system. Looking at the subset of sentences that contain an OOV token, the open vocabulary system delivers increased performance on the Airbnb and Greyhound tasks. These two are the most difficult apps out of the four and therefore had the most room for improvement. 
The United app is also all lower case and casing is an important clue for detecting proper nouns that the open vocabulary model takes advantage of.", "Looking a little deeper, in Figure 3 we show the breakdown in performance across individual slot types. Only those slot types which occur at least one hundred times in the test data are shown in this figure. The slot types that are above the diagonal saw a performance improvement using the open vocabulary model. The opposite is true for those that are below the diagonal. The open vocabulary system appears to do worse on slots that express quantities, dates and times and better on slots with greater slot perplexity (i.e., greater variation in slot values) like ones relating to locations. The three slots where the open vocabulary model gave the biggest gain are the Greyhound LeavingFrom and GoingTo slots along with the Airbnb Amenities slot. The three slots where the open vocabulary model did the worst relative to the closed vocabulary model are the Airbnb Price slot, along with the Greyhound DiscountType and DepartDate slots. The Amenities slot is an example of a slot with higher perplexity (with options related to pets, availability of a gym, parking, fire extinguishers, proximity to attractions), and the DiscountType is one with lower perplexity (three options cover almost all cases). We hypothesize that the reason that the numerical slots are better under the closed vocabulary model is due to their relative simplicity and not an inability of the character embeddings to learn representations for numbers." ], [ "In summary, we find that using a multi-task model with shared embeddings gives a large reduction in the minimum amount of data needed to train a slot-filling model for a new app. This translates into a cost savings for deploying slot filling models for new applications. The combination of the multi-task model with the open vocabulary embeddings increases the generalizability of the model especially when there are OOVs in the sentence. These contributions allow for scalable slot filling models.", "For future work, there are some improvements that could be made to the model such as the addition of an attentional mechanism to help with long distance dependencies BIBREF15 , use of beam-search to improve decoding, and exploring unsupervised adaptation as in BIBREF19 .", "Another item for future work is to collect additional tasks to examine the scalability of the multi-task model beyond the four applications that were used in this work. Due to their extra depth, character-based methods usually require more data than word based models BIBREF28 . Since this paper uses limited data, the collection of additional tasks may significantly improve the performance of the open vocabulary model." ] ] }
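A minimal sketch of the model family described above may be useful: word embeddings, optionally concatenated with a character-based bi-LSTM embedding for open-vocabulary coverage, feed a shared bi-LSTM, and each app has its own small output layer over its BIO slot labels. This is an illustration in Python/PyTorch, not the authors' implementation; the layer sizes only loosely follow the hyperparameters quoted above, the projection is approximated with a plain linear layer rather than the LSTM-internal projection of Sak et al., and the label counts in the usage line are placeholders.

```python
# Sketch of a multi-task, open-vocabulary bi-LSTM slot filler (illustrative, not the authors' code).
import torch
import torch.nn as nn

class CharWordEmbedding(nn.Module):
    """Character-based word embedding: a two-layer bi-LSTM over characters,
    with the last forward/backward states of the second layer concatenated and projected."""
    def __init__(self, n_chars, char_dim=15, char_hidden=20, out_dim=40):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, num_layers=2,
                                 bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * char_hidden, out_dim)

    def forward(self, char_ids):                  # (n_words, max_word_len)
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        last = torch.cat([h[-2], h[-1]], dim=-1)  # last layer, both directions
        return self.proj(last)                    # (n_words, out_dim)

class MultiTaskSlotFiller(nn.Module):
    def __init__(self, vocab_size, n_chars, task_label_sizes, word_dim=160,
                 char_out=40, lstm_hidden=250, proj_dim=170):
        super().__init__()
        # Shared parameters: embeddings and the sentence-level bi-LSTM.
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_word_emb = CharWordEmbedding(n_chars, out_dim=char_out)
        self.bilstm = nn.LSTM(word_dim + char_out, lstm_hidden,
                              bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * lstm_hidden, proj_dim)
        # Task-specific parameters: one small output layer over BIO slot labels per app.
        self.task_outputs = nn.ModuleDict({
            task: nn.Linear(proj_dim, n_labels)
            for task, n_labels in task_label_sizes.items()
        })

    def forward(self, word_ids, char_ids, task):
        # word_ids: (1, n_words); char_ids: (n_words, max_word_len); task: app name
        w = self.word_emb(word_ids)                      # (1, n_words, word_dim)
        c = self.char_word_emb(char_ids).unsqueeze(0)    # (1, n_words, char_out)
        h, _ = self.bilstm(torch.cat([w, c], dim=-1))
        h = torch.tanh(self.proj(h))
        return self.task_outputs[task](h)                # per-word slot-label logits

# Usage: label counts are placeholders (BIO tags over each app's slot types).
# Training would alternate minibatches between tasks so the shared layers see every app's data.
model = MultiTaskSlotFiller(
    vocab_size=5000, n_chars=100,
    task_label_sizes={"united": 13, "airbnb": 9, "greyhound": 9, "opentable": 11})
```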
{ "question": [ "Does the performance increase using their method?", "What tasks are they experimenting with in this paper?", "What is the size of the open vocabulary?" ], "question_id": [ "3c378074111a6cc7319c0db0aced5752c30bfffb", "b464bc48f176a5945e54051e3ffaea9a6ad886d7", "3b40799f25dbd98bba5b526e0a1d0d0bb51173e0" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "domain adaptation", "domain adaptation", "domain adaptation" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "The multi-task model outperforms the single-task model at all data sizes", "but none have an overall benefit from the open vocabulary system" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In Figure 1 we show the single-task vs. multi-task model performance for each of three different applications. The multi-task model outperforms the single-task model at all data sizes, and the relative performance increases as the size of the training data decreases. When only 200 sentences of training data are used, the performance of the multi-task model is about 60% better than the single-task model for both the Airbnb and Greyhound apps. The relative gain for the OpenTable app is 26%. Because the performance of the multi-task model decays much more slowly as the amount of training data is reduced, the multi-task model can deliver the same performance with a considerable reduction in the amount of labeled data.", "Table 4 reports F1 scores on the test set for both the closed and open vocabulary systems. The results differ between the tasks, but none have an overall benefit from the open vocabulary system. Looking at the subset of sentences that contain an OOV token, the open vocabulary system delivers increased performance on the Airbnb and Greyhound tasks. These two are the most difficult apps out of the four and therefore had the most room for improvement. The United app is also all lower case and casing is an important clue for detecting proper nouns that the open vocabulary model takes advantage of." ], "highlighted_evidence": [ "The multi-task model outperforms the single-task model at all data sizes, and the relative performance increases as the size of the training data decreases. When only 200 sentences of training data are used, the performance of the multi-task model is about 60% better than the single-task model for both the Airbnb and Greyhound apps. The relative gain for the OpenTable app is 26%.", "The results differ between the tasks, but none have an overall benefit from the open vocabulary system." ] } ], "annotation_id": [ "881740c5dc710f7bff9fa3cafe7fed562098076b" ], "worker_id": [ "08f81a5d78e451df16193028defb70150c4201c9" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Slot filling", "we consider the actions that a user might perform via apps on their phone", "The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Slot filling models are a useful method for simple natural language understanding tasks, where information can be extracted from a sentence and used to perform some structured action. 
For example, dates, departure cities and destinations represent slots to fill in a flight booking task. This information is extracted from natural language queries leveraging typical context associated with each slot type. Researchers have been exploring data-driven approaches to learning models for automatic identification of slot information since the 90's, and significant advances have been made BIBREF0 . Our paper builds on recent work on slot-filling using recurrent neural networks (RNNs) with a focus on the problem of training from minimal annotated data, taking an approach of sharing data from multiple tasks to reduce the amount of data for developing a new task.", "As candidate tasks, we consider the actions that a user might perform via apps on their phone. Typically, a separate slot-filling model would be trained for each app. For example, one model understands queries about classified ads for cars BIBREF1 and another model handles queries about the weather BIBREF2 . As the number of apps increases, this approach becomes impractical due to the burden of collecting and labeling the training data for each model. In addition, using independent models for each task has high storage costs for mobile devices.", "Crowd-sourced data was collected simulating common use cases for four different apps: United Airlines, Airbnb, Greyhound bus service and OpenTable. The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant. In order to elicit natural language, crowd workers were instructed to simulate a conversation with a friend planning an activity as opposed to giving a command to the computer. Workers were prompted with a slot type/value pair and asked to form a reply to their friend using that information. The instructions were to not include any other potential slots in the sentence but this instruction was not always followed by the workers." ], "highlighted_evidence": [ "Slot filling models are a useful method for simple natural language understanding tasks, where information can be extracted from a sentence and used to perform some structured action", "As candidate tasks, we consider the actions that a user might perform via apps on their phone.", "Crowd-sourced data was collected simulating common use cases for four different apps: United Airlines, Airbnb, Greyhound bus service and OpenTable. The corresponding actions are booking a flight, renting a home, buying bus tickets, and making a reservation at a restaurant." ] } ], "annotation_id": [ "dfa31c92a5940f1e198d052c23570ca0764ce3fa" ], "worker_id": [ "08f81a5d78e451df16193028defb70150c4201c9" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0cc0f1560c1835543b940a802a645694f17b213a" ], "worker_id": [ "08f81a5d78e451df16193028defb70150c4201c9" ] } ] }
{ "caption": [ "Table 1: Data statistics for each of the four target applications.", "Table 2: Listing of slot types for each app.", "Figure 1: F1 score for multi-task vs. single-task models.", "Table 3: Example labeled sentences from each application.", "Figure 2: OOV rate for each of the n apps.", "Figure 3: Comparison of performance on individual slot types.", "Table 4: Comparison of F1 scores for open and closed vocabulary systems on the full test set vs. the subset with OOV words." ], "file": [ "2-Table1-1.png", "2-Table2-1.png", "3-Figure1-1.png", "3-Table3-1.png", "4-Figure2-1.png", "4-Figure3-1.png", "4-Table4-1.png" ] }
1908.06725
Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models
Neural language representation models such as Bidirectional Encoder Representations from Transformers (BERT) pre-trained on large-scale corpora can well capture rich semantics from plain text, and can be fine-tuned to consistently improve the performance on various natural language processing (NLP) tasks. However, the existing pre-trained language representation models rarely consider explicitly incorporating commonsense knowledge or other knowledge. In this paper, we develop a pre-training approach for incorporating commonsense knowledge into language representation models. We construct a commonsense-related multi-choice question answering dataset for pre-training a neural language representation model. The dataset is created automatically by our proposed "align, mask, and select" (AMS) method. We also investigate different pre-training tasks. Experimental results demonstrate that pre-training models using the proposed approach followed by fine-tuning achieves significant improvements on various commonsense-related tasks, such as CommonsenseQA and Winograd Schema Challenge, while maintaining comparable performance on other NLP tasks, such as sentence classification and natural language inference (NLI) tasks, compared to the original BERT models.
{ "section_name": [ "Introduction", "Language Representation Model", "Commonsense Reasoning", "Distant Supervision", "Commonsense Knowledge Base", "Constructing Pre-training Dataset", "Pre-training BERT_CS", "Experiments", "CommonsenseQA", "Winograd Schema Challenge", "GLUE", "Pre-training Strategy", "Performance Curve", "Error Analysis", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "Pre-trained language representation models, including feature-based methods BIBREF0 , BIBREF1 and fine-tuning methods BIBREF2 , BIBREF3 , BIBREF4 , can capture rich language information from text and then benefit many NLP tasks. Bidirectional Encoder Representations from Transformers (BERT) BIBREF4 , as one of the most recently developed models, has produced the state-of-the-art results by simple fine-tuning on various NLP tasks, including named entity recognition (NER) BIBREF5 , text classification BIBREF6 , natural language inference (NLI) BIBREF7 , question answering (QA) BIBREF8 , BIBREF9 , and has achieved human-level performances on several datasets BIBREF8 , BIBREF9 .", "However, commonsense reasoning is still a challenging task for modern machine learning methods. For example, recently BIBREF10 proposed a commonsense-related task, CommonsenseQA, and showed that the BERT model accuracy remains dozens of points lower than human accuracy on the questions about commonsense knowledge. Some examples from CommonsenseQA are shown in Table 1 part A. As can be seen from the examples, although it is easy for humans to answer the questions based on their knowledge about the world, it is a great challenge for machines when there is limited training data.", "We hypothesize that exploiting knowledge graphs for commonsense in QA modeling can help model choose correct answers. For example, as shown in the part B of Table 1 , some triples from ConceptNet BIBREF11 are quite related to the questions above. Exploiting these triples in the QA modeling may benefit the QA models to make a correct decision.", "In this paper, we propose a pre-training approach that can leverage commmonsense knowledge graphs, such as ConceptNet BIBREF11 , to improve the commonsense reasoning capability of language representation models, such as BERT. And at the same time, the proposed approach targets maintaining comparable performances on other NLP tasks with the original BERT models. It is challenging to incorporate the commonsense knowledge into language representation models since the commonsense knowledge is represented as a structured format, such as (concept $_1$ , relation, concept $_2$ ) in ConceptNet, which is inconsistent with the data used for pre-training language representation models. For example, BERT is pre-trained on the BooksCorpus and English Wikipedia that are composed of unstructured natural language sentences.", "To tackle the challenge mentioned above, inspired by the distant supervision approach BIBREF12 , we propose the “align, mask and select\" (AMS) method that can align the commonsense knowledge graphs with a large text corpus to construct a dataset consisting of sentences with labeled concepts. Different from the pre-training tasks for BERT, the masked language model (MLM) and next sentence prediction (NSP) tasks, we use the generated dataset in a multi-choice question answering task. 
We then pre-train the BERT model on this dataset with the multi-choice question answering task and fine-tune it on various commonsense-related tasks, such as CommonsenseQA BIBREF10 and Winograd Schema Challenge (WSC) BIBREF13 , and achieve significant improvements. We also fine-tune and evaluate the pre-trained models on other NLP tasks, such as sentence classification and NLI tasks in GLUE BIBREF6 , and achieve comparable performance with the original BERT models.", "In summary, the contributions of this paper are threefold. First, we propose a pre-training approach for incorporating commonsense knowledge into language representation models to improve the commonsense reasoning capabilities of these models. Second, we propose an “align, mask and select\" (AMS) method, inspired by distant supervision approaches, to automatically construct a multi-choice question answering dataset. Third, experiments demonstrate that the pre-trained model from the proposed approach with fine-tuning achieves significant performance improvements on several commonsense-related tasks, such as CommonsenseQA BIBREF10 and Winograd Schema Challenge BIBREF13 , and still maintains comparable performance on several sentence classification and NLI tasks in GLUE BIBREF6 ." ], [ "Language representation models have demonstrated their effectiveness for improving many NLP tasks. These approaches can be categorized into feature-based approaches and fine-tuning approaches. The early Word2Vec BIBREF14 and Glove models BIBREF0 focused on feature-based approaches to transform words into distributed representations. However, these methods suffered from insufficient word disambiguation. BIBREF15 further proposed Embeddings from Language Models (ELMo), which derive context-aware word vectors from a bidirectional LSTM trained with a coupled language model (LM) objective on a large text corpus.", "The fine-tuning approaches differ from the above-mentioned feature-based approaches, which only use the pre-trained language representations as input features. BIBREF2 pre-trained sentence encoders from unlabeled text and fine-tuned them for a supervised downstream task. BIBREF3 proposed a generative pre-trained Transformer BIBREF16 (GPT) to learn language representations. BIBREF4 proposed a deep bidirectional model with multi-layer Transformers (BERT), which achieved state-of-the-art performance on a wide variety of NLP tasks. The advantage of these approaches is that few parameters need to be learned from scratch.", "Though both feature-based and fine-tuning language representation models have achieved great success, they did not incorporate commonsense knowledge. In this paper, we focus on incorporating commonsense knowledge into the pre-training of language representation models." ], [ "Commonsense reasoning is a challenging task for modern machine learning methods. As demonstrated in recent work BIBREF17 , incorporating commonsense knowledge into question answering models in a model-integration fashion helped improve commonsense reasoning ability. Instead of ensembling two independent models as in BIBREF17 , an alternative direction is to directly incorporate commonsense knowledge into a unified language representation model. BIBREF18 proposed directly pre-training BERT on commonsense knowledge triples. For any triple (concept $_1$ , relation, concept $_2$ ), they took the concatenation of concept $_1$ and the relation as the question and concept $_2$ as the correct answer.
Distractors were formed by randomly picking words or phrases in ConceptNet. In this work, we also investigate directly incorporating commonsense knowledge into a unified language representation model. However, we hypothesize that the language representations learned in BIBREF18 may be degraded, since the inputs to the model constructed this way are not natural language sentences. To address this issue, we propose a pre-training approach for incorporating commonsense knowledge that includes a method to construct large-scale, natural language sentences. BIBREF19 collected the Common Sense Explanations (CoS-E) dataset using Amazon Mechanical Turk and applied a Commonsense Auto-Generated Explanations (CAGE) framework to language representation models, such as GPT and BERT. However, collecting this dataset required a large amount of human effort. In contrast, in this paper, we propose an “align, mask and select\" (AMS) method, inspired by distant supervision approaches, to automatically construct a multi-choice question answering dataset." ], [ "The distant supervision approach was originally proposed for generating training data for the relation classification task. Distant supervision BIBREF12 assumes that if two entities/concepts participate in a relation, all sentences that mention these two entities/concepts express that relation. Note that it is inevitable that there is noise in the data labeled by distant supervision BIBREF20 . In this paper, instead of employing the relation labels assigned by distant supervision, we focus on the aligned entities/concepts. We propose the AMS method to construct a multi-choice QA dataset: we align sentences with commonsense knowledge triples, mask the aligned words (entities/concepts) in the sentences and treat the masked sentences as questions, and select several entities/concepts from the knowledge graph as candidate choices." ], [ "This section describes the commonsense knowledge base investigated in our experiments. We use ConceptNet BIBREF11 , one of the most widely used commonsense knowledge bases. ConceptNet is a semantic network that represents large sets of words and phrases and the commonsense relationships between them. It contains over 21 million edges and over 8 million nodes. Its English vocabulary contains approximately 1,500,000 nodes, and it contains at least 10,000 nodes for each of 83 languages. ConceptNet contains a core of 36 relations.", "Each instance in ConceptNet can be represented as a triple $r_i$ = (concept $_1$ , relation, concept $_2$ ), indicating a relation between the two concepts concept $_1$ and concept $_2$ . For example, the triple (semicarbazide, IsA, chemical compound) means that “semicarbazide is a kind of chemical compound\"; the triple (cooking dinner, Causes, cooked food) means that “the effect of cooking dinner is cooked food\", etc." ], [ "In this section, we describe the details of constructing the commonsense-related multi-choice question answering dataset. Firstly, we filter the triples in ConceptNet with the following steps: (1) Filter triples in which one of the concepts is not an English word. (2) Filter triples with the general relations “RelatedTo\" and “IsA\", which hold a large proportion in ConceptNet. (3) Filter triples in which one of the concepts has more than four words or the edit distance between the two concepts is less than four.
After filtering, we obtain 606,564 triples.", "Each training sample is generated by three steps: align, mask and select, which we call the AMS method. Each sample in the dataset consists of a question and several candidate answers, which has the same form as the CommonsenseQA dataset. An example of constructing one training sample by masking concept $_2$ is shown in Table 2 .", "Firstly, we align each triple (concept $_1$ , relation, concept $_2$ ) from ConceptNet to the English Wikipedia dataset to extract sentences with their concepts labeled. Secondly, we mask concept $_1$ /concept $_2$ in one sentence with a special token [QW] and treat this sentence as a question, where QW is a replacement for question words such as “what\" and “where\"; the masked concept $_1$ /concept $_2$ is the correct answer for this question. Thirdly, for generating the distractors, BIBREF18 proposed forming distractors by randomly picking words or phrases in ConceptNet. In this paper, in order to generate more confusing distractors than this random selection approach, we require that the distractors and the correct answer share the same relation together with the same concept $_1$ or concept $_2$ . That is to say, we search ( $\ast $ , relation, concept $_2$ ) and (concept $_1$ , relation, $\ast $ ) in ConceptNet to select the distractors instead of selecting them randomly, where $\ast $ is a wildcard character that can match any word or phrase. For each question, we reserve four distractors and one correct answer. If there are fewer than four matched distractors, we discard the question instead of complementing it with random selection. If there are more than four distractors, we randomly select four of them. After applying the AMS method, we create 16,324,846 multi-choice question answering samples." ], [ "We investigate a multi-choice question-answering task for pre-training the English BERT base and BERT large models released by Google on our constructed dataset. The resulting models are denoted BERT_CS $_{base}$ and BERT_CS $_{large}$ , respectively. We then investigate the performance of fine-tuning the BERT_CS models on several NLP tasks, including commonsense-related tasks and common NLP tasks, presented in Section \"Experiments\" .", "To reduce the large cost of training BERT_CS models from scratch, we initialize the BERT_CS models (for both BERT $_{base}$ and BERT $_{large}$ models) with the parameter weights released by Google. We concatenate the question with each answer to construct a standard input sequence for BERT_CS (i.e., “[CLS] the largest [QW] by ... ? [SEP] city [SEP]”, where [CLS] and [SEP] are two special tokens), and the hidden representations over the [CLS] token are run through a softmax layer to create the predictions.", "The objective function is defined as follows: ", "$$L = - \log {\rm p}(c_i|s),$$ (Eq. 10) ", "$${\rm p}(c_i|s) = \frac{{\rm exp}(\mathbf {w}^{T}\mathbf {c}_{i})}{\sum _{k=1}^{N}{\rm exp}(\mathbf {w}^{T}\mathbf {c}_{k})},$$ (Eq. 11) ", "where $c_i$ is the correct answer, $\mathbf {w}$ are the parameters of the softmax layer, $N$ is the total number of candidates, and $\mathbf {c}_i$ is the vector representation of the special token [CLS]. We pre-train the BERT_CS models with batch size 160, initial learning rate $2e^{-5}$ and max sequence length 128 for 1 epoch. The pre-training is conducted on 16 NVIDIA V100 GPU cards with 32G memory for about 3 days for the BERT_CS $_{large}$ model and 1 day for the BERT_CS $_{base}$ model."
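The multi-choice objective in Eqs. 10-11 amounts to scoring each candidate's [CLS] vector with a single weight vector and taking a softmax over the candidates of one question. The PyTorch sketch below is a minimal illustration of that scoring head only; the encoder is replaced by random vectors, and the class name, shapes, and toy inputs are assumptions rather than the authors' released implementation.

import torch
import torch.nn as nn

class MultiChoiceHead(nn.Module):
    """Compute L = -log p(c_i | s), with p given by a softmax over
    w^T c_k, where c_k is the [CLS] vector of the k-th (question, answer) pair."""

    def __init__(self, hidden_size):
        super().__init__()
        self.w = nn.Linear(hidden_size, 1, bias=False)  # the vector w in Eq. 11

    def forward(self, cls_vectors, correct_index):
        # cls_vectors: (num_candidates, hidden_size) for a single question
        logits = self.w(cls_vectors).squeeze(-1)         # (num_candidates,)
        log_probs = torch.log_softmax(logits, dim=-1)    # Eq. 11 in log space
        return -log_probs[correct_index]                 # Eq. 10

# Toy usage with random vectors standing in for BERT [CLS] outputs.
head = MultiChoiceHead(hidden_size=768)
cls_vectors = torch.randn(5, 768)   # one correct answer + four distractors
loss = head(cls_vectors, correct_index=0)
loss.backward()

In practice each row of cls_vectors would come from encoding one "[CLS] question [SEP] candidate [SEP]" sequence with the (shared) BERT encoder.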
], [ "In this section, we investigate the performance of fine-tuning the BERT_CS models on several NLP tasks. Note that when fine tuning on multi-choice QA tasks, e.g., CommonsenseQA and Winograd Schema Challenge (see section 5.3), we fine-tune all parameters in BERT_CS, including the last softmax layer from the token [CLS]; whereas, for other tasks, we randomly initialize the classifier layer and train it from scratch.", "Additionally, as described in BIBREF4 , fine-tuning on BERT sometimes is observed to be unstable on small datasets, so we run experiments with 5 different random seeds and select the best model based on the development set for all of the fine-tuning experiments in this section." ], [ "In this subsection, we conduct experiments on a commonsense-related multi-choice question answering benchmark, the CommonsenseQA dataset BIBREF10 . The CommonsenseQA dataset consists of 12,247 questions with one correct answer and four distractor answers. This dataset consists of two splits – the question token split and the random split. Our experiments are conducted on the more challenging random split, which is the main evaluation split according to BIBREF10 . The statistics of the CommonsenseQA dataset are shown in Table 3 .", "Same as the pre-training stage, the input data for fine-tuning the BERT_CS models is formed by concatenating each question-answer pair as a sequence. The hidden representations over the [CLS] token are run through a softmax layer to create the predictions. The objective function is the same as Equations 10 and 11 . We fine-tune the BERT_CS models on CommonsenseQA for 2 epochs with a learning rate of 1e-5 and a batch size of 16.", "Table 4 shows the accuracies on the CommonsenseQA test set from the baseline BERT models released by Google, the previous state-of-the-art model CoS-E BIBREF19 , and our BERT_CS models. Note that CoS-E model requires a large amount of human effort to collect the Common Sense Explanations (CoS-E) dataset. In comparison, we construct our multi-choice question-answering dataset automatically. The BERT_CS models significantly outperform the baseline BERT model counterparts. BERT_CS $_{large}$ achieves a 5.5% absolute improvement on the CommonsenseQA test set over the baseline BERT $_{large}$ model and a 4% absolute improvement over the previous SOTA CoS-E model." ], [ "The Winograd Schema Challenge (WSC) BIBREF13 is introduced for testing AI agents for commonsense knowledge. The WSC consists of 273 instances of the pronoun disambiguation problem (PDP). For example, for sentence “The delivery truck zoomed by the school bus because it was going so fast.” and a corresponding question “What does the word it refers to?”, the machine is expected to answer “delivery truck” instead of “school bus”. In this task, we follow BIBREF22 and employ the WSCR dataset BIBREF23 as the extra training data. The WSCR dataset is split into a training set of 1322 examples and a test set of 564 examples. We use these data for fine-tuning and validating BERT_CS models, respectively, and test the fine-tuned BERT_CS models on the WSC dataset.", "We transform the pronoun disambiguation problem into a multi-choice question answering problem. We mask the pronoun word with a special token [QW] to construct a question, and put the two candidate paragraphs as candidate answers. The remaining procedures are the same as QA tasks. 
We use the same loss function as BIBREF22 , that is, if c $_1$ is correct and c $_2$ is not, the loss is ", "$$\\begin{aligned}\nL = &- {\\rm logp}(c_1|s) + \\\\\n&\\alpha \\cdot max(0, {\\rm logp}(c_2|s)-{\\rm logp}(c_1|s)+\\beta ), \\end{aligned}$$ (Eq. 16) ", "where $p(c_1|s)$ follows Equation 11 with $N=2$ , $\\alpha $ and $\\beta $ are two hyper-parameters. Similar to BIBREF22 , we search $\\alpha \\in \\lbrace 2.5,5,10,20\\rbrace $ and $\\beta \\in \\lbrace 0.05,0.1,0.2,0.4\\rbrace $ by comparing the accuracy on the WSCR test set (i.e., the development set for the WSC data set). We set the batch size 16 and the learning rate $1e^{-5}$ . We evaluate our models on the WSC dataset, as well as the various partitions of the WSC dataset, as described in BIBREF24 . We also evaluate the fine-tuned BERT_CS model (without using the WNLI training data for further fine-tuning) on the WNLI test set, one of the GLUE tasks. We first transform the examples in WNLI from the premise-hypothesis format into the pronoun disambiguation problem format and then transform it into the multi-choice QA format BIBREF22 .", "The results on the WSC dataset and its various partitions and the WNLI test set are shown in Table 5 . Note that the results for BIBREF21 are fine-tuned on the whole WSCR dataset, including the training and test sets. Results for LM ensemble BIBREF25 and Knowledge Hunter BIBREF26 are taken from BIBREF24 . Results for “BERT $_{large}$ + MTP\" is taken from BIBREF22 as the baseline of applying BERT to the WSC task.", "As can be seen from Table 5 , the “BERT $_{large}$ + MCQA\" achieves better performance than “BERT $_{large}$ + MTP\" on four of the seven evaluation criteria and achieves significant improvement on the assoc. and consist. partitions, which demonstrates that MCQA is a better pre-processing method than MTP for the WSC task. Also, the “BERT_CS $_{large}$ + MCQA\" achieves the best performance on all of the evaluation criteria but consist., and achieves a 3.3% absolute improvement on the WSC dataset over the previous SOTA results from BIBREF22 ." ], [ "The General Language Understanding Evaluation (GLUE) benchmark BIBREF6 is a collection of diverse natural language understanding tasks, including MNLI, QQP, QNLI, SST-2, CoLA, STS-B, MRPC, of which CoLA and SST-2 are single-sentence tasks, MRPC, STS-B and QQP are similarity and paraphrase tasks, and MNLI, QNLI, RTE and WNLI are natural language inference tasks. To investigate whether our multi-choice QA based pre-training approach degenerates the performance on common sentence classification tasks, we evaluate the BERT_CS $_{base}$ and BERT_CS $_{large}$ models on 8 GLUE datasets and compare the performances with those from the baseline BERT models.", "Following BIBREF4 , we use the batch size 32 and fine-tune for 3 epochs for all GLUE tasks, and select the fine-tuning learning rate (among 1e-5, 2e-5, and 3e-5) based on the performance on the development set. Results are presented in Table 6 . We observe that the BERT_CS $_{large}$ model achieves comparable performance with the BERT $_{large}$ model and the BERT_CS $_{base}$ model achieves slightly better performance than the BERT $_{base}$ model. We hypothesize that the commonsense knowledge may not be required for GLUE tasks. On the other hand, these results demonstrate that our proposed multi-choice QA pre-training task does not degrade the sentence representation capabilities of BERT models." 
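Eq. 16 above combines a negative log-likelihood term on the correct candidate with a margin penalty that fires whenever the wrong candidate is not at least beta less likely. The sketch below is a simplified illustration of that loss; the function name, the two-candidate softmax, and the hyperparameter values shown are assumptions used purely for demonstration.

import torch

def wsc_loss(log_p_correct, log_p_wrong, alpha=5.0, beta=0.2):
    """Eq. 16: -log p(c1|s) + alpha * max(0, log p(c2|s) - log p(c1|s) + beta)."""
    margin = torch.clamp(log_p_wrong - log_p_correct + beta, min=0.0)
    return -log_p_correct + alpha * margin

# Toy usage with log-probabilities from a two-candidate softmax (N = 2 in Eq. 11).
logits = torch.tensor([1.3, 0.4], requires_grad=True)
log_probs = torch.log_softmax(logits, dim=-1)
loss = wsc_loss(log_probs[0], log_probs[1])
loss.backward()
print(float(loss))

The margin term is what distinguishes this objective from the plain multi-choice loss used for CommonsenseQA: it explicitly pushes the wrong candidate's probability down rather than only pushing the correct one up.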
], [ "In this subsection, we conduct several comparison experiments using different data and different pre-training tasks on the BERT $_{base}$ model. For simplicity, we discard the subscript $base$ in this subsection.", "The first set of experiments is to compare the efficacy of our data creation approach versus the data creation approach in BIBREF18 . First, same as BIBREF18 , we collect 606,564 triples from ConceptNet, and construct 1,213,128 questions, each with a correct answer and four distractors. This dataset is denoted the TRIPLES dataset. We pre-train BERT models on the TRIPLES dataset with the same hyper-parameters as the BERT_CS models and the resulting model is denoted BERT_triple. We also create several model counterparts based on our constructed dataset:", "Distractors are formed by randomly picking concept $_1$ /concept $_2$ in ConceptNet instead of those sharing the same concept $_2$ /concept $_1$ and the relation with the correct answers. We denote the resulting model from this dataset BERT_CS_random.", "Instead of pre-training BERT with a multi-choice QA task that chooses the correct answer from several candidate answers, we mask concept $_1$ and concept $_2$ and pre-train BERT with a masked language model (MLM) task. We denote the resulting model from this pre-training task BERT_MLM.", "We randomly mask 15% WordPiece tokens BIBREF27 of the question as in BIBREF4 and then conduct both multi-choice QA task and MLM task simultaneously. The resulting model is denoted BERT_CS_MLM.", "All these BERT models are fine-tuned on the CommonsenseQA dataset with the same hyper-parameters as described in Section \"CommonsenseQA\" and the results are shown in Table 7 . We observe the following from Table 7 .", "Comparing model 1 and model 2, we find that pre-training on ConceptNet benefits the CommonsenseQA task even with the triples as input instead of sentences. Further comparing model 2 and model 6, we find that constructing sentences as input for pre-training BERT performs better on the CommonsenseQA task than using triples for pre-training BERT. We also conduct more detailed comparisons between fine-tuning model 1 and model 2 on GLUE tasks. The results are shown in Table 6 . BERT_triple $_{base}$ yields much worse results than BERT $_{base}$ and BERT_CS $_{base}$ , which demonstrates that pre-training directly on triples may hurt the sentence representation capabilities of BERT.", "Comparing model 3 and model 6, we find that pre-training BERT benefits from a more difficult dataset. In our selection method, all candidate answers share the same (concept $_1$ , relation) or (relation, concept $_2$ ), that is, these candidates have close meanings. These more confusing candidates force BERT_CS to distinguish synonym meanings, resulting in a more powerful BERT_CS model.", "Comparing model 5 and model 6, we find that the multi-choice QA task works better than the masked LM task as the pre-training task for the target multi-choice QA task. We argue that, for the masked LM task, BERT_CS is required to predict each masked wordpieces (in concepts) independently and for the multi-choice QA task, BERT is required to model the whole candidate phrases. In this way, BERT is able to model the whole concepts instead of paying much attention to the single wordpieces in the sentences. Comparing model 4 and model 6, we observe that adding the masked LM task may hurt the performance of BERT_CS. 
This is probably because the masked words in the questions may have a negative influence on the multi-choice QA task. Finally, our proposed model BERT_CS achieves the best performance on the CommonsenseQA development set among these model counterparts." ], [ "In this subsection, we plot the performance curve on the CommonsenseQA development set from BERT_CS over the pre-training steps. For every 10,000 training steps, we save the model as the initial model for fine-tuning. For each of these models, we run experiments 10 times with random restarts, that is, we use the same pre-trained checkpoint but perform different fine-tuning data shuffling. Due to the instability of fine-tuning BERT BIBREF4 , we remove the results that are significantly lower than the mean. In our experiments, we remove accuracies lower than 0.57 for BERT_CS $_{base}$ and 0.60 for BERT_CS $_{large}$ . We plot the mean and standard deviation values in Figure 1 . We observe that the performance of BERT_CS $_{base}$ converges around 50,000 training steps and BERT_CS $_{large}$ converges around the end of the pre-training stage or may not have converged, which suggests that BERT_CS $_{large}$ is more powerful at incorporating commonsense knowledge. We also compare with pre-training BERT_CS models for 2 epochs. However, this produces worse performance, probably due to over-fitting. Pre-training on a larger corpus (with more QA samples) may benefit the BERT_CS models and we leave this to future work." ], [ "Table 8 shows several cases from the Winograd Schema Challenge dataset. Questions 1 and 2 only differ in the words “compassionate\" and “cruel\". Our model BERT_CS $_{large}$ chooses the correct answers for both questions while BERT $_{large}$ chooses the same candidate, “Bill\", for both questions. We speculate that BERT $_{large}$ tends to choose the closer candidate. We split the WSC test set into two parts, CLOSE and FAR, according to whether the correct candidate is closer to or farther from the pronoun word in the sentence than the other candidate. As shown in Table 9 , our model BERT_CS $_{large}$ achieves the same performance on the CLOSE set and better performance on the FAR set than BERT $_{large}$ . That is to say, BERT_CS $_{large}$ is more robust to the position of the words and focuses more on the semantics of the sentence.", "Questions 3 and 4 only differ in the words “large\" and “small\". However, neither BERT_CS $_{large}$ nor BERT $_{large}$ chooses the correct answers. We hypothesize that since “suitcase is large\" and “trophy is small\" are probably quite frequent for language models, both BERT $_{large}$ and BERT_CS $_{large}$ make mistakes. In future work, we will investigate other approaches for overcoming this sensitivity of language models and improving commonsense reasoning." ], [ "In this paper, we develop a pre-training approach for incorporating commonsense knowledge into language representation models such as BERT. We construct a commonsense-related multi-choice question answering dataset for pre-training BERT. The dataset is created automatically by our proposed “align, mask, and select\" (AMS) method.
Experimental results demonstrate that pre-training models using the proposed approach followed by fine-tuning achieves significant improvements on various commonsense-related tasks, such as CommonsenseQA and Winograd Schema Challenge, while maintaining comparable performance on other NLP tasks, such as sentence classification and natural language inference (NLI) tasks, compared to the original BERT models. In future work, we will incorporate the relationship information between two concepts into language representation models. We will also explore other structured knowledge graphs, such as Freebase, to incorporate entity information into language representation models. We also plan to incorporate commonsense knowledge information into other language representation models such as XLNet BIBREF28 ." ], [ "The authors would like to thank Lingling Jin, Pengfei Fan, Xiaowei Lu for supporting 16 NVIDIA V100 GPU cards." ] ] }
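As a rough illustration of the "align, mask and select" construction described in this record, the sketch below builds one multi-choice sample from a single triple. The sentence source, the substring matching rule, the helper name, and the toy knowledge base are all assumptions for illustration, not the authors' pipeline.

import random

def build_ams_sample(triple, sentences, kb, num_distractors=4):
    """Align a (concept1, relation, concept2) triple to a sentence, mask
    concept2 with [QW], and select distractors sharing (concept1, relation)."""
    c1, rel, c2 = triple
    # Align: find a sentence mentioning both concepts.
    aligned = next((s for s in sentences if c1 in s and c2 in s), None)
    if aligned is None:
        return None
    # Mask: replace the answer concept with the special token [QW].
    question = aligned.replace(c2, "[QW]")
    # Select: distractors are other tails of (c1, rel, *) triples in the KB.
    candidates = [t for (h, r, t) in kb if h == c1 and r == rel and t != c2]
    if len(candidates) < num_distractors:
        return None  # discard, as in the paper, rather than padding randomly
    distractors = random.sample(candidates, num_distractors)
    return {"question": question, "answer": c2, "distractors": distractors}

# Toy usage with a tiny hand-made knowledge base and corpus.
kb = [("cooking dinner", "Causes", "cooked food"),
      ("cooking dinner", "Causes", "a mess"),
      ("cooking dinner", "Causes", "good smells"),
      ("cooking dinner", "Causes", "dirty dishes"),
      ("cooking dinner", "Causes", "full stomachs")]
sentences = ["after cooking dinner we finally sat down to enjoy the cooked food ."]
print(build_ams_sample(("cooking dinner", "Causes", "cooked food"), sentences, kb))

Masking concept_1 instead would mirror this logic with distractors drawn from (*, relation, concept_2) matches.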
{ "question": [ "How do they select answer candidates for their QA task?" ], "question_id": [ "3c16d4cf5dc23223980d9c0f924cb9e4e6943f13" ], "nlp_background": [ "infinity" ], "topic_background": [ "research" ], "paper_read": [ "no" ], "search_query": [ "commonsense" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "AMS method." ], "yes_no": null, "free_form_answer": "", "evidence": [ "Each training sample is generated by three steps: align, mask and select, which we call as AMS method. Each sample in the dataset consists of a question and several candidate answers, which has the same form as the CommonsenseQA dataset. An example of constructing one training sample by masking concept $_2$ is shown in Table 2 ." ], "highlighted_evidence": [ "Each training sample is generated by three steps: align, mask and select, which we call as AMS method. Each sample in the dataset consists of a question and several candidate answers, which has the same form as the CommonsenseQA dataset." ] } ], "annotation_id": [ "0cd26d37acbdd99389bd11f817510e45aac0a82d" ], "worker_id": [ "101dbdd2108b3e676061cb693826f0959b47891b" ] } ] }
{ "caption": [ "Table 1: Some examples from the CommonsenseQA dataset shown in part A and some related triples from ConceptNet shown in part B. The correct answers in part A are in boldface.", "Table 2: The detailed procedures of constructing one multichoice question answering sample. The ∗ in the fourth step is a wildcard character. The correct answer for the question is underlined.", "Table 3: The statistics of CommonsenseQA and Winograd Schema Challenge datasets.", "Table 4: Accuracy (%) of different models on the CommonsenseQA test set.", "Table 5: Accuracy (%) of different models on the Winograd Schema Challenge dataset together with its subsets and the WNLI test set. MTP denotes masked token prediction, which is employed in (Kocijan et al. 2019). MCQA denotes multi-choice question-answering format, which is employed in this paper.", "Table 6: The accuracy (%) of different models on the GLUE test sets. We report Matthews corr. on CoLA, Spearman corr. on STS-B, accuracy on MNLI, QNLI, SST-2 and RTE, F1-score on QQP and MRPC, which is the same as (Devlin et al. 2018).", "Table 7: Accuracy (%) of different models on CommonsenseQA development set. The source data and tasks are employed to pre-train BERT CS. MCQA represents for multi-choice question answering task and MLM represents for masked language modeling task.", "Table 8: Several cases from the Winograd Schema Challenge dataset. The pronouns in questions are in square brackets. The correct candidates and correct decisions by models are in boldface.", "Table 9: The accuracy (%) of different models on two partitions of WSC dataset.", "Figure 1: The model performance curve on CommonsenseQA development set along with the pre-training steps." ], "file": [ "1-Table1-1.png", "3-Table2-1.png", "4-Table3-1.png", "4-Table4-1.png", "5-Table5-1.png", "5-Table6-1.png", "6-Table7-1.png", "7-Table8-1.png", "7-Table9-1.png", "7-Figure1-1.png" ] }
1604.05781
What we write about when we write about causality: Features of causal statements across large-scale social discourse
Identifying and communicating relationships between causes and effects is important for understanding our world, but is affected by language structure, cognitive and emotional biases, and the properties of the communication medium. Despite the increasing importance of social media, much remains unknown about causal statements made online. To study real-world causal attribution, we extract a large-scale corpus of causal statements made on the Twitter social network platform as well as a comparable random control corpus. We compare causal and control statements using statistical language and sentiment analysis tools. We find that causal statements have a number of significant lexical and grammatical differences compared with controls and tend to be more negative in sentiment than controls. Causal statements made online tend to focus on news and current events, medicine and health, or interpersonal relationships, as shown by topic models. By quantifying the features and potential biases of causality communication, this study improves our understanding of the accuracy of information and opinions found online.
{ "section_name": [ "Introduction", "Dataset, filtering, and corpus selection", "Tagging and corpus comparison", "Cause-trees", "Sentiment analysis", "Topic modeling", "Results", "Discussion", "Acknowledgments" ], "paragraphs": [ [ "Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect.", "Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 .", "How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 .", "Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are with high certainty causal statements, with the goal to better understand key characteristics about causal statements that differ from everyday online communication.", "The rest of this paper is organized as follows: In Sec. 
\"Materials and Methods\" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. \"Results\" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. \"Discussion\" ." ], [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.", "All document text was processed the same way. Punctuation, XML characters, and hyperlinks were removed, as were Twitter-specific “at-mentions” and “hashtags” (see also the Appendix). There is useful information here, but it is either not natural language text, or it is Twitter-specific, or both. Documents were broken into individual words (unigrams) on whitespace. Casing information was retained, as we will use it for our Named Entity analysis, but otherwise all words were considered lowercase only (see also the Appendix). Stemming BIBREF30 and lemmatization BIBREF31 were not performed.", "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ], [ "Documents were further studied by annotating their unigrams with Parts-of-Speech (POS) and Named Entities (NE) tags. POS tagging was done using NLTK v3.1 BIBREF29 which implements an averaged perceptron classifier BIBREF32 trained on the Brown Corpus BIBREF33 . (POS tagging is affected by punctuation; we show in the Appendix that our results are relatively robust to the removal of punctuation.) 
POS tags denote the nouns, verbs, and other grammatical constructs present in a document. Named Entity Recognition (NER) was performed using the 4-class, distributional similarity tagger provided as part of the Stanford CoreNLP v3.6.0 toolkit BIBREF34 . NER aims to identify and classify proper words in a text. The NE classifications considered were: Organization, Location, Person, and Misc. The Stanford NER tagger uses a conditional random field model BIBREF35 trained on diverse sets of manually-tagged English-language data (CoNLL-2003) BIBREF34 . Conditional random fields allow dependencies between words so that `New York' and `New York Times', for example, are classified separately as a location and an organization, respectively. These taggers are commonly used and often provide reasonably accurate results, but there is always potential ambiguity in written text and improving upon these methods remains an active area of research.", "Unigrams, POS, and NEs were compared between the causal and control corpora using odds ratios (ORs): ", "$$\operatorname{OR}(x) = \frac{p_C(x)/ (1-p_C(x))}{p_N(x) / (1-p_N(x))},$$ (Eq. 1) ", " where $p_C(x)$ and $p_N(x)$ are the probabilities that a unigram, POS, or NE $x$ occurs in the causal and control corpus, respectively. These probabilities were computed for each corpus separately as $p(x) = f(x) / \sum _{x^{\prime } \in V} f(x^{\prime })$ , where $f(x)$ is the total number of occurrences of $x$ in the corpus and $V$ is the relevant set of unigrams, POS, or NEs. Confidence intervals for the ORs were computed using Wald's methodology BIBREF36 .", "As there are many unique unigrams in the text, when computing unigram ORs we focused on the most meaningful unigrams within each corpus by using the following filtering criteria: we considered only the $\operatorname{OR}$ s of the 1500 most frequent unigrams in that corpus that also have a term-frequency-inverse-document-frequency (tf-idf) score above the 90th percentile for that corpus BIBREF37 . The tf-idf was computed as ", "$$\mbox{tf-idf}(w) = \log f(w) \times \log \left( \frac{D}{\mathit {df}(w)} \right) ,$$ (Eq. 2) ", "where $D$ is the total number of documents in the corpus, and $\mathit {df}(w)$ is the number of documents in the corpus containing unigram $w$ . Intuitively, unigrams with higher tf-idf scores appear frequently, but are not so frequent that they are ubiquitous through all documents. Filtering via tf-idf is standard practice in the information retrieval and data mining fields." ], [ "For a better understanding of the higher-order language structure present in text phrases, cause-trees were constructed. A cause-tree starts with a root cause word (either `caused', `causing' or `causes'), then the two most probable words following (preceding) the root are identified. Next, the root word plus one of the top probable words is combined into a bigram and the top two most probable words following (preceding) this bigram are found. Repeatedly applying this process builds a binary tree representing the $n$ -grams that begin with (terminate at) the root word. This process can continue until a certain $n$ -gram length is reached or until there are no more documents long enough to search." ], [ "Sentiment analysis was applied to estimate the emotional content of documents.
Two levels of analysis were used: a method where individual unigrams were given crowdsourced numeric sentiment scores, and a second method involving a trained classifier that can incorporate document-level phrase information.", "For the first sentiment analysis, each unigram $w$ was assigned a crowdsourced “labMT” sentiment score $s(w)$ BIBREF5 . (Unlike BIBREF5 , scores were recentered by subtracting the mean, $s(w) \leftarrow s(w)-\left<s\right>$ .) Unigrams determined by volunteer raters to have a negative emotional sentiment (`hate', `death', etc.) have $s(w) < 0$ , while unigrams determined to have a positive emotional sentiment (`love', `happy', etc.) tend to have $s(w) > 0$ . Unigrams that have labMT scores and are above the 90th percentile of tf-idf for the corpus form the set $\tilde{V}$ . (Unigrams in $\tilde{V}$ need not be among the 1500 most frequent unigrams.) The set $\tilde{V}$ captures 87.9% (91.5%) of total unigrams in the causal (control) corpus. Crucially, the tf-idf filtering ensures that the words `caused', `causes', and `causing', which have a slight negative sentiment, are not included and do not introduce a systematic bias when comparing the two corpora.", "This sentiment measure works on a per-unigram basis, and is therefore best suited for large bodies of text, not short documents BIBREF5 . Instead of considering individual documents, the distributions of labMT scores over all unigrams for each corpus were used to compare the corpora. In addition, a single sentiment score for each corpus was computed as the average sentiment score over all unigrams in that corpus, weighted by unigram frequency: $\sum _{w \in \tilde{V}} {f(w) s(w)} \Big / \sum _{w^{\prime } \in \tilde{V}} f(w^{\prime })$ .", "To supplement this sentiment analysis method, we applied a second method capable of estimating, with reasonable accuracy, the sentiment of individual documents. We applied the sentiment classifier BIBREF38 included in the Stanford CoreNLP v3.6.0 toolkit to the documents in each corpus. Documents were individually classified into one of five categories: very negative, negative, neutral, positive, very positive. The data used to train this classifier are taken from positive and negative reviews of movies (Stanford Sentiment Treebank v1.0) BIBREF38 ." ], [ "Lastly, we applied topic modeling to the causal corpus to determine the topical foci most discussed in causal statements. Topics were built from the causal corpus using Latent Dirichlet Allocation (LDA) BIBREF39 . Under LDA each document is modeled as a bag-of-words or unordered collection of unigrams. Topics are considered as mixtures of unigrams by estimating conditional distributions over unigrams: $P(w|T)$ , the probability of unigram $w$ given topic $T$ , and documents are considered as mixtures of topics via $P(T|d)$ , the probability of topic $T$ given document $d$ . These distributions are then found via statistical inference given the observed distributions of unigrams across documents. The total number of topics is a parameter chosen by the practitioner. For this study we used the MALLET v2.0.8RC3 topic modeling toolkit BIBREF40 for model inference. By inspecting the most probable unigrams per topic (according to $P(w|T)$ ), we found that a 10-topic model provided meaningful and distinct topics." ], [ "We have collected approximately 1M causal statements made on Twitter over the course of 2013, and for a control we gathered the same number of statements selected at random but controlling for time of year (see Methods).
We applied Parts-of-Speech (POS) and Named Entity (NE) taggers to all these texts. Some post-processed and tagged example documents, both causal and control, are shown in Fig. 1 A. We also applied sentiment analysis methods to these documents (Methods) and we have highlighted very positive and very negative words throughout Fig. 1 .", "In Fig. 1 B we present odds ratios for how frequently unigrams (words), POS, or NE appear in causal documents relative to control documents. The three unigrams most strongly skewed towards causal documents were `stress', `problems', and `trouble', while the three most skewed towards control documents were `photo', `ready', and `cute'. While these are only a small number of the unigrams present, this does imply a negative sentiment bias among causal statements (we return to this point shortly).", "Figure 1 B also presents odds ratios for POS tags, to help us measure the differences in grammatical structure between causal and control documents (see also the Appendix for the effects of punctuation and casing on these odds ratios). The causal corpus showed greater odds for plural nouns (Penn Treebank tag: NNS), plural proper nouns (NNPS), Wh-determiners/pronouns (WDT, WP$) such as `whichever',`whatever', `whose', or `whosever', and predeterminers (PDT) such as `all' or `both'. Predeterminers quantify noun phrases such as `all' in `after all the events that caused you tears', showing that many causal statements, despite the potential brevity of social media, can encompass or delineate classes of agents and/or patients. On the other hand, the causal corpus has lower odds than the control corpus for list items (LS), proper singular nouns (NNP), and interjections (UH).", "Lastly, Fig. 1 B contains odds ratios for NE tags, allowing us to quantify the types of proper nouns that are more or less likely to appear in causal statements. Of the four tags, only the “Person” tag is less likely in the causal corpus than the control. (This matches the odds ratio for the proper singular noun discussed above.) Perhaps surprisingly, these results together imply that causal statements are less likely to involve individual persons than non-causal statements. There is considerable celebrity news and gossip on social media BIBREF4 ; discussions of celebrities may not be especially focused on attributing causes to these celebrities. All other NE tags, Organization, Location, and Miscellaneous, occur more frequently in the causal corpus than the control. All the odds ratios in Fig. 1 B were significant at the $\\alpha = 0.05$ level except the List item marker (LS) POS tag.", "The unigram analysis in Fig. 1 does not incorporate higher-order phrase structure present in written language. To explore these structures specifically in the causal corpus, we constructed “cause-trees”, shown in Fig. 2 . Inspired by association mining BIBREF41 , a cause-tree is a binary tree rooted at either `caused', `causes', or `causing', that illustrates the most frequently occurring $n$ -grams that either begin or end with that root cause word (see Methods for details).", "The “causes” tree shows the focused writing (sentence segments) that many people use to express either the relationship between their own actions and a cause-and-effect (“even if it causes”), or the uncontrollable effect a cause may have on themselves: “causes me to have” shows a person's inability to control a causal event (“[...] i have central heterochromia which causes me to have dual colors in both eyes”). 
The `causing' tree reveals our ability to confine causal patterns to specific areas, and also our ability to be affected by others causal decisions. Phrases like “causing a scene in/at” and “causing a ruckus in/at” (from documents like “causing a ruckus in the hotel lobby typical [...]”) show people commonly associate bounds on where causal actions take place. The causing tree also shows people's tendency to emphasize current negativity: Phrases like “pain this is causing” coming from documents like “cant you see the pain you are causing her” supports the sentiment bias that causal attribution is more likely for negative cause-effect associations. Finally, the `caused' tree focuses heavily on negative events and indicates people are more likely to remember negative causal events. Documents with phrases from the caused tree (“[...] appalling tragedy [...] that caused the death”, “[...] live with this pain that you caused when i was so young [...]”) exemplify the negative events that are focused on are large-scale tragedies or very personal negative events in one's life.", "Taken together, the popularity of negative sentiment unigrams (Fig. 1 ) and $n$ -grams (Fig. 2 ) among causal documents shows that emotional sentiment or “valence” may play a role in how people perform causal attribution BIBREF27 . The “if it bleeds, it leads” mentality among news media, where violent and negative news are more heavily reported, may appeal to this innate causal association mechanism. (On the other hand, many news media themselves use social media for reporting.) The prevalence of negative sentiment also contrasts with the “better angels of our nature” evidence of Pinker BIBREF42 , illustrating one bias that shows why many find the results of Ref. BIBREF42 surprising.", "Given this apparent sentiment skew, we further studied sentiment (Fig. 3 ). We compared the sentiment between the corpora in four different ways to investigate the observation (Figs. 1 B and 2 ) that people focus more about negative concepts when they discuss causality. First, we computed the mean sentiment score of each corpus using crowdsourced “labMT” scores weighted by unigram frequency (see Methods). We also applied tf-idf filtering (Methods) to exclude very common words, including the three cause-words, from the mean sentiment score. The causal corpus text was slightly negative on average while the control corpus was slightly positive (Fig. 3 A). The difference in mean sentiment score was significant (t-test: $p < 0.01$ ).", "Second, we moved from the mean score to the distribution of sentiment across all (scored) unigrams in the causal and control corpora (Fig. 3 B). The causal corpus contained a large group of negative sentiment unigrams, with labMT scores in the approximate range $-3 < s < -1/2$ ; the control corpus had significantly fewer unigrams in this score range.", "Third, in Fig. 3 C we used POS tags to categorize scored unigrams into nouns, verbs, and adjectives. Studying the distributions for each, we found that nouns explain much of the overall difference observed in Fig. 3 B, with verbs showing a similar but smaller difference between the two corpora. Adjectives showed little difference. The distributions in Fig. 3 C account for 87.8% of scored text in the causal corpus and 77.2% of the control corpus. 
The difference in sentiment between corpora was significant for all distributions (t-test: $p < 0.01$ ).", "Fourth, to further confirm that the causal documents tend toward negative sentiment, we applied a separate, independent sentiment analysis using the Stanford NLP sentiment toolkit BIBREF38 to classify the sentiment of individual documents, not unigrams (see Methods). Instead of a numeric sentiment score, this classifier assigns documents to one of five categories ranging from very negative to very positive. The classifier showed that the causal corpus contains more negative and very negative documents than the control corpus, while the control corpus contains more neutral, positive, and very positive documents (Fig. 3 D).", "We have found language (Figs. 1 and 2 ) and sentiment (Fig. 3 ) differences between causal statements made on social media and other social media statements. But what is being discussed? What are the topical foci of causal statements? To study this, for our final analysis we applied topic models to the causal statements. Topic modeling finds groups of related terms (unigrams) by considering similarities between how those terms co-occur across a set of documents.", "We used the popular topic modeling method Latent Dirichlet Allocation (LDA) BIBREF39 . We ranked unigrams by how strongly associated they were with each topic. Inspecting these unigrams, we found that a 10-topic model discovered meaningful topics. See Methods for full details. The top unigrams for each topic are shown in Tab. 1 .", "Topics in the causal corpus tend to fall into three main categories: (i) news, covering current events, weather, etc.; (ii) medicine and health, covering cancer, obesity, stress, etc.; and (iii) relationships, covering problems, stress, crisis, drama, sorry, etc.", "While the topics are quite different, they are all similar in their use of negative sentiment words. The negative/global features in the `news' topic are captured in its most representative words: damage, fire, power, etc. Similar to news, the `accident' topic balances the more frequent day-to-day minor frustrations with the less frequent but more severe impacts of car accidents. The words `traffic' and `delays' are the most probable words for this topic, and describe common, low-impact occurrences. In contrast, `crash', `car', `accident' and `death' are the next most probable words for the accident topic, and generally show a focus on less common but higher-impact events.", "The `medical' topic also focused on negative words; highly probable words for this topic included `cancer', `break', `disease', `blood', etc. Meanwhile, the `body' topic contained words like `stress', `lose', and `weight', showing a focus on our more personal struggles with body image. Besides body image, the `injuries' topic uses specific pronouns (`his', `him', `her') in references to a person's own injuries or the injuries of others such as athletes.", "Aside from more factual information, social information is well represented in causal statements. The `problems' topic shows that people attribute their problems to many others, with terms like `dont', `people', `they', `them'. The `stress' topic also uses general words such as `more', `than', or `people' to link stress to all people, and in the same vein, the `crisis' topic focuses on problems within organizations such as governments. The `drama' and `sorry' topics tend towards more specific causal statements. 
Drama used the words: `like', `she', and `her' while documents in the sorry topic tended to address other people.", "The topics of causal documents discovered by LDA showed that both general and specific statements are made regarding news, medicine, and relationships when individuals make causal attributions online." ], [ "The power of online communication is the speed and ease with which information can be propagated by potentially any connected users. Yet these strengths come at a cost: rumors and misinformation also spread easily. Causal misattribution is at the heart of many rumors, conspiracy theories, and misinformation campaigns.", "Given the central role of causal statements, further studies of the interplay of information propagation and online causal attributions are crucial. Are causal statements more likely to spread online and, if so, in which ways? What types of social media users are more or less likely to make causal statements? Will a user be more likely to make a causal statement if they have recently been exposed to one or more causal statements from other users?", "The topics of causal statements also bring forth important questions to be addressed: how timely are causal statements? Are certain topics always being discussed in causal statements? Are there causal topics that are very popular for only brief periods and then forgotten? Temporal dynamics of causal statements are also interesting: do time-of-day or time-of-year factors play a role in how causal statements are made? Our work here focused on a limited subset of causal statements, but more generally, these results may inform new methods for automatically detecting causal statements from unstructured, natural language text BIBREF17 . Better computational tools focused on causal statements are an important step towards further understanding misinformation campaigns and other online activities. Lastly, an important but deeply challenging open question is how, if it is even possible, to validate the accuracy of causal statements. Can causal statements be ranked by some confidence metric(s)? We hope to pursue these and other questions in future research.", "Parts-of-speech tagging depends on punctuation and casing, which we filtered in our data, so a study of how robust the POS algorithm is to punctuation and casing removal is important. We computed POS tags for the corpora with and without casing as well as with and without punctuation (which includes hashtags, links and at-symbols). Two tags mentioned in Fig. 1 B, NNPS and LS (which was not significant), were affected by punctuation removal. Otherwise, there is a strong correlation (Fig. 4 ) between Odds Ratios (causal vs. control) with punctuation and without punctuation, including casing and without casing ( $\\rho = 0.71$ and $0.80$ , respectively), indicating the POS differences between the corpora were primarily not due to the removal of punctuation or casing." ], [ "We thank R. Gallagher for useful comments and gratefully acknowledge the resources provided by the Vermont Advanced Computing Core. This material is based upon work supported by the National Science Foundation under Grant No. ISS-1447634." ] ] }
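To make the odds-ratio comparisons of Fig. 1 B (and the punctuation/casing robustness check in the Appendix) concrete, the following is a minimal sketch of how log odds ratios with 95% Wald confidence intervals can be computed for a tag or unigram across the causal and control corpora, together with a rank-correlation check between two preprocessing variants. This is an illustrative reconstruction, not the authors' code; the count bookkeeping and the use of Spearman's rank correlation for the robustness check are assumptions (the paper reports $\rho$ without naming the estimator).

```python
import math
from scipy.stats import spearmanr

def log_odds_ratio(k_causal, n_causal, k_control, n_control, z=1.96):
    """Log odds ratio of one tag/unigram in causal vs. control, with a 95% Wald CI.

    k_*: tokens carrying the tag in each corpus; n_*: total tokens in each corpus.
    Assumes all four contingency-table cells are nonzero.
    """
    a, b = k_causal, n_causal - k_causal      # tag / non-tag counts, causal corpus
    c, d = k_control, n_control - k_control   # tag / non-tag counts, control corpus
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Wald standard error
    return log_or, (log_or - z * se, log_or + z * se)

def compare_corpora(causal_counts, control_counts, n_causal, n_control):
    """causal_counts / control_counts: hypothetical dicts mapping tag -> token count."""
    return {tag: log_odds_ratio(causal_counts[tag], n_causal,
                                control_counts[tag], n_control)
            for tag in causal_counts.keys() & control_counts.keys()}

def robustness(ratios_with_punct, ratios_without_punct):
    """Rank-correlate log odds ratios computed with vs. without punctuation,
    in the spirit of the Appendix check (estimator choice is an assumption)."""
    tags = sorted(ratios_with_punct.keys() & ratios_without_punct.keys())
    x = [ratios_with_punct[t][0] for t in tags]
    y = [ratios_without_punct[t][0] for t in tags]
    return spearmanr(x, y).correlation
```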
{ "question": [ "How do they extract causality from text?", "What is the source of the \"control\" corpus?", "What are the selection criteria for \"causal statements\"?", "Do they use expert annotations, crowdsourcing, or only automatic methods to analyze the corpora?", "how do they collect the comparable corpus?", "How do they collect the control corpus?" ], "question_id": [ "4c822bbb06141433d04bbc472f08c48bc8378865", "1baf87437b70cc0375b8b7dc2cfc2830279bc8b5", "0b31eb5bb111770a3aaf8a3931d8613e578e07a8", "7348e781b2c3755b33df33f4f0cab4b94fcbeb9b", "f68bd65b5251f86e1ed89f0c858a8bb2a02b233a", "e111925a82bad50f8e83da274988b9bea8b90005" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "two", "two" ], "topic_background": [ "familiar", "familiar", "familiar", "familiar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no", "yes", "yes" ], "search_query": [ "social", "social", "social", "social", "social", "social" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They identify documents that contain the unigrams 'caused', 'causing', or 'causes'", "evidence": [ "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ], "highlighted_evidence": [ "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'." ] } ], "annotation_id": [ "f286d3a109fe0b38fcee6121e231001a4704e9c8" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Randomly selected from a Twitter dump, temporally matched to causal documents", "evidence": [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. 
(The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.", "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ], "highlighted_evidence": [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API.", "Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present." ] } ], "annotation_id": [ "b2733052258dc2ad74edbb76c3f152740e30bdbc" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Presence of only the exact unigrams 'caused', 'causing', or 'causes'", "evidence": [ "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. 
These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ], "highlighted_evidence": [ "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'." ] } ], "annotation_id": [ "ae22aca6f06a3c10293e77feb2defd1a052ebf47" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Only automatic methods", "evidence": [ "The rest of this paper is organized as follows: In Sec. \"Materials and Methods\" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. \"Results\" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. \"Discussion\" ." ], "highlighted_evidence": [ "In Sec. \"Materials and Methods\" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with." ] } ], "annotation_id": [ "0ce98e42cf869d3feab61c966335792e98d16ad0" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Randomly from a Twitter dump", "evidence": [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.", "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. 
We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ], "highlighted_evidence": [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API.", "Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present." ] } ], "annotation_id": [ "34a0794200f1e29c3849bfa03a4f6128de26733b" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Randomly from Twitter", "evidence": [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API. Twitter activity consists of short posts called tweets which are limited to 140 characters. Retweets, where users repost a tweet to spread its content, were not considered. (The spread of causal statements will be considered in future work.) We considered only English-language tweets for this study. To avoid cross-language effects, we kept only tweets with a user-reported language of `English' and, as a second constraint, individual tweets needed to match more English stopwords than any other language's set of stopwords. Stopwords considered for each language were determined using NLTK's database BIBREF29 . A tweet will be referred to as a `document' for the rest of this work.", "Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'. The word `cause' was not included due to its use as a popular contraction for `because'. One `cause-word' per document restricted the analysis to single relationships between two relata. Documents that contain bidirectional words (`associate', `relate', `connect', `correlate', and any of their stems) were also not selected for analysis. This is because our focus is on causality, an inherently one-sided relationship between two objects. We also did not consider additional synonyms of these cause words, although that could be pursued for future work. Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. The end result of this procedure identified 965,560 causal and 965,560 control documents. 
Each of the three “cause-words”, `caused', `causes', and `causing' appeared in 38.2%, 35.0%, and 26.8% of causal documents, respectively." ], "highlighted_evidence": [ "Data was collected from a 10% uniform sample of Twitter posts made during 2013, specifically the Gardenhose API.", "Control documents were also selected. These documents did not contain any of `caused', `causing', or `causes', nor any bidirectional words, and are further matched temporally to obtain the same number of control documents as causal documents in each fifteen-minute period during 2013. Control documents were otherwise selected randomly; causal synonyms may be present. " ] } ], "annotation_id": [ "d3219ac0de3157cec4bf78b9f020c264071b86a8" ], "worker_id": [ "057bf5a20e4406f1f05cf82ecd49cf4f227dd287" ] } ] }
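The selection criteria quoted repeatedly in the evidence above reduce to a very small filter. The sketch below labels a tokenized tweet as causal, control-eligible, or excluded; it is a simplified illustration only. The stem matching for bidirectional words and the downstream 15-minute temporal matching of controls are assumptions about details the paper states only in prose.

```python
CAUSE_WORDS = {"caused", "causes", "causing"}          # 'cause' itself is excluded
BIDIRECTIONAL_STEMS = ("associat", "relat", "connect", "correlat")

def classify_document(tokens):
    """Label a tokenized tweet as 'causal', 'control' (eligible), or None (excluded).

    Causal: exactly one cause-word and no bidirectional words.
    Control-eligible: no cause-words and no bidirectional words; controls are then
    sampled at random to match the causal counts per 15-minute window (not shown).
    """
    if any(tok.startswith(BIDIRECTIONAL_STEMS) for tok in tokens):
        return None
    n_cause = sum(tok in CAUSE_WORDS for tok in tokens)
    if n_cause == 1:
        return "causal"
    if n_cause == 0:
        return "control"
    return None  # more than one cause-word: not a single two-relata relationship

# Hypothetical usage:
print(classify_document("the storm caused major delays downtown".split()))  # 'causal'
```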
{ "caption": [ "Fig. 1. Measuring the differences between causal and control documents. (A) Examples of processed documents tagged by Parts-of-Speech (POS) or Named Entities (NEs). Unigrams highlighted in red (yellow) are in the bottom 10% (top 10%) of the labMT sentiment scores. (B) Log Odds ratios with 95% Wald confidence intervals for the most heavily skewed unigrams, POS, and all NEs between the causal and control corpus. POS tags that are plural and use Wh-pronouns (that, what, which, ...) are more common in the causal corpus, while singular nouns and list items are more common in the controls. Finally, the ‘Person’ tag is the only NE less likely in the causal corpus. Certain unigrams were censored for presentation only, not analysis. All shown odds ratios were significant at the α = 0.05 level except LS (List item markers). See also the Appendix.", "Fig. 2. “Cause-trees” containing the most probable n-grams terminating at (left) or beginning with (right) a chosen root cause-word (see Methods). Line widths are log proportional to their corresponding n-gram frequency and bar plots measure the 4-gram per-document rate N(4-gram)/D. Most trees express negative sentiment consistent with the unigram analysis (Fig. 1). The ‘causes’ tree shows (i) people think in terms of causal probability (“you know what causes [. . . ]”), and (ii) people use causal language when they are directly affected or being affected by another (“causes you”, “causes me”). The ‘causing’ tree is more global (“causing a ruckus/scene”) and ego-centric (“pain you are causing”). The ‘caused’ tree focuses on negative sentiment and alludes to humans retaining negative causal thoughts in the past.", "Fig. 3. Sentiment analysis revealed differences between the causal and control corpora. (A) The mean unigram sentiment score (see Methods), computed from crowdsourced “labMT” scores [6], was more negative for the causal corpus than for the control. This held whether or not tf-idf filtering was applied. (B) The distribution of unigram sentiment scores for the two corpora showed more negative unigrams (with scores in the approximate range −3 < s < −1/2) in the causal corpus compared with the control corpus. (C) Breaking the sentiment distribution down by Parts-of-Speech, nouns show the most pronounced difference in sentiment between cause and control; verbs and adjectives are also more negative in the causal corpus than the control but with less of a difference than nouns. POS tags corresponding to nouns, verbs, and adjectives together account for 87.8% and 77.2% of the causal and control corpus text, respectively. (D) Applying a different sentiment analysis tool—a trained sentiment classifier [39] that assigns individual documents to one of five categories—the causal corpus had an overabundance of negative sentiment documents and fewer positive sentiment documents than the control. This shift from very positive to very negative documents further supports the tendency for causal statements to be negative.", "TABLE I TOPICAL FOCI OF CAUSAL DOCUMENTS. EACH COLUMN LISTS THE UNIGRAMS MOST HIGHLY ASSOCIATED (IN DESCENDING ORDER) WITH A TOPIC, COMPUTED FROM A 10-TOPIC LATENT DIRICHLET ALLOCATION MODEL. THE TOPICS GENERALLY FALL INTO THREE BROAD CATEGORIES: NEWS, MEDICINE, AND RELATIONSHIPS. MANY TOPICS PLACE AN EMPHASIS ON NEGATIVE SENTIMENT TERMS. TOPIC NAMES WERE DETERMINED MANUALLY. WORDS ARE HIGHLIGHTED ACCORDING TO SENTIMENT SCORE AS IN FIG. 1.", "Fig. 4. 
Comparison of Odds Ratios for all Parts-of-Speech (POS) tags with punctuation retained and removed for documents with and without casing. Tags Cardinal number (CD), List item marker (LS), and Proper noun plural (NNPS) were most affected by removing punctuation." ], "file": [ "4-Figure1-1.png", "5-Figure2-1.png", "6-Figure3-1.png", "7-TableI-1.png", "8-Figure4-1.png" ] }
1607.06275
Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering
While question answering (QA) with neural networks, i.e. neural QA, has achieved promising results in recent years, the lack of large-scale real-world QA datasets remains a challenge for developing and evaluating neural QA systems. To alleviate this problem, we propose WebQA, a large-scale, human-annotated, real-world QA dataset with more than 42k questions and 556k evidences. Existing neural QA methods treat QA either as a sequence generation problem or as a classification/ranking problem, and therefore face challenges of expensive softmax computation, handling unseen answers, or requiring a separate candidate-answer generation component. In this work, we cast neural QA as a sequence labeling problem and propose an end-to-end sequence labeling model that overcomes all of the above challenges. Experimental results on WebQA show that our model outperforms the baselines significantly, with an F1 score of 74.69% on word-based input, and the performance drops by only 3.72 F1 points on the more challenging character-based input.
{ "section_name": [ "Introduction", "Factoid QA as Sequence Labeling", "Overview", "Long Short-Term Memory (LSTM)", "Question LSTM", "Evidence LSTMs", "Sequence Labeling", "Training", "WebQA Dataset", "Baselines", "Evaluation Method", "Model Settings", "Comparison with Baselines", "Evaluation on the Entire WebQA Dataset", "Effect of Word Embedding", "Effect of q-e.comm and e-e.comm Features", "Effect of Question Representations", "Effect of Evidence LSTMs Structures", "Word-based v.s. Character-based Input", "Conclusion and Future Work" ], "paragraphs": [ [ "Question answering (QA) with neural network, i.e. neural QA, is an active research direction along the road towards the long-term AI goal of building general dialogue agents BIBREF0 . Unlike conventional methods, neural QA does not rely on feature engineering and is (at least nearly) end-to-end trainable. It reduces the requirement for domain specific knowledge significantly and makes domain adaption easier. Therefore, it has attracted intensive attention in recent years.", "Resolving QA problem requires several fundamental abilities including reasoning, memorization, etc. Various neural methods have been proposed to improve such abilities, including neural tensor networks BIBREF1 , recursive networks BIBREF2 , convolution neural networks BIBREF3 , BIBREF4 , BIBREF5 , attention models BIBREF6 , BIBREF5 , BIBREF7 , and memories BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , etc. These methods achieve promising results on various datasets, which demonstrates the high potential of neural QA. However, we believe there are still two major challenges for neural QA:", "System development and/or evaluation on real-world data: Although several high quality and well-designed QA datasets have been proposed in recent years, there are still problems about using them to develop and/or evaluate QA system under real-world settings due to data size and the way they are created. For example, bAbI BIBREF0 and the 30M Factoid Question-Answer Corpus BIBREF13 are artificially synthesized; the TREC datasets BIBREF14 , Free917 BIBREF15 and WebQuestions BIBREF16 are human generated but only have few thousands of questions; SimpleQuestions BIBREF11 and the CNN and Daily Mail news datasets BIBREF6 are large but generated under controlled conditions. Thus, a new large-scale real-world QA dataset is needed.", "A new design choice for answer production besides sequence generation and classification/ranking: Without loss of generality, the methods used for producing answers in existing neural QA works can be roughly categorized into the sequence generation type and the classification/ranking type. The former generates answers word by word, e.g. BIBREF0 , BIBREF10 , BIBREF6 . As it generally involves INLINEFORM0 computation over a large vocabulary, the computational cost is remarkably high and it is hard to produce answers with out-of-vocabulary word. The latter produces answers by classification over a predefined set of answers, e.g. BIBREF12 , or ranking given candidates by model score, e.g. BIBREF5 . Although it generally has lower computational cost than the former, it either also has difficulties in handling unseen answers or requires an extra candidate generating component which is hard for end-to-end training. Above all, we need a new design choice for answer production that is both computationally effective and capable of handling unseen words/answers.", "In this work, we address the above two challenges by a new dataset and a new neural QA model. 
Our contributions are two-fold:", "Experimental results show that our model outperforms baselines with a large margin on the WebQA dataset, indicating that it is effective. Furthermore, our model even achieves an F1 score of 70.97% on character-based input, which is comparable with the 74.69% F1 score on word-based input, demonstrating that our model is robust." ], [ "In this work, we focus on open-domain factoid QA. Taking Figure FIGREF3 as an example, we formalize the problem as follows: given each question Q, we have one or more evidences E, and the task is to produce the answer A, where an evidence is a piece of text of any length that contains relevant information to answer the question. The advantage of this formalization is that evidences can be retrieved from web or unstructured knowledge base, which can improve system coverage significantly.", "Inspired by BIBREF18 , we introduce end-to-end sequence labeling as a new design choice for answer production in neural QA. Given a question and an evidence, we use CRF BIBREF17 to assign a label to each word in the evidence to indicate whether the word is at the beginning (B), inside (I) or outside (O) of the answer (see Figure FIGREF3 for example). The key difference between our work and BIBREF18 is that BIBREF18 needs a lot work on feature engineering which further relies on POS/NER tagging, dependency parsing, question type analysis, etc. While we avoid feature engineering, and only use one single model to solve the problem. Furthermore, compared with sequence generation and classification/ranking methods for answer production, our method avoids expensive INLINEFORM0 computation and can handle unseen answers/words naturally in a principled way.", "Formally, we formalize QA as a sequence labeling problem as follows: suppose we have a vocabulary INLINEFORM0 of size INLINEFORM1 , given question INLINEFORM2 and evidence INLINEFORM3 , where INLINEFORM4 and INLINEFORM5 are one-hot vectors of dimension INLINEFORM6 , and INLINEFORM7 and INLINEFORM8 are the number of words in the question and evidence respectively. The problem is to find the label sequence INLINEFORM9 which maximizes the conditional probability under parameter INLINEFORM10 DISPLAYFORM0 ", "In this work, we model INLINEFORM0 by a neural network composed of LSTMs and CRF." ], [ "Figure FIGREF4 shows the structure of our model. The model consists of three components: (1) question LSTM for computing question representation; (2) evidence LSTMs for evidence analysis; and (3) a CRF layer for sequence labeling. The question LSTM in a form of a single layer LSTM equipped with a single time attention takes the question as input and generates the question representation INLINEFORM0 . The three-layer evidence LSTMs takes the evidence, question representation INLINEFORM1 and optional features as input and produces “features” for the CRF layer. The CRF layer takes the “features” as input and produces the label sequence. The details will be given in the following sections." ], [ "Following BIBREF19 , we define INLINEFORM0 as a function mapping its input INLINEFORM1 , previous state INLINEFORM2 and output INLINEFORM3 to current state INLINEFORM4 and output INLINEFORM5 : DISPLAYFORM0 ", "where INLINEFORM0 are parameter matrices, INLINEFORM1 are biases, INLINEFORM2 is LSTM layer width, INLINEFORM3 is the INLINEFORM4 function, INLINEFORM5 , INLINEFORM6 and INLINEFORM7 are the input gate, forget gate and output gate respectively." 
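As a concrete reference for the gated recurrence defined above, the following is a single LSTM step in NumPy using the standard formulation with input, forget, and output gates. It is a generic sketch rather than the authors' implementation; the weight shapes, the concatenated-input parameterization, and the absence of peephole connections are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step: gates computed from the current input and the previous output.

    x_t: input vector; h_prev / c_prev: previous output and cell state;
    params: dict of weight matrices W_* (acting on [x_t; h_prev]) and biases b_*.
    """
    z = np.concatenate([x_t, h_prev])
    i = sigmoid(params["W_i"] @ z + params["b_i"])   # input gate
    f = sigmoid(params["W_f"] @ z + params["b_f"])   # forget gate
    o = sigmoid(params["W_o"] @ z + params["b_o"])   # output gate
    g = np.tanh(params["W_c"] @ z + params["b_c"])   # candidate cell state
    c_t = f * c_prev + i * g                         # new cell state
    h_t = o * np.tanh(c_t)                           # new output
    return h_t, c_t
```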
], [ "The question LSTM consists of a single-layer LSTM and a single-time attention model. The question INLINEFORM0 is fed into the LSTM to produce a sequence of vector representations INLINEFORM1 DISPLAYFORM0 ", "where INLINEFORM0 is the embedding matrix and INLINEFORM1 is word embedding dimension. Then a weight INLINEFORM2 is computed by the single-time attention model for each INLINEFORM3 DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 . And finally the weighted average INLINEFORM2 of INLINEFORM3 is used as the representation of the question DISPLAYFORM0 " ], [ "The three-layer evidence LSTMs processes evidence INLINEFORM0 INLINEFORM1 to produce “features” for the CRF layer.", "The first LSTM layer takes evidence INLINEFORM0 , question representation INLINEFORM1 and optional features as input. We find the following two simple common word indicator features are effective:", "Question-Evidence common word feature (q-e.comm): for each word in the evidence, the feature has value 1 when the word also occurs in the question, otherwise 0. The intuition is that words occurring in questions tend not to be part of the answers for factoid questions.", "", "Evidence-Evidence common word feature (e-e.comm): for each word in the evidence, the feature has value 1 when the word occurs in another evidence, otherwise 0. The intuition is that words shared by two or more evidences are more likely to be part of the answers.", "Although counterintuitive, we found non-binary e-e.comm feature values does not work well. Because the more evidences we considered, the more words tend to get non-zero feature values, and the less discriminative the feature is.", "The second LSTM layer stacks on top of the first LSTM layer, but processes its output in a reverse order. The third LSTM layer stacks upon the first and second LSTM layers with cross layer links, and its output serves as features for CRF layer.", "Formally, the computations are defined as follows DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 are one-hot feature vectors, INLINEFORM2 and INLINEFORM3 are embeddings for the features, and INLINEFORM4 and INLINEFORM5 are the feature embedding dimensions. Note that we use the same word embedding matrix INLINEFORM6 as in question LSTM." ], [ "Following BIBREF20 , BIBREF21 , we use CRF on top of evidence LSTMs for sequence labeling. The probability of a label sequence INLINEFORM0 given question INLINEFORM1 and evidence INLINEFORM2 is computed as DISPLAYFORM0 ", "where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 is the number of label types, INLINEFORM3 is the transition weight from label INLINEFORM4 to INLINEFORM5 , and INLINEFORM6 is the INLINEFORM7 -th value of vector INLINEFORM8 ." ], [ "The objective function of our model is INLINEFORM0 ", "where INLINEFORM0 is the golden label sequence, and INLINEFORM1 is training set.", "We use a minibatch stochastic gradient descent (SGD) BIBREF22 algorithm with rmsprop BIBREF23 to minimize the objective function. The initial learning rate is 0.001, batch size is 120, and INLINEFORM0 . We also apply dropout BIBREF24 to the output of all the LSTM layers. The dropout rate is 0.05. All these hyper-parameters are determined empirically via grid search on validation set." ], [ "In order to train and evaluate open-domain factoid QA system for real-world questions, we build a new Chinese QA dataset named as WebQA. The dataset consists of tuples of (question, evidences, answer), which is similar to example in Figure FIGREF3 . 
All the questions, evidences and answers are collected from web. Table TABREF20 shows some statistics of the dataset.", "The questions and answers are mainly collected from a large community QA website Baidu Zhidao and a small portion are from hand collected web documents. Therefore, all these questions are indeed asked by real-world users in daily life instead of under controlled conditions. All the questions are of single-entity factoid type, which means (1) each question is a factoid question and (2) its answer involves only one entity (but may have multiple words). The question in Figure FIGREF3 is a positive example, while the question “Who are the children of Albert Enistein?” is a counter example because the answer involves three persons. The type and correctness of all the question answer pairs are verified by at least two annotators.", "All the evidences are retrieved from Internet by using a search engine with questions as queries. We download web pages returned in the first 3 result pages and take all the text pieces which have no more than 5 sentences and include at least one question word as candidate evidences. As evidence retrieval is beyond the scope of this work, we simply use TF-IDF values to re-rank these candidates.", "For each question in the training set, we provide the top 10 ranked evidences to annotate (“Annotated Evidence” in Table TABREF20 ). An evidence is annotated as positive if the question can be answered by just reading the evidence without any other prior knowledge, otherwise negative. Only evidences whose annotations are agreed by at least two annotators are retained. We also provide trivial negative evidences (“Retrieved Evidence” in Table TABREF20 ), i.e. evidences that do not contain golden standard answers.", "For each question in the validation and test sets, we provide one major positive evidence, and maybe an additional positive one to compute features. Both of them are annotated. Raw retrieved evidences are also provided for evaluation purpose (“Retrieved Evidence” in Table TABREF20 ).", "The dataset will be released on the project page http://idl.baidu.com/WebQA.html." ], [ "We compare our model with two sets of baselines:", "MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 . It encodes question and evidence with a bag-of-word method and stores the representations of evidences in an external memory. A recurrent attention model is used to retrieve relevant information from the memory to answer the question.", "Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings. The simpler Attentive Reader uses a similar way as our work to compute attention for the evidence. And the more complex Impatient Reader computes attention after processing each question word.", "The key difference between our model and the two readers is that they produce answer by doing classification over a large vocabulary, which is computationally expensive and has difficulties in handling unseen words. However, as our model uses an end-to-end trainable sequence labeling technique, it avoids both of the two problems by its nature." 
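As a concrete illustration of the two common-word indicator features introduced in the Evidence LSTMs section above, the sketch below computes binary q-e.comm and e-e.comm values for each evidence token. It is a simplified sketch: tokenization is taken as given, the choice of which other evidence to compare against (the paper samples one at random) is assumed to be made upstream, and the English toy example is purely hypothetical since WebQA is a Chinese dataset.

```python
def comm_features(evidence_tokens, question_tokens, other_evidence_tokens):
    """Binary q-e.comm and e-e.comm features for each token of an evidence.

    q-e.comm = 1 if the token also occurs in the question, else 0.
    e-e.comm = 1 if the token also occurs in another evidence for the same
    question, else 0 (kept binary: non-binary counts were found unhelpful).
    """
    question_vocab = set(question_tokens)
    other_vocab = set(other_evidence_tokens)
    q_e = [1 if tok in question_vocab else 0 for tok in evidence_tokens]
    e_e = [1 if tok in other_vocab else 0 for tok in evidence_tokens]
    return q_e, e_e

# Hypothetical usage:
q = "who did einstein marry first".split()
ev = "einstein married his first wife mileva maric in 1903".split()
other = "mileva maric was the first wife of albert einstein".split()
print(comm_features(ev, q, other))
```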
], [ "The performance is measured with precision (P), recall (R) and F1-measure (F1) DISPLAYFORM0 ", "where INLINEFORM0 is the list of correctly answered questions, INLINEFORM1 is the list of produced answers, and INLINEFORM2 is the list of all questions .", "As WebQA is collected from web, the same answer may be expressed in different surface forms in the golden standard answer and the evidence, e.g. “北京 (Beijing)” v.s. “北京市 (Beijing province)”. Therefore, we use two ways to count correctly answered questions, which are referred to as “strict” and “fuzzy” in the tables:", "Strict matching: A question is counted if and only if the produced answer is identical to the golden standard answer;", "Fuzzy matching: A question is counted if and only if the produced answer is a synonym of the golden standard answer;", "And we also consider two evaluation settings:", "Annotated evidence: Each question has one major annotated evidence and maybe another annotated evidence for computing q-e.comm and e-e.comm features (Section SECREF14 );", "Retrieved evidence: Each question is provided with at most 20 automatically retrieved evidences (see Section SECREF5 for details). All the evidences will be processed by our model independently and answers are voted by frequency to decide the final result. Note that a large amount of the evidences are negative and our model should not produce any answer for them." ], [ "If not specified, the following hyper-parameters will be used in the reset of this section: LSTM layer width INLINEFORM0 (Section SECREF7 ), word embedding dimension INLINEFORM1 (Section SECREF9 ), feature embedding dimension INLINEFORM2 (Section SECREF9 ). The word embeddings are initialized with pre-trained embeddings using a 5-gram neural language model BIBREF25 and is fixed during training.", "We will show that injecting noise data is important for improving performance on retrieved evidence setting in Section SECREF37 . In the following experiments, 20% of the training evidences will be negative ones randomly selected on the fly, of which 25% are annotated negative evidences and 75% are retrieved trivial negative evidences (Section SECREF5 ). The percentages are determined empirically. Intuitively, we provide the noise data to teach the model learning to recognize unreliable evidence.", "For each evidence, we will randomly sample another evidence from the rest evidences of the question and compare them to compute the e-e.comm feature (Section SECREF14 ). We will develop more powerful models to process multiple evidences in a more principle way in the future.", "As the answer for each question in our WebQA dataset only involves one entity (Section SECREF5 ), we distinguish label Os before and after the first B in the label sequence explicitly to discourage our model to produce multiple answers for a question. For example, the golden labels for the example evidence in Figure FIGREF3 will became “Einstein/O1 married/O1 his/O1 first/O1 wife/O1 Mileva/B Marić/I in/O2 1903/O2”, where we use “O1” and “O2” to denote label Os before and after the first B . “Fuzzy matching” is also used for computing golden standard labels for training set.", "For each setting, we will run three trials with different random seeds and report the average performance in the following sections." ], [ "As the baselines can only predict one-word answers, we only do experiments on the one-word answer subset of WebQA, i.e. only questions with one-word answers are retained for training, validation and test. 
As shown in Table TABREF23 , our model achieves significant higher F1 scores than all the baselines.", "The main reason for the relative low performance of MemN2N is that it uses a bag-of-word method to encode question and evidence such that higher order information like word order is absent to the model. We think its performance can be improved by designing more complex encoding methods BIBREF26 and leave it as a future work.", "The Attentive and Impatient Readers only have access to the fixed length representations when doing classification. However, our model has access to the outputs of all the time steps of the evidence LSTMs, and scores the label sequence as a whole. Therefore, our model achieves better performance." ], [ "In this section, we evaluate our model on the entire WebQA dataset. The evaluation results are shown in Table TABREF24 . Although producing multi-word answers is harder, our model achieves comparable results with the one-word answer subset (Table TABREF23 ), demonstrating that our model is effective for both single-word and multi-word word settings.", "“Softmax” in Table TABREF24 means we replace CRF with INLINEFORM0 , i.e. replace Eq. ( EQREF19 ) with DISPLAYFORM0 ", "CRF outperforms INLINEFORM0 significantly in all cases. The reason is that INLINEFORM1 predicts each label independently, suggesting that modeling label transition explicitly is essential for improving performance. A natural choice for modeling label transition in INLINEFORM2 is to take the last prediction into account as in BIBREF27 . The result is shown in Table TABREF24 as “Softmax( INLINEFORM3 -1)”. However, its performance is only comparable with “Softmax” and significantly lower than CRF. The reason is that we can enumerate all possible label sequences implicitly by dynamic programming for CRF during predicting but this is not possible for “Softmax( INLINEFORM4 -1)” , which indicates CRF is a better choice.", "“Noise” in Table TABREF24 means whether we inject noise data or not (Section SECREF34 ). As all evidences are positive under the annotated evidence setting, the ability for recognizing unreliable evidence will be useless. Therefore, the performance of our model with and without noise is comparable under the annotated evidence setting. However, the ability is important to improve the performance under the retrieved evidence setting because a large amount of the retrieved evidences are negative ones. As a result, we observe significant improvement by injecting noise data for this setting." ], [ "As stated in Section SECREF34 , the word embedding INLINEFORM0 is initialized with LM embedding and kept fixed in training. We evaluate different initialization and optimization methods in this section. The evaluation results are shown in Table TABREF40 . The second row shows the results when the embedding is optimized jointly during training. The performance drops significantly. Detailed analysis reveals that the trainable embedding enlarge trainable parameter number and the model gets over fitting easily. The model acts like a context independent entity tagger to some extend, which is not desired. For example, the model will try to find any location name in the evidence when the word “在哪 (where)” occurs in the question. In contrary, pre-trained fixed embedding forces the model to pay more attention to the latent syntactic regularities. 
And it also carries basic priors such as “梨 (pear)” is fruit and “李世石 (Lee Sedol)” is a person, thus the model will generalize better to test data with fixed embedding. The third row shows the result when the embedding is randomly initialized and jointly optimized. The performance drops significantly further, suggesting that pre-trained embedding indeed carries meaningful priors." ], [ "As shown in Table TABREF41 , both the q-e.comm and e-e.comm features are effective, and the q-e.comm feature contributes more to the overall performance. The reason is that the interaction between question and evidence is limited and q-e.comm feature with value 1, i.e. the corresponding word also occurs in the question, is a strong indication that the word may not be part of the answer." ], [ "In this section, we compare the single-time attention method for computing INLINEFORM0 ( INLINEFORM1 , Eq. ( EQREF12 , EQREF13 )) with two widely used options: element-wise max operation INLINEFORM2 : INLINEFORM3 and element-wise average operation INLINEFORM4 : INLINEFORM5 . Intuitively, INLINEFORM6 can distill information in a more flexible way from { INLINEFORM7 }, while INLINEFORM8 tends to hide the differences between them, and INLINEFORM9 lies between INLINEFORM10 and INLINEFORM11 . The results in Table TABREF41 suggest that the more flexible and selective the operation is, the better the performance is." ], [ "We investigate the effect of evidence LSTMs layer number, layer width and cross layer links in this section. The results are shown in Figure TABREF46 . For fair comparison, we do not use cross layer links in Figure TABREF46 (a) (dotted lines in Figure FIGREF4 ), and highlight the results with cross layer links (layer width 64) with circle and square for retrieved and annotated evidence settings respectively. We can conclude that: (1) generally the deeper and wider the model is, the better the performance is; (2) cross layer links are effective as they make the third evidence LSTM layer see information in both directions." ], [ "Our model achieves fuzzy matching F1 scores of 69.78% and 70.97% on character-based input in annotated and retrieved evidence settings respectively (Table TABREF46 ), which are only 3.72 and 3.72 points lower than the corresponding scores on word-based input respectively. The performance is promising, demonstrating that our model is robust and effective." ], [ "In this work, we build a new human annotated real-world QA dataset WebQA for developing and evaluating QA system on real-world QA data. We also propose a new end-to-end recurrent sequence labeling model for QA. Experimental results show that our model outperforms baselines significantly.", "There are several future directions we plan to pursue. First, multi-entity factoid and non-factoid QA are also interesting topics. Second, we plan to extend our model to multi-evidence cases. Finally, inspired by Residual Network BIBREF28 , we will investigate deeper and wider models in the future." ] ] }
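The advantage of CRF over the per-token softmax discussed above comes from scoring whole label sequences and decoding them exactly with dynamic programming. The following Viterbi decoder over per-token label scores and a label-transition matrix is a generic sketch of that idea, not the authors' code; the toy label set {O1, B, I, O2} follows the training setup described earlier, but the score matrices here are random placeholders.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find the highest-scoring label sequence.

    emissions: (T, L) array of per-token label scores (e.g., evidence LSTM outputs).
    transitions: (L, L) array where transitions[a, b] scores label a -> label b.
    Returns the best label sequence as a list of label indices.
    """
    T, L = emissions.shape
    score = emissions[0].copy()              # best score ending in each label at t=0
    backptr = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        # score of extending every previous label a with every current label b
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    best = [int(score.argmax())]             # follow back-pointers from the best end
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]

# Toy usage: 5 tokens, labels {O1, B, I, O2} as in the training setup described above.
labels = ["O1", "B", "I", "O2"]
rng = np.random.default_rng(0)
path = viterbi_decode(rng.normal(size=(5, 4)), rng.normal(size=(4, 4)))
print([labels[i] for i in path])
```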
{ "question": [ "What languages do they experiment with?", "What are the baselines?", "What was the inter-annotator agreement?", "Did they use a crowdsourcing platform?" ], "question_id": [ "ba48c095c496d01c7717eaa271470c3406bf2d7c", "42a61773aa494f7b12838f71a949034c12084de1", "48c3e61b2ed7b3f97706e2a522172bf9b51ec467", "61fba3ab10f7b6906e27b028fb1d42ec601c3fb8" ], "nlp_background": [ "", "", "", "" ], "topic_background": [ "", "", "", "" ], "paper_read": [ "", "", "", "" ], "search_query": [ "", "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Chinese" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In order to train and evaluate open-domain factoid QA system for real-world questions, we build a new Chinese QA dataset named as WebQA. The dataset consists of tuples of (question, evidences, answer), which is similar to example in Figure FIGREF3 . All the questions, evidences and answers are collected from web. Table TABREF20 shows some statistics of the dataset." ], "highlighted_evidence": [ "In order to train and evaluate open-domain factoid QA system for real-world questions, we build a new Chinese QA dataset named as WebQA." ] } ], "annotation_id": [ "9bc1e6d6512d64bca5bab9996d376e165aea7081" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "MemN2N BIBREF12", "Attentive and Impatient Readers BIBREF6" ], "yes_no": null, "free_form_answer": "", "evidence": [ "We compare our model with two sets of baselines:", "MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 . It encodes question and evidence with a bag-of-word method and stores the representations of evidences in an external memory. A recurrent attention model is used to retrieve relevant information from the memory to answer the question.", "Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings. The simpler Attentive Reader uses a similar way as our work to compute attention for the evidence. And the more complex Impatient Reader computes attention after processing each question word." ], "highlighted_evidence": [ "We compare our model with two sets of baselines:\n\nMemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 .", "Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings." ] } ], "annotation_id": [ "48b8f39f3f973d4b9c07fde57dc20d413e661fb0" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "correctness of all the question answer pairs are verified by at least two annotators" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The questions and answers are mainly collected from a large community QA website Baidu Zhidao and a small portion are from hand collected web documents. Therefore, all these questions are indeed asked by real-world users in daily life instead of under controlled conditions. 
All the questions are of single-entity factoid type, which means (1) each question is a factoid question and (2) its answer involves only one entity (but may have multiple words). The question in Figure FIGREF3 is a positive example, while the question “Who are the children of Albert Enistein?” is a counter example because the answer involves three persons. The type and correctness of all the question answer pairs are verified by at least two annotators." ], "highlighted_evidence": [ "The type and correctness of all the question answer pairs are verified by at least two annotators." ] } ], "annotation_id": [ "c9e73bc1c63663bd1bc16df35c32c346beacf189" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0cfa6a44d0b5b52a69d56c8d45edecae537ae84c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: Factoid QA as sequence labeling.", "Figure 2: Neural recurrent sequence labeling model for factoid QA. The model consists of three components: “Question LSTM” for computing question representation (rq), “Evidence LSTMs” for analyzing evidence, and “CRF” for producing label sequence which indicates whether each word in the evidence is at the beginning (B), inside (I) or outside (O) of the answer. Each word in the evidence is also equipped with two 0-1 features (see Section 3.4). We plot rq multiple times for clarity.", "Table 1: Statistics of WebQA dataset.", "Table 2: Comparison with baselines on the one-word answer subset of WebQA.", "Table 3: Evaluation results on the entire WebQA dataset.", "Table 4: Effect of embedding initialization and training. Only fuzzy matching results are shown.", "Table 5: Effect of q-e.comm and e-e.comm features.", "Table 6: Effect of question representations.", "Figure 3: Effect of evidence LSTMs structures. For fair comparison, cross layer links are not used in (a).", "Table 7: Word-based v.s. character-based input.", "Table 8: Evaluation results on the CNN and Daily Mail news datasets." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "5-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "8-Table5-1.png", "8-Table6-1.png", "9-Figure3-1.png", "9-Table7-1.png", "10-Table8-1.png" ] }
1603.04553
Unsupervised Ranking Model for Entity Coreference Resolution
Coreference resolution is one of the first stages in deep language understanding and its importance has been well recognized in the natural language processing community. In this paper, we propose a generative, unsupervised ranking model for entity coreference resolution by introducing resolution mode variables. Our unsupervised system achieves 58.44% F1 score of the CoNLL metric on the English data from the CoNLL-2012 shared task (Pradhan et al., 2012), outperforming the Stanford deterministic system (Lee et al., 2013) by 3.01%.
{ "section_name": [ "Introduction", "Notations and Definitions", "Generative Ranking Model", "Resolution Mode Variables", "Features", "Model Learning", "Mention Detection", "Experimental Setup", "Results and Comparison", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Entity coreference resolution has become a critical component for many Natural Language Processing (NLP) tasks. Systems requiring deep language understanding, such as information extraction BIBREF2 , semantic event learning BIBREF3 , BIBREF4 , and named entity linking BIBREF5 , BIBREF6 all benefit from entity coreference information.", "Entity coreference resolution is the task of identifying mentions (i.e., noun phrases) in a text or dialogue that refer to the same real-world entities. In recent years, several supervised entity coreference resolution systems have been proposed, which, according to ng:2010:ACL, can be categorized into three classes — mention-pair models BIBREF7 , entity-mention models BIBREF8 , BIBREF9 , BIBREF10 and ranking models BIBREF11 , BIBREF12 , BIBREF13 — among which ranking models recently obtained state-of-the-art performance. However, the manually annotated corpora that these systems rely on are highly expensive to create, in particular when we want to build data for resource-poor languages BIBREF14 . That makes unsupervised approaches, which only require unannotated text for training, a desirable solution to this problem.", "Several unsupervised learning algorithms have been applied to coreference resolution. haghighi-klein:2007:ACLMain presented a mention-pair nonparametric fully-generative Bayesian model for unsupervised coreference resolution. Based on this model, ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. poon-domingos:2008:EMNLP proposed an entity-mention model that is able to perform joint inference across mentions by using Markov Logic. Unfortunately, these unsupervised systems' performance on accuracy significantly falls behind those of supervised systems, and are even worse than the deterministic rule-based systems. Furthermore, there is no previous work exploring the possibility of developing an unsupervised ranking model which achieved state-of-the-art performance under supervised settings for entity coreference resolution.", "In this paper, we propose an unsupervised generative ranking model for entity coreference resolution. Our experimental results on the English data from the CoNLL-2012 shared task BIBREF0 show that our unsupervised system outperforms the Stanford deterministic system BIBREF1 by 3.01% absolute on the CoNLL official metric. The contributions of this work are (i) proposing the first unsupervised ranking model for entity coreference resolution. (ii) giving empirical evaluations of this model on benchmark data sets. (iii) considerably narrowing the gap to supervised coreference resolution accuracy." ], [ "In the following, $D = \\lbrace m_0, m_1, \\ldots , m_n\\rbrace $ represents a generic input document which is a sequence of coreference mentions, including the artificial root mention (denoted by $m_0$ ). The method to detect and extract these mentions is discussed later in Section \"Mention Detection\" . 
Let $C = \\lbrace c_1, c_2, \\ldots , c_n\\rbrace $ denote the coreference assignment of a given document, where each mention $m_i$ has an associated random variable $c_i$ taking values in the set $\\lbrace 0, i, \\ldots , i-1\\rbrace $ ; this variable specifies $m_i$ 's selected antecedent ( $c_i \\in \\lbrace 1, 2, \\ldots , i-1\\rbrace $ ), or indicates that it begins a new coreference chain ( $c_i = 0$ )." ], [ "The following is a straightforward way to build a generative model for coreference: ", "$$\\begin{array}{rcl}\nP(D, C) & = & P(D|C)P(C) \\\\\n& = & \\prod \\limits _{j=1}^{n}P(m_j|m_{c_j})\\prod \\limits _{j=1}^{n}P(c_j|j)\n\\end{array}$$ (Eq. 3) ", "where we factorize the probabilities $P(D|C)$ and $P(C)$ into each position $j$ by adopting appropriate independence assumptions that given the coreference assignment $c_j$ and corresponding coreferent mention $m_{c_j}$ , the mention $m_j$ is independent with other mentions in front of it. This independent assumption is similar to that in the IBM 1 model on machine translation BIBREF15 , where it assumes that given the corresponding English word, the aligned foreign word is independent with other English and foreign words. We do not make any independent assumptions among different features (see Section \"Features\" for details).", "Inference in this model is efficient, because we can compute $c_j$ separately for each mention: $\nc^*_j = \\operatornamewithlimits{argmax}\\limits _{c_j} P(m_j|m_{c_j}) P(c_j|j)\n$ ", "The model is a so-called ranking model because it is able to identify the most probable candidate antecedent given a mention to be resolved." ], [ "According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:", " $\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .", " $\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.", " $\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions.", "Now, we can extend the generative model in Eq. 3 to: $\n\\begin{array}{rcl}\n& & P(D, C) = P(D, C, \\Pi ) \\\\\n& = & \\prod \\limits _{j=1}^{n}P(m_j|m_{c_j}, \\pi _j) P(c_j|\\pi _j, j) P(\\pi _j|j)\n\\end{array}\n$ ", "where we define $P(\\pi _j|j)$ to be uniform distribution. 
We model $P(m_j|m_{c_j}, \\pi _j)$ and $P(c_j|\\pi _j, j)$ in the following way: $\n\\begin{array}{l}\nP(m_j|m_{c_j}, \\pi _j) = t(m_j|m_{c_j}, \\pi _j) \\\\\nP(c_j|\\pi _j, j) = \\left\\lbrace \\begin{array}{ll}\nq(c_j|\\pi _j, j) & \\pi _j = attr \\\\\n\\frac{1}{j} & \\textrm {otherwise}\n\\end{array}\\right.\n\\end{array}\n$ ", "where $\\theta = \\lbrace t, q\\rbrace $ are parameters of our model. Note that in the attribute-matching mode ( $\\pi _j = attr$ ) we model $P(c_j|\\pi _j, j)$ with parameter $q$ , while in the other two modes, we use the uniform distribution. It makes sense because the position information is important for coreference resolved by matching attributes of two mentions such as resolving pronoun coreference, but not that important for those resolved by matching text or special relations like two mentions referring the same person and matching by the name. [t] Learning Model with EM Initialization: Initialize $\\theta _0 = \\lbrace t_0, q_0\\rbrace $ ", " $t=0$ to $T$ set all counts $c(\\ldots ) = 0$ ", "each document $D$ $j=1$ to $n$ $k=0$ to $j - 1$ $L_{jk} = \\frac{t(m_j|m_k,\\pi _j)q(k|\\pi _j, j)}{\\sum \\limits _{i = 0}^{j-1} t(m_j|m_i,\\pi _j)q(i|\\pi _j, j)}$ ", " $c(m_j, m_k, \\pi _j) \\mathrel {+}= L_{jk}$ ", " $c(m_k, \\pi _j) \\mathrel {+}= L_{jk}$ ", " $c(k, j, \\pi _j) \\mathrel {+}= L_{jk}$ ", " $c(j, \\pi _j) \\mathrel {+}= L_{jk}$ Recalculate the parameters $t(m|m^{\\prime }, \\pi ) = \\frac{c(m, m^{\\prime }, \\pi )}{c(m^{\\prime }, \\pi )}$ ", " $q(k, j, \\pi ) = \\frac{c(k, j, \\pi )}{c(j, \\pi )}$ " ], [ "In this section, we describe the features we use to represent mentions. Specifically, as shown in Table 1 , we use different features under different resolution modes. It should be noted that only the Distance feature is designed for parameter $q$ , all other features are designed for parameter $t$ ." ], [ "For model learning, we run EM algorithm BIBREF19 on our Model, treating $D$ as observed data and $C$ as latent variables. We run EM with 10 iterations and select the parameters achieving the best performance on the development data. Each iteration takes around 12 hours with 10 CPUs parallelly. The best parameters appear at around the 5th iteration, according to our experiments.The detailed derivation of the learning algorithm is shown in Appendix A. The pseudo-code is shown is Algorithm \"Resolution Mode Variables\" . We use uniform initialization for all the parameters in our model.", "Several previous work has attempted to use EM for entity coreference resolution. cherry-bergsma:2005 and charniak-elsner:2009 applied EM for pronoun anaphora resolution. ng:2008:EMNLP probabilistically induced coreference partitions via EM clustering. Recently, moosavi2014 proposed an unsupervised model utilizing the most informative relations and achieved competitive performance with the Stanford system." ], [ "The basic rules we used to detect mentions are similar to those of Lee:2013:CL, except that their system uses a set of filtering rules designed to discard instances of pleonastic it, partitives, certain quantified noun phrases and other spurious mentions. Our system keeps partitives, quantified noun phrases and bare NP mentions, but discards pleonastic it and other spurious mentions." ], [ "Datasets. Due to the availability of readily parsed data, we select the APW and NYT sections of Gigaword Corpus (years 1994-2010) BIBREF20 to train the model. 
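Before the data preprocessing details that follow, it is worth tying together the pieces introduced above: the resolution mode variable $\pi _j$, the mode-conditioned mention-pair probability table $t$, and the position distribution $q$ used only in the attribute-matching mode. The sketch below is a minimal illustration of the resulting antecedent ranking, not the authors' implementation; mentions are represented as hashable feature values (the paper uses the feature set of Table 1), the mode assignment is assumed to have been produced by the sieve-based rules listed earlier, and the small probability floor for unseen pairs is purely illustrative.

```python
import math

def rank_antecedents(mentions, modes, t, q):
    """Greedy inference: c*_j = argmax_k P(m_j | m_k, pi_j) * P(k | pi_j, j).

    mentions: list of feature representations; mentions[0] is the artificial root m_0.
    modes:    modes[j] in {"str", "prec", "attr"} for each mention j >= 1.
    t, q:     learned parameter tables, e.g. t[(m_j, m_k, mode)] and q[(k, j, "attr")].
    """
    assignments = {}
    for j in range(1, len(mentions)):
        mode = modes[j]
        best_k, best_score = 0, -math.inf
        for k in range(j):  # candidate antecedents; k = 0 starts a new chain
            emission = t.get((mentions[j], mentions[k], mode), 1e-12)
            # Learned position prior only in attribute-matching mode, uniform otherwise.
            prior = q.get((k, j, mode), 1.0 / j) if mode == "attr" else 1.0 / j
            score = emission * prior
            if score > best_score:
                best_k, best_score = k, score
        assignments[j] = best_k  # 0 means "start a new coreference chain"
    return assignments
```

Note that in the str and prec modes the uniform prior is constant across candidates, so the ranking there depends on the mention-pair probabilities alone.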
Following previous work BIBREF3 , we remove duplicated documents and the documents which include fewer than 3 sentences. The development and test data are the English data from the CoNLL-2012 shared task BIBREF0 , which is derived from the OntoNotes corpus BIBREF21 . The corpora statistics are shown in Table 2 . Our system is evaluated with automatically extracted mentions on the version of the data with automatic preprocessing information (e.g., predicted parse trees).", "Evaluation Metrics. We evaluate our model on three measures widely used in the literature: MUC BIBREF22 , B $^{3}$ BIBREF23 , and Entity-based CEAF (CEAF $_e$ ) BIBREF24 . In addition, we also report results on another two popular metrics: Mention-based CEAF (CEAF $_m$ ) and BLANC BIBREF25 . All the results are given by the latest version of CoNLL-2012 scorer " ], [ "Table 3 illustrates the results of our model together as baseline with two deterministic systems, namely Stanford: the Stanford system BIBREF10 and Multigraph: the unsupervised multigraph system BIBREF26 , and one unsupervised system, namely MIR: the unsupervised system using most informative relations BIBREF27 . Our model outperforms the three baseline systems on all the evaluation metrics. Specifically, our model achieves improvements of 2.93% and 3.01% on CoNLL F1 score over the Stanford system, the winner of the CoNLL 2011 shared task, on the CoNLL 2012 development and test sets, respectively. The improvements on CoNLL F1 score over the Multigraph model are 1.41% and 1.77% on the development and test sets, respectively. Comparing with the MIR model, we obtain significant improvements of 2.62% and 3.02% on CoNLL F1 score.", "To make a thorough empirical comparison with previous studies, Table 3 (below the dashed line) also shows the results of some state-of-the-art supervised coreference resolution systems — IMS: the second best system in the CoNLL 2012 shared task BIBREF28 ; Latent-Tree: the latent tree model BIBREF29 obtaining the best results in the shared task; Berkeley: the Berkeley system with the final feature set BIBREF12 ; LaSO: the structured perceptron system with non-local features BIBREF30 ; Latent-Strc: the latent structure system BIBREF31 ; Model-Stack: the entity-centric system with model stacking BIBREF32 ; and Non-Linear: the non-linear mention-ranking model with feature representations BIBREF33 . Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 ." ], [ "We proposed a new generative, unsupervised ranking model for entity coreference resolution into which we introduced resolution mode variables to distinguish mentions resolved by different categories of information. Experimental results on the data from CoNLL-2012 shared task show that our system significantly improves the accuracy on different evaluation metrics over the baseline systems.", "One possible direction for future work is to differentiate more resolution modes. Another one is to add more precise or even event-based features to improve the model's performance." ], [ "This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.", "Appendix A. 
Derivation of Model Learning", "Formally, we iteratively estimate the model parameters $\\theta $ , employing the following EM algorithm:", "For simplicity, we denote: $\n{\\small \\begin{array}{rcl}\nP(C|D; \\theta ) & = & \\tilde{P}(C|D) \\\\\nP(C|D; \\theta ^{\\prime }) & = & P(C|D)\n\\end{array}}\n$ ", "In addition, we use $\\tau (\\pi _j|j)$ to denote the probability $P(\\pi _j|j)$ which is uniform distribution in our model. Moreover, we use the following notation for convenience: $\n{\\small \\theta (m_j, m_k, j, k, \\pi _j) = t(m_j|m_k, \\pi _j) q(k|\\pi _j, j) \\tau (\\pi _j|j)\n}\n$ ", "Then, we have $\n{\\scriptsize {\n\\begin{array}{rl}\n& E_{\\tilde{P}(c|D)} [\\log P(D, C)] \\\\\n= & \\sum \\limits _{C} \\tilde{P}(C|D) \\log P(D, C) \\\\\n= & \\sum \\limits _{C} \\tilde{P}(C|D) \\big (\\sum \\limits _{j=1}^{n} \\log t(m_j|m_{c_j}, \\pi _j) + \\log q(c_j|\\pi _j, j) + \\log \\tau (\\pi _j|j) \\big ) \\\\\n= & \\sum \\limits _{j=1}^{n} \\sum \\limits _{k=0}^{j-1} L_{jk} \\big (\\log t(m_j|m_k, \\pi _j) + \\log q(k|\\pi _j, j) + \\log \\tau (\\pi _j|j) \\big )\n\\end{array}}}\n$ ", "Then the parameters $t$ and $q$ that maximize $E_{\\tilde{P}(c|D)} [\\log P(D, C)]$ satisfy that $\n{\\small \\begin{array}{rcl}\nt(m_j|m_k, \\pi _j) & = & \\frac{L_{jk}}{\\sum \\limits _{i = 1}^{n} L_{ik}} \\\\\nq(k|\\pi _j, j) & = & \\frac{L_{jk}}{\\sum \\limits _{i = 0}^{j-1} L_{ji}}\n\\end{array}}\n$ ", "where $L_{jk}$ can be calculated by $\n{\\small \\begin{array}{rcl}\nL_{jk} & = & \\sum \\limits _{C, c_j=k} \\tilde{P}(C|D) = \\frac{\\sum \\limits _{C, c_j=k} \\tilde{P}(C, D)}{\\sum \\limits _{C} \\tilde{P}(C, D)} \\\\\n& = & \\frac{\\sum \\limits _{C, c_j=k}\\prod \\limits _{i = 1}^{n}\\tilde{\\theta }(m_i, m_{c_i}, c_i, i, \\pi _i)}{\\sum \\limits _{C}\\prod \\limits _{i = 1}^{n}\\tilde{\\theta }(m_i, m_{c_i}, c_i, i, \\pi _i)} \\\\\n& = & \\frac{\\tilde{\\theta }(m_j, m_k, k, j, \\pi _j)\\sum \\limits _{C(-j)}\\tilde{P}(C(-j)|D)}{\\sum \\limits _{i=0}^{j-1}\\tilde{\\theta }(m_j, m_i, i, j, \\pi _j)\\sum \\limits _{C(-j)}\\tilde{P}(C(-j)|D)} \\\\\n& = & \\frac{\\tilde{\\theta }(m_j, m_k, k, j, \\pi _j)}{\\sum \\limits _{i=0}^{j-1}\\tilde{\\theta }(m_j, m_i, i, j, \\pi _j)} \\\\\n& = & \\frac{\\tilde{t}(m_j|m_k, \\pi _j) \\tilde{q}(k|\\pi _j, j) \\tilde{\\tau }(\\pi _j|j)}{\\sum \\limits _{i=0}^{j-1}\\tilde{t}(m_j|m_i, \\pi _j) \\tilde{q}(i|\\pi _j, j) \\tilde{\\tau }(\\pi _j|j)} \\\\\n& = & \\frac{\\tilde{t}(m_j|m_k, \\pi _j) \\tilde{q}(k|\\pi _j, j)}{\\sum \\limits _{i=0}^{j-1}\\tilde{t}(m_j|m_i, \\pi _j) \\tilde{q}(i|\\pi _j, j)}\n\\end{array}}\n$ ", "where $C(-j) = \\lbrace c_1, \\ldots , c_{j-1}, c_{j+1}, \\ldots , c_{n}\\rbrace $ . The above derivations correspond to the learning algorithm in Algorithm \"Resolution Mode Variables\" . " ] ] }
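The E-step posterior $L_{jk}$ and the M-step normalizations derived above map directly onto code. The following is a minimal sketch of one EM pass under the stated independence assumptions; it is not the authors' (parallelized) implementation, and it reuses the illustrative mention and mode representation from the earlier sketch.

```python
from collections import defaultdict

def em_iteration(corpus, t, q):
    """One EM pass. corpus: iterable of (mentions, modes) documents; t, q: current parameters."""
    c_pair = defaultdict(float)  # c(m_j, m_k, pi)
    c_ante = defaultdict(float)  # c(m_k, pi)
    c_pos = defaultdict(float)   # c(k, j, pi)
    c_j = defaultdict(float)     # c(j, pi)

    for mentions, modes in corpus:
        for j in range(1, len(mentions)):
            pi = modes[j]
            # E-step: posterior L_jk over candidate antecedents k = 0 .. j-1.
            scores = [t.get((mentions[j], mentions[k], pi), 1e-12)
                      * (q.get((k, j, pi), 1.0 / j) if pi == "attr" else 1.0 / j)
                      for k in range(j)]
            z = sum(scores)
            for k in range(j):
                L_jk = scores[k] / z
                c_pair[(mentions[j], mentions[k], pi)] += L_jk
                c_ante[(mentions[k], pi)] += L_jk
                c_pos[(k, j, pi)] += L_jk
                c_j[(j, pi)] += L_jk

    # M-step: renormalize the expected counts.
    new_t = {key: cnt / c_ante[(key[1], key[2])] for key, cnt in c_pair.items()}
    new_q = {key: cnt / c_j[(key[1], key[2])] for key, cnt in c_pos.items()}
    return new_t, new_q
```

The paper runs ten such iterations from a uniform initialization and keeps the parameters of the iteration that performs best on the development data.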
{ "question": [ "Are resolution mode variables hand crafted?", "What are resolution model variables?", "Is the model presented in the paper state of the art?" ], "question_id": [ "80de3baf97a55ea33e0fe0cafa6f6221ba347d0a", "f5707610dc8ae2a3dc23aec63d4afa4b40b7ec1e", "e76139c63da0f861c097466983fbe0c94d1d9810" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "familiar", "familiar", "familiar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "coreference", "coreference", "coreference" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:", "$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .", "$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.", "$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions." ], "highlighted_evidence": [ "Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:\n\n$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .\n\n$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.\n\n$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions." 
] } ], "annotation_id": [ "f4cf4054065d62aef6d53f8571b081345695a0b6" ], "worker_id": [ "f840a836eee0180d2c976457f8b3052d8e78050c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Variables in the set {str, prec, attr} indicating in which mode the mention should be resolved.", "evidence": [ "According to previous work BIBREF17 , BIBREF18 , BIBREF1 , antecedents are resolved by different categories of information for different mentions. For example, the Stanford system BIBREF1 uses string-matching sieves to link two mentions with similar text and precise-construct sieve to link two mentions which satisfy special syntactic or semantic relations such as apposition or acronym. Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:", "$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .", "$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.", "$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions." ], "highlighted_evidence": [ "Motivated by this, we introduce resolution mode variables $\\Pi = \\lbrace \\pi _1, \\ldots , \\pi _n\\rbrace $ , where for each mention $j$ the variable $\\pi _j \\in \\lbrace str, prec, attr\\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\\Pi $ is deterministic when $D$ is given (i.e. $P(\\Pi |D)$ is a point distribution). We determine $\\pi _j$ for each mention $m_j$ in the following way:\n\n$\\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .\n\n$\\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.\n\n$\\pi _j = attr$ , if there is no mention $m_i, i < j$ satisfies the above two conditions." 
] } ], "annotation_id": [ "cfe30b450534f64f88a0f4a8eb5ec6c9697074e1" ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "No, supervised models perform better for this task.", "evidence": [ "To make a thorough empirical comparison with previous studies, Table 3 (below the dashed line) also shows the results of some state-of-the-art supervised coreference resolution systems — IMS: the second best system in the CoNLL 2012 shared task BIBREF28 ; Latent-Tree: the latent tree model BIBREF29 obtaining the best results in the shared task; Berkeley: the Berkeley system with the final feature set BIBREF12 ; LaSO: the structured perceptron system with non-local features BIBREF30 ; Latent-Strc: the latent structure system BIBREF31 ; Model-Stack: the entity-centric system with model stacking BIBREF32 ; and Non-Linear: the non-linear mention-ranking model with feature representations BIBREF33 . Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 ." ], "highlighted_evidence": [ "Our unsupervised ranking model outperforms the supervised IMS system by 1.02% on the CoNLL F1 score, and achieves competitive performance with the latent tree model. Moreover, our approach considerably narrows the gap to other supervised systems listed in Table 3 ." ] } ], "annotation_id": [ "0d325f52efb19aff203c0364700f5b861a17176a" ], "worker_id": [ "c7d4a630661cd719ea504dba56393f78278b296b" ] } ] }
{ "caption": [ "Table 1: Feature set for representing a mention under different resolution modes. The Distance feature is for parameter q, while all other features are for parameter t.", "Table 2: Corpora statistics. “ON-Dev” and “ON-Test” are the development and testing sets of the OntoNotes corpus.", "Table 3: F1 scores of different evaluation metrics for our model, together with two deterministic systems and one unsupervised system as baseline (above the dashed line) and seven supervised systems (below the dashed line) for comparison on CoNLL 2012 development and test datasets." ], "file": [ "3-Table1-1.png", "3-Table2-1.png", "4-Table3-1.png" ] }
1709.10217
The First Evaluation of Chinese Human-Computer Dialogue Technology
In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. We detail the evaluation scheme, the tasks, the metrics, and how the data for training, development and test are collected and annotated. The evaluation includes two tasks, namely user intent classification and online testing of task-oriented dialogue. To account for the different sources of data available for training and development, the first task is further divided into two sub-tasks. Both tasks come from real problems encountered when using applications developed by industry. The evaluation data is provided by the iFLYTEK Corporation. We also publish the evaluation results to present the current performance of the participants on the two tasks of Chinese human-computer dialogue technology. Moreover, we analyze the existing problems of human-computer dialogue as well as the evaluation scheme itself.
{ "section_name": [ "Introduction", "The First Evaluation of Chinese Human-Computer Dialogue Technology", "Task 1: User Intent Classification", "Task 2: Online Testing of Task-oriented Dialogue", "Evaluation Data", "Evaluation Results", "Conclusion", "Acknowledgements" ], "paragraphs": [ [ "Recently, human-computer dialogue has been emerged as a hot topic, which has attracted the attention of both academia and industry. In research, the natural language understanding (NLU), dialogue management (DM) and natural language generation (NLG) have been promoted by the technologies of big data and deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Following the development of machine reading comprehension BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , the NLU technology has made great progress. The development of DM technology is from rule-based approach and supervised learning based approach to reinforcement learning based approach BIBREF15 . The NLG technology is through pattern-based approach, sentence planning approach and end-to-end deep learning approach BIBREF16 , BIBREF17 , BIBREF18 . In application, there are massive products that are based on the technology of human-computer dialogue, such as Apple Siri, Amazon Echo, Microsoft Cortana, Facebook Messenger and Google Allo etc.", "Although the blooming of human-computer dialogue technology in both academia and industry, how to evaluate a dialogue system, especially an open domain chit-chat system, is still an open question. Figure FIGREF6 presents a brief comparison of the open domain chit-chat system and the task-oriented dialogue system.", "From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue.", "To promote the development of the evaluation technology for dialogue systems, especially considering the language characteristics of Chinese, we organize the first evaluation of Chinese human-computer dialogue technology. In this paper, we will present the evaluation scheme and the released corpus in detail.", "The rest of this paper is as follows. In Section 2, we will briefly introduce the first evaluation of Chinese human-computer dialogue technology, which includes the descriptions and the evaluation metrics of the two tasks. We then present the evaluation data and final results in Section 3 and 4 respectively, following the conclusion and acknowledgements in the last two sections." 
], [ "The First Evaluation of Chinese Human-Computer Dialogue Technology includes two tasks, namely user intent classification and online testing of task-oriented dialogue." ], [ "In using of human-computer dialogue based applications, human may have various intent, for example, chit-chatting, asking questions, booking air tickets, inquiring weather, etc. Therefore, after receiving an input message (text or ASR result) from a user, the first step is to classify the user intent into a specific domain for further processing. Table TABREF7 shows an example of user intent with category information.", "In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance.", "It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric." ], [ "For the task-oriented dialogue systems, the best way for evaluation is to use the online human-computer dialogue. After finishing an online human-computer dialogue with a dialogue system, the human then manually evaluate the system by using the metrics of user satisfaction degree, dialogue fluency, etc. Therefore, in the task 2, we use an online testing of task-oriented dialogue for dialogue systems. For a human tester, we will give a complete intent with an initial sentence, which is used to start the online human-computer dialogue. Table TABREF12 shows an example of the task-oriented human-computer dialogue. Here “U” and “R” denote user and robot respectively. The complete intent is as following:", "“查询明天从哈尔滨到北京的晚间软卧火车票,上下铺均可。", "Inquire the soft berth ticket at tomorrow evening, from Harbin to Beijing, either upper or lower berth is okay.”", "In task 2, there are three categories. They are “air tickets”, “train tickets” and “hotel”. Correspondingly, there are three type of tasks. All the tasks are in the scope of the three categories. However, a complete user intent may include more than one task. For example, a user may first inquiring the air tickets. However, due to the high price, the user decide to buy a train tickets. Furthermore, the user may also need to book a hotel room at the destination.", "We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. 
There are five evaluation metrics for task 2 as following.", "Task completion ratio: The number of completed tasks divided by the number of total tasks.", "User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively.", "Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency.", "Number of dialogue turns: The number of utterances in a task-completed dialogue.", "Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide.", "For the number of dialogue turns, we have a penalty rule that for a dialogue task, if the system cannot return the result (or accomplish the task) in 30 turns, the dialogue task is end by force. Meanwhile, if a system cannot accomplish a task in less than 30 dialogue turns, the number of dialogue turns is set to 30." ], [ "In the evaluation, all the data for training, developing and test is provided by the iFLYTEK Corporation.", "For task 1, as the descriptions in Section SECREF10 , the two top categories are chit-chat (chat in Table TABREF13 ) and task-oriented dialogue. Meanwhile, the task-oriented dialogue also includes 30 sub categories. Actually, the task 1 is a 31 categories classification task. In task 1, besides the data we released for training and developing, we also allow the participants to extend the training and developing corpus. Hence, there are two sub tasks for the task 1. One is closed test, which means the participants can only use the released data for training and developing. The other is open test, which allows the participants to explore external corpus for training and developing. Note that there is a same test set for both the closed test and the open test.", "For task 2, we release 11 examples of the complete user intent and 3 data file, which includes about one month of flight, hotel and train information, for participants to build their dialogue systems. The current date for online test is set to April 18, 2017. If the tester says “today”, the systems developed by the participants should understand that he/she indicates the date of April 18, 2017." ], [ "There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper.", "Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2." ], [ "In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology. In detail, we first present the two tasks of the evaluation as well as the evaluation metrics. We then describe the released data for evaluation. Finally, we also show the evaluation results of the two tasks. As the evaluation data is provided by the iFLYTEK Corporation from their real online applications, we believe that the released data will further promote the research of human-computer dialogue and fill the blank of the data on the two tasks." 
], [ "We would like to thank the Social Media Processing (SMP) committee of Chinese Information Processing Society of China. We thank all the participants of the first evaluation of Chinese human-computer dialogue technology. We also thank the testers from the voice resource department of the iFLYTEK Corporation for their effort to the online real-time human-computer dialogue test and offline dialogue evaluation. We thank Lingzhi Li, Yangzi Zhang, Jiaqi Zhu and Xiaoming Shi from the research center for social computing and information retrieval for their support on the data annotation, establishing the system testing environment and the communication to the participants and help connect their systems to the testing environment." ] ] }
{ "question": [ "What problems are found with the evaluation scheme?", "How is the data annotated?", "What collection steps do they mention?", "How many intents were classified?", "What was the result of the highest performing system?", "What metrics are used in the evaluation?" ], "question_id": [ "b8b588ca1e876b3094ae561a875dd949c8965b2e", "2ec640e6b4f1ebc158d13ee6589778b4c08a04c8", "ab0bb4d0a9796416d3d7ceba0ba9ab50c964e9d6", "0460019eb2186aef835f7852fc445b037bd43bb7", "96c09ece36a992762860cde4c110f1653c110d96", "a9cc4b17063711c8606b8fc1c5eaf057b317a0c9" ], "nlp_background": [ "", "", "", "", "", "" ], "topic_background": [ "", "", "", "", "", "" ], "paper_read": [ "", "", "", "", "", "" ], "search_query": [ "", "", "", "", "", "" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue" ], "yes_no": null, "free_form_answer": "", "evidence": [ "From Figure FIGREF6 , we can see that it is quite different between the open domain chit-chat system and the task-oriented dialogue system. For the open domain chit-chat system, as it has no exact goal in a conversation, given an input message, the responses can be various. For example, for the input message “How is it going today?”, the responses can be “I'm fine!”, “Not bad.”, “I feel so depressed!”, “What a bad day!”, etc. There may be infinite number of responses for an open domain messages. Hence, it is difficult to construct a gold standard (usually a reference set) to evaluate a response which is generated by an open domain chit-chat system. For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue." ], "highlighted_evidence": [ "For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue." 
] } ], "annotation_id": [ "38e82d8bcf6c074c9c9690831b23216b9e65f5e8" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "9db4359f2b369a8c04c24e66e99cfcf8d9a8b0c2" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "6227c4c03516328f445fb939a101273c7ca1450d" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "two" ], "yes_no": null, "free_form_answer": "", "evidence": [ "In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue. The task-oriented dialogue also includes 30 sub categories. In this evaluation, we only consider to classify the user intent in single utterance." ], "highlighted_evidence": [ "In task 1, there are two top categories, namely, chit-chat and task-oriented dialogue." ] } ], "annotation_id": [ "785eb17b1dacacf3f1abf57eb7ab48225281bd10" ], "worker_id": [ "c1018a31c3272ce74964a3280069f62f314a1a58" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "For task 1 best F1 score was 0.9391 on closed and 0.9414 on open test.\nFor task2 best result had: Ratio 0.3175 , Satisfaction 64.53, Fluency 0, Turns -1 and Guide 2", "evidence": [ "There are 74 participants who are signing up the evaluation. The final number of participants is 28 and the number of submitted systems is 43. Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively. Due to the space limitation, we only present the top 5 results of task 1. We will add the complete lists of the evaluation results in the version of full paper.", "Note that for task 2, there are 7 submitted systems. However, only 4 systems can provide correct results or be connected in a right way at the test phase. Therefore, Table TABREF16 shows the complete results of the task 2.", "FLOAT SELECTED: Table 4: Top 5 results of the closed test of the task 1.", "FLOAT SELECTED: Table 5: Top 5 results of the open test of the task 1.", "FLOAT SELECTED: Table 6: The results of the task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out of scope input respectively." ], "highlighted_evidence": [ "Table TABREF14 and TABREF15 show the evaluation results of the closed test and open test of the task 1 respectively.", "Therefore, Table TABREF16 shows the complete results of the task 2.", "FLOAT SELECTED: Table 4: Top 5 results of the closed test of the task 1.", "FLOAT SELECTED: Table 5: Top 5 results of the open test of the task 1.", "FLOAT SELECTED: Table 6: The results of the task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out of scope input respectively." 
] } ], "annotation_id": [ "0d7edc2e80198c1a663b10d64a1cb930426b3f41" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "For task 1, we use F1-score", "Task completion ratio", "User satisfaction degree", "Response fluency", "Number of dialogue turns", "Guidance ability for out of scope input" ], "yes_no": null, "free_form_answer": "", "evidence": [ "It is worth noting that besides the released data for training and developing, we also allow to collect external data for training and developing. To considering that, the task 1 is indeed includes two sub tasks. One is a closed evaluation, in which only the released data can be used for training and developing. The other is an open evaluation that allow to collect external data for training and developing. For task 1, we use F1-score as evaluation metric.", "We use manual evaluation for task 2. For each system and each complete user intent, the initial sentence, which is used to start the dialogue, is the same. The tester then begin to converse to each system. A dialogue is finished if the system successfully returns the information which the user inquires or the number of dialogue turns is larger than 30 for a single task. For building the dialogue systems of participants, we release an example set of complete user intent and three data files of flight, train and hotel in JSON format. There are five evaluation metrics for task 2 as following.", "Task completion ratio: The number of completed tasks divided by the number of total tasks.", "User satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively.", "Response fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency.", "Number of dialogue turns: The number of utterances in a task-completed dialogue.", "Guidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide." ], "highlighted_evidence": [ "For task 1, we use F1-score as evaluation metric.", "We use manual evaluation for task 2.", "There are five evaluation metrics for task 2 as following.\n\nTask completion ratio: The number of completed tasks divided by the number of total tasks.\n\nUser satisfaction degree: There are five scores -2, -1, 0, 1, 2, which denote very dissatisfied, dissatisfied, neutral, satisfied and very satisfied, respectively.\n\nResponse fluency: There are three scores -1, 0, 1, which indicate nonfluency, neutral, fluency.\n\nNumber of dialogue turns: The number of utterances in a task-completed dialogue.\n\nGuidance ability for out of scope input: There are two scores 0, 1, which represent able to guide or unable to guide." ] } ], "annotation_id": [ "4cd97a7c1f31679ac84b6c96290bfbac66d90c41" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: A brief comparison of the open domain chit-chat system and the task-oriented dialogue system.", "Table 1: An example of user intent with category information.", "Table 2: An example of the task-oriented human-computer dialogue.", "Table 3: The statistics of the released data for task 1.", "Table 4: Top 5 results of the closed test of the task 1.", "Table 5: Top 5 results of the open test of the task 1.", "Table 6: The results of the task 2. Ratio, Satisfaction, Fluency, Turns and Guide indicate the task completion ratio, user satisfaction degree, response fluency, number of dialogue turns and guidance ability for out of scope input respectively." ], "file": [ "1-Figure1-1.png", "2-Table1-1.png", "2-Table2-1.png", "4-Table3-1.png", "4-Table4-1.png", "4-Table5-1.png", "5-Table6-1.png" ] }
1901.02262
Multi-style Generative Reading Comprehension
This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics. First, unlike most studies on RC that have focused on extracting an answer span from the provided passages, our model instead focuses on generating a summary from the question and multiple passages. This serves to cover various answer styles required for real-world applications. Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a model to improve the NLG capability for all styles involved. This also enables our model to give an answer in the target style. Experiments show that our model achieves state-of-the-art performance on the Q&A task and the Q&A + NLG task of MS MARCO 2.1 and the summary task of NarrativeQA. We observe that the transfer of the style-independent NLG capability to the target style is the key to its success.
{ "section_name": [ "Introduction", "Problem Formulation", "Proposed Model", "Question-Passages Reader", "Passage Ranker", "Answer Possibility Classifier", "Answer Sentence Decoder", "Loss Function", "Setup", "Results", "Conclusion" ], "paragraphs": [ [ "Question answering has been a long-standing research problem. Recently, reading comprehension (RC), a challenge to answer a question given textual evidence provided in a document set, has received much attention. Here, current mainstream studies have treated RC as a process of extracting an answer span from one passage BIBREF0 , BIBREF1 or multiple passages BIBREF2 , which is usually done by predicting the start and end positions of the answer BIBREF3 , BIBREF4 .", "The demand for answering questions in natural language is increasing rapidly, and this has led to the development of smart devices such as Siri and Alexa. However, in comparison with answer span extraction, the natural language generation (NLG) ability for RC has been less studied. While datasets such as MS MARCO BIBREF5 have been proposed for providing abstractive answers in natural language, the state-of-the-art methods BIBREF6 , BIBREF7 are based on answer span extraction, even for the datasets. Generative models such as S-Net BIBREF8 suffer from a dearth of training data to cover open-domain questions.", "Moreover, to satisfy various information needs, intelligent agents should be capable of answering one question in multiple styles, such as concise phrases that do not contain the context of the question and well-formed sentences that make sense even without the context of the question. These capabilities complement each other; however, the methods used in previous studies cannot utilize and control different answer styles within a model.", "In this study, we propose a generative model, called Masque, for multi-passage RC. On the MS MARCO 2.1 dataset, Masque achieves state-of-the-art performance on the dataset's two tasks, Q&A and NLG, with different answer styles. The main contributions of this study are that our model enables the following two abilities." ], [ "The task considered in this paper, is defined as:", "Problem 1 Given a question with $J$ words $x^q = \\lbrace x^q_1, \\ldots , x^q_J\\rbrace $ , a set of $K$ passages, where each $k$ -th passage is composed of $L$ words $x^{p_k} = \\lbrace x^{p_k}_1, \\ldots , x^{p_k}_{L}\\rbrace $ , and an answer style $s$ , an RC system outputs an answer $y = \\lbrace y_1, \\ldots , y_T \\rbrace $ conditioned on the style.", "In short, for inference, given a set of 3-tuples $(x^q, \\lbrace x^{p_k}\\rbrace , s)$ , the system predicts $P(y)$ . The training data is a set of 6-tuples: $(x^q, \\lbrace x^{p_k}\\rbrace , s, y, a, \\lbrace r^{p_k}\\rbrace )$ , where $a$ is 1 if the question is answerable with the provided passages and 0 otherwise, and $r^{p_k}$ is 1 if the $k$ -th passage is required to formulate the answer and 0 otherwise." ], [ "Our proposed model, Masque, is based on multi-source abstractive summarization; the answer our model generates can be viewed as a summary from the question and multiple passages. It is also style-controllable; one model can generate the answer with the target style.", "Masque directly models the conditional probability $p(y|x^q, \\lbrace x^{p_k}\\rbrace , s)$ . In addition to multi-style learning, it considers passage ranking and answer possibility classification together as multi-task learning in order to improve accuracy. Figure 2 shows the model architecture. 
It consists of the following modules.", " 1 The question-passages reader (§ \"Question-Passages Reader\" ) models interactions between the question and passages.", " 2 The passage ranker (§ \"Passage Ranker\" ) finds relevant passages to the question.", " 3 The answer possibility classifier (§ \"Answer Possibility Classifier\" ) identifies answerable questions.", " 4 The answer sentence decoder (§ \"Answer Sentence Decoder\" ) outputs a sequence of words conditioned on the style." ], [ "Given a question and passages, the question-passages reader matches them so that the interactions among the question (passage) words conditioned on the passages (question) can be captured.", "Let $x^q$ and $x^{p_k}$ represent one-hot vectors of words in the question and $k$ -th passage. First, this layer projects each of the one-hot vectors (of size $V$ ) into a $d_\\mathrm {word}$ -dimensional continuous vector space with a pre-trained weight matrix $W^e \\in \\mathbb {R}^{d_\\mathrm {word} \\times V}$ such as GloVe BIBREF15 . Next, it uses contextualized word representations, ELMo BIBREF16 , which is a character-level two-layer bidirectional language model pre-trained on a large-scale corpus. ELMo representations allow our model to use morphological clues to form robust representations for out-of-vocabulary words unseen in training. Then, the concatenation of the word and contextualized embedding vectors is passed to a two-layer highway network BIBREF17 that is shared for the question and passages.", "This layer uses a stack of Transformer blocks, which are shared for the question and passages, on top of the embeddings provided by the word embedding layer. The input of the first block is immediately mapped to a $d$ -dimensional vector by a linear transformation. The outputs of this layer are sequences of $d$ -dimensional vectors: $E^{p_k} \\in \\mathbb {R}^{d \\times L}$ for the $k$ -th passage and $E^q \\in \\mathbb {R}^{d \\times J}$ for the question.", "It consists of two sub-layers: a self-attention layer and a position-wise feed-forward network. For the self-attention layer, we adopt the multi-head attention mechanism defined in BIBREF12 . The feed-forward network consists of two linear transformations with a GELU BIBREF18 activation in between, following OpenAI GPT BIBREF19 . Each sub-layer is placed inside a residual block BIBREF20 . For an input $x$ and a given sub-layer function $f$ , the output is $\\mathrm {LayerNorm}(f(x)+x)$ , where $\\mathrm {LayerNorm}$ indicates the layer normalization proposed in BIBREF21 . To facilitate these residual connections, all sub-layers produce outputs of dimension $d$ . Note that our model does not use any position embeddings because ELMo gives the positional information of the words in each sequence.", "This layer fuses information from the passages to the question as well as from the question to the passages in a dual mechanism.", "It first computes a similarity matrix $U^{p_k} \\in \\mathbb {R}^{L{\\times }J}$ between the question and $k$ -th passage, as is done in BIBREF22 , where ", "$$U^{p_k}_{lj} = {w^a}^\\top [ E^{p_k}_l; E^q_j; E^{p_k}_l \\odot E^q_j ]$$ (Eq. 15) ", " indicates the similarity between the $l$ -th word of the $k$ -th passage and the $j$ -th question word. $w^a \\in \\mathbb {R}^{3d}$ are learnable parameters. The $\\odot $ operator denotes the Hadamard product, and the $[;]$ operator means vector concatenation across the rows. 
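Since the dot product with $w^a$ in Eq. 15 distributes over the concatenation, the similarity matrix can be computed without materializing the $3d$-dimensional feature vector for every word pair. The NumPy sketch below shows this for a single passage; the decomposition is a common implementation device rather than something the paper prescribes.

```python
import numpy as np

def trilinear_similarity(E_p, E_q, w_a):
    """U[l, j] = w_a . [E_p[:, l]; E_q[:, j]; E_p[:, l] * E_q[:, j]].

    E_p: (d, L) passage encodings, E_q: (d, J) question encodings, w_a: (3d,) weights.
    """
    d = E_p.shape[0]
    w1, w2, w3 = w_a[:d], w_a[d:2 * d], w_a[2 * d:]
    term_p = (w1 @ E_p)[:, None]            # (L, 1): contribution of the passage word alone
    term_q = (w2 @ E_q)[None, :]            # (1, J): contribution of the question word alone
    term_pq = (E_p * w3[:, None]).T @ E_q   # (L, J): Hadamard interaction term
    return term_p + term_q + term_pq        # similarity matrix U of shape (L, J)

# Example shapes: d = 4, L = 6 passage words, J = 5 question words.
rng = np.random.default_rng(0)
U = trilinear_similarity(rng.standard_normal((4, 6)), rng.standard_normal((4, 5)), rng.standard_normal(12))
assert U.shape == (6, 5)
```

The row- and column-normalized attention matrices described next are simply softmaxes over the two axes of this $U^{p_k}$.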
Next, it obtains the row and column normalized similarity matrices $A^{p_k} = \\mathrm {softmax}_j({U^{p_k}}^\\top ) \\in \\mathbb {R}^{J\\times L}$ and $B^{p_k} = \\mathrm {softmax}_{l}(U^{p_k}) \\in \\mathbb {R}^{L \\times J}$ . We use DCN BIBREF23 as the dual attention mechanism to obtain question-to-passage representations $G^{q \\rightarrow p_k} \\in \\mathbb {R}^{5d \\times L}$ : ", "$$\\nonumber [E^{p_k}; \\bar{A}^{p_k}; \\bar{\\bar{A}}^{p_k}; E^{p_k} \\odot \\bar{A}^{p_k}; E^{p_k} \\odot \\bar{\\bar{A}}^{p_k}]$$ (Eq. 16) ", " and passage-to-question ones $G^{p \\rightarrow q} \\in \\mathbb {R}^{5d \\times J}$ : ", "$$\\begin{split}\n\\nonumber & [ E^{q} ; \\max _k(\\bar{B}^{p_k}); \\max _k(\\bar{\\bar{B}}^{p_k}); \\\\\n&\\hspace{10.0pt} E^{q} \\odot \\max _k(\\bar{B}^{p_k}); E^{q} \\odot \\max _k(\\bar{\\bar{B}}^{p_k}) ] \\mathrm {\\ \\ where}\n\\end{split}\\\\\n\\nonumber &\\bar{A}^{p_k} = E^q A^{p_k}\\in \\mathbb {R}^{d \\times L}, \\ \\bar{B}^{p_k} = E^{p_k} B^{p_k} \\in \\mathbb {R}^{d \\times J} \\\\\n\\nonumber &\\bar{\\bar{A}}^{p_k} = \\bar{B}^{p_k} A^{p_k} \\in \\mathbb {R}^{d \\times L}, \\ \\bar{\\bar{B}}^{p_k} = \\bar{A}^{p_k} B^{p_k} \\in \\mathbb {R}^{d \\times J}.$$ (Eq. 17) ", "This layer uses a stack of Transformer encoder blocks for question representations and obtains $M^q \\in \\mathbb {R}^{d \\times J}$ from $G^{p \\rightarrow q}$ . It also uses an another stack for passage representations and obtains $M^{p_k} \\in \\mathbb {R}^{d \\times L}$ from $G^{q \\rightarrow p_k}$ for each $k$ -th passage. The outputs of this layer, $M^q$ and $\\lbrace M^{p_k}\\rbrace $ , are passed on to the answer sentence decoder; the $\\lbrace M^{p_k}\\rbrace $ are also passed on to the passage ranker and answer possibility classifier." ], [ "The passage ranker maps the output of the modeling layer, $\\lbrace M^{p_k}\\rbrace $ , to the relevance score of each passage. To obtain a fixed-dimensional pooled representation of each passage sequence, this layer takes the output for the first passage word, $M^{p_k}_1$ , which corresponds to the beginning-of-sentence token. It calculates the relevance of each $k$ -th passage to the question as: ", "$$\\beta ^{p_k} = \\mathrm {sigmoid}({w^r}^\\top M^{p_k}_1),$$ (Eq. 20) ", " where $w^r \\in \\mathbb {R}^{d}$ are learnable parameters." ], [ "The answer possibility classifier maps the output of the modeling layer, $\\lbrace M^{p_k}\\rbrace $ , to the probability of the answer possibility. The classifier takes the output for the first word, $M^{p_k}_1$ , for all passages and concatenates them to obtain a fixed-dimensional representation. It calculates the answer possibility to the question as: ", "$$P(a) = \\mathrm {sigmoid}({w^c}^\\top [M^{p_1}_1; \\ldots ; M^{p_K}_1]),$$ (Eq. 22) ", " where $w^c \\in \\mathbb {R}^{Kd}$ are learnable parameters." ], [ "Given the outputs provided by the reader, the decoder generates a sequence of answer words one element at a time. It is auto-regressive BIBREF24 , consuming the previously generated words as additional input at each decoding step.", "Let $y = \\lbrace y_1, \\ldots , y_{T}\\rbrace $ represent one-hot vectors of words in the answer. 
This layer has the same components as the word embedding layer of the question-passages reader, except that it uses a unidirectional ELMo in order to ensure that the predictions for position $t$ depend only on the known outputs at positions less than $t$ .", "Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style.", "This layer uses a stack of Transformer decoder blocks on top of the embeddings provided by the word embedding layer. The input is immediately mapped to a $d$ -dimensional vector by a linear transformation, and the output of this layer is a sequence of $d$ -dimensional vectors: $\\lbrace s_1, \\ldots , s_T\\rbrace $ .", "In addition to the encoder block, this block consists of second and third sub-layers after the self-attention block and before the feed-forward network, as shown in Figure 2 . As in BIBREF12 , the self-attention sub-layer uses a sub-sequent mask to prevent positions from attending to subsequent positions. The second and third sub-layers perform the multi-head attention over $M^q$ and $M^{p_\\mathrm {all}}$ , respectively. The $M^{p_\\mathrm {all}}$ is the concatenated outputs of the encoder stack for the passages, ", "$$M^{p_\\mathrm {all}} = [M^{p_1}, \\ldots , M^{p_K}] \\in \\mathbb {R}^{d \\times KL}.$$ (Eq. 27) ", " The $[,]$ operator means vector concatenation across the columns. This attention for the concatenated passages enables our model to produce attention weights that are comparable between passages.", "Our extended mechanism allows both words to be generated from a fixed vocabulary and words to be copied from both the question and multiple passages. Figure 3 shows the overview.", "Let the extended vocabulary, $V_\\mathrm {ext}$ , be the union of the common words (a small subset of the full vocabulary, $V$ , defined by the reader-side word embedding matrix) and all words appearing in the input question and passages. $P^v$ denotes the probability distribution of the $t$ -th answer word, $y_t$ , over the extended vocabulary. It is defined as: ", "$$P^v(y_t) =\\mathrm {softmax}({W^2}^\\top (W^1 s_t + b^1)),$$ (Eq. 31) ", " where the output embedding $W^2 \\in \\mathbb {R}^{d_\\mathrm {word} \\times V_\\mathrm {ext}}$ is tied with the corresponding part of the input embedding BIBREF25 , and $W^1 \\in \\mathbb {R}^{d_\\mathrm {word} \\times d}$ and $b^1 \\in \\mathbb {R}^{d_\\mathrm {word}}$ are learnable parameters. $P^v(y_t)$ is zero if $y_t$ is an out-of-vocabulary word for $V$ .", "The copy mechanism used in the original pointer-generator is based on the attention weights of a single-layer attentional RNN decoder BIBREF9 . The attention weights in our decoder stack are the intermediate outputs in multi-head attentions and are not suitable for the copy mechanism. 
Therefore, our model also uses additive attentions for the question and multiple passages on top of the decoder stack.", "The layer takes $s_t$ as the query and outputs $\\alpha ^q_t \\in \\mathbb {R}^J$ ( $\\alpha ^p_t \\in \\mathbb {R}^{KL}$ ) as the attention weights and $c^q_t \\in \\mathbb {R}^d$ ( $c^p_t \\in \\mathbb {R}^d$ ) as the context vectors for the question (passages): ", "$$e^q_j &= {w^q}^\\top \\tanh (W^{qm} M_j^q + W^{qs} s_t +b^q), \\\\\n\\alpha ^q_t &= \\mathrm {softmax}(e^q), \\\\\nc^q_t &= \\textstyle \\sum _j \\alpha ^q_{tj} M_j^q, \\\\\ne^{p_k}_l &= {w^p}^\\top \\tanh (W^{pm} M_l^{p_k} + W^{ps} s_t +b^p), \\\\\n\\alpha ^p_t &= \\mathrm {softmax}([e^{p_1}; \\ldots ; e^{p_K}]), \\\\\nc^p_t &= \\textstyle \\sum _{l} \\alpha ^p_{tl} M^{p_\\mathrm {all}}_{l},$$ (Eq. 33) ", " where $w^q$ , $w^p \\in \\mathbb {R}^d$ , $W^{qm}$ , $W^{qs}$ , $W^{pm}$ , $W^{ps} \\in \\mathbb {R}^{d \\times d}$ , and $b^q$ , $b^p \\in \\mathbb {R}^d$ are learnable parameters.", " $P^q$ and $P^p$ are the copy distributions over the extended vocabulary, defined as: ", "$$P^q(y_t) &= \\textstyle \\sum _{j: x^q_j = y_t} \\alpha ^q_{tj}, \\\\\nP^p(y_t) &= \\textstyle \\sum _{l: x^{p_{k(l)}}_{l} = y_t} \\alpha ^p_{tl},$$ (Eq. 34) ", " where $k(l)$ means the passage index corresponding to the $l$ -th word in the concatenated passages.", "The final distribution of the $t$ -th answer word, $y_t$ , is defined as a mixture of the three distributions: ", "$$P(y_t) = \\lambda ^v P^v(y_t) + \\lambda ^q P^q(y_t) + \\lambda ^p P^p(y_t),$$ (Eq. 36) ", " where the mixture weights are given by ", "$$\\lambda ^v, \\lambda ^q, \\lambda ^p = \\mathrm {softmax}(W^m [s_t; c^q_t; c^p_t] + b^m).$$ (Eq. 37) ", " $W^m \\in \\mathbb {R}^{3 \\times 3d}$ , $b^m \\in \\mathbb {R}^3$ are learnable parameters.", "In order not to use words in irrelevant passages, our model introduces the concept of combined attention BIBREF26 . While the original technique combines the word and sentence level attentions, our model combines the passage-level relevance $\\beta ^{p_k}$ and word-level attentions $\\alpha ^p_t$ by using simple scalar multiplication and re-normalization. The updated word attention is: ", "$$\\alpha ^p_{tl} & := \\frac{\\alpha ^p_{tl} \\beta ^{p_{k(l)} }}{\\sum _{l^{\\prime }} \\alpha ^p_{tl^{\\prime }} \\beta ^{p_{k(l^{\\prime })}}}.$$ (Eq. 39) " ], [ "We define the training loss as the sum of losses in ", "$$L(\\theta ) = L_\\mathrm {dec} + \\gamma _\\mathrm {rank} L_\\mathrm {rank} + \\gamma _\\mathrm {cls} L_\\mathrm {cls}$$ (Eq. 41) ", " where $\\theta $ is the set of all learnable parameters, and $\\gamma _\\mathrm {rank}$ and $\\gamma _\\mathrm {cls}$ are balancing parameters.", "The loss of the decoder, $L_\\mathrm {dec}$ , is the negative log likelihood of the whole target answer sentence averaged over $N_\\mathrm {able}$ answerable examples: ", "$$L_\\mathrm {dec} = - \\frac{1}{N_\\mathrm {able}}\\sum _{(a,y)\\in \\mathcal {D}} \\frac{a}{T} \\sum _t \\log P(y_{t}),$$ (Eq. 
42) ", " where $\\mathcal {D}$ is the training dataset.", "The losses of the passage ranker, $L_\\mathrm {rank}$ , and the answer possibility classifier, $L_\\mathrm {cls}$ , are the binary cross entropy between the true and predicted values averaged over all $N$ examples: ", "$$L_\\mathrm {rank} = - \\frac{1}{NK} \\sum _k \\sum _{r^{p_k}\\in \\mathcal {D}}\n\\biggl (\n\\begin{split}\n&r^{p_k} \\log \\beta ^{p_k} + \\\\\n&(1-r^{p_k}) \\log (1-\\beta ^{p_k})\n\\end{split}\n\\biggr ),\\\\\nL_\\mathrm {cls} = - \\frac{1}{N} \\sum _{a \\in \\mathcal {D}}\n\\biggl (\n\\begin{split}\n&a \\log P(a) + \\\\\n&(1-a) \\log (1-P(a))\n\\end{split}\n\\biggr ).$$ (Eq. 43) " ], [ "We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL.", "We trained our model on a machine with eight NVIDIA P100 GPUs. Our model was jointly trained with the two answer styles in the ALL set for a total of eight epochs with a batch size of 80. The training took roughly six days. The ensemble model consists of six training runs with the identical architecture and hyperparameters. The hidden size $d$ was 304, and the number of attention heads was 8. The inner state size of the feed-forward networks was 256. The numbers of shared encoding blocks, modeling blocks for question, modeling blocks for passages, and decoder blocks were 3, 2, 5, and 8, respectively. We used the pre-trained uncased 300-dimensional GloVe BIBREF15 and the original 512-dimensional ELMo BIBREF16 . We used the spaCy tokenizer, and all words were lowercased except the input for ELMo. The number of common words in $V_\\mathrm {ext}$ was 5,000.", "We used the Adam optimization BIBREF27 with $\\beta _1 = 0.9$ , $\\beta _2 = 0.999$ , and $\\epsilon = 10^{-8}$ . Weights were initialized using $N(0, 0.02)$ , except that the biases of all the linear transformations were initialized with zero vectors. The learning rate was increased linearly from zero to $2.5 \\times 10^{-4}$ in the first 2,000 steps and annealed to 0 using a cosine schedule. All parameter gradients were clipped to a maximum norm of 1. An exponential moving average was applied to all trainable variables with a decay rate 0.9995. The balancing factors of joint learning, $\\lambda _\\mathrm {rank}$ and $\\lambda _\\mathrm {cls}$ , were set to 0.5 and 0.1.", "We used a modified version of the L $_2$ regularization proposed in BIBREF28 , with $w = 0.01$ . We additionally used a dropout BIBREF29 rate of 0.3 for all highway networks and residual and scaled dot-product attention operations in the multi-head attention mechanism. We also used one-sided label smoothing BIBREF30 for the passage relevance and answer possibility labels. We smoothed only the positive labels to 0.9." 
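For illustration, the learning-rate schedule described in the optimization paragraph above (linear warm-up followed by cosine annealing) can be sketched as follows. This is a minimal Python sketch, not the authors' code; total_steps is an assumed value, since the excerpt only specifies the 2,000 warm-up steps and the 2.5e-4 peak rate.

import math

def learning_rate(step, peak_lr=2.5e-4, warmup_steps=2000, total_steps=100000):
    # linear increase from zero to the peak rate during warm-up
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # cosine annealing from the peak rate down to zero afterwards
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * min(1.0, progress)))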
], [ "Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 .", "Table 3 shows the results of the ablation test for our model (controlled with the NLG style) on the well-formed answers of the WFA dev. set. Our model, which was trained with the ALL set consisting of the two styles, outperformed the model trained with the WFA set consisting of the single style. Multi-style learning allowed our model to improve NLG performance by also using non-sentence answers.", "Table 3 shows that our model outperformed the model that used RNNs and self-attentions instead of Transformer blocks as in MCAN BIBREF11 . Our deep Transformer decoder captured the interaction among the question, the passages, and the answer better than a single-layer LSTM decoder.", "Table 3 shows that our model (jointly trained with the passage ranker and answer possibility classifier) outperformed the model that did not use the ranker and classifier. The joint learning has a regularization effect on the question-passages reader.", "We also confirmed that the gold passage ranker, which can predict passage relevances perfectly, improves RC performance significantly. Passage re-ranking will be a key to developing a system that can outperform humans.", "Table 4 shows the passage re-ranking performance for the ten given passages on the ANS dev. set. Our ranker improved the initial ranking provided by Bing by a significant margin. Also, the ranker shares the question-passages reader with the answer decoder, and this sharing contributed to the improvements over the ranker trained without the answer decoder. This result is similar to those reported in BIBREF33 . Moreover, the joint learning with the answer possibility classifier and multiple answer styles, which enables our model to learn from a larger number of data, improved the re-ranking.", "Figure 4 shows the precision-recall curve of answer possibility classification on the ALL dev. set, where the positive class is the answerable data. Our model identified the answerable questions well. The maximum $F_1$ score was 0.7893. This is the first report on answer possibility classification with MS MARCO 2.1.", "Figure 5 shows the lengths of the answers generated by our model, which are broken down by answer style and query type. The generated answers were relatively shorter than the reference answers but well controlled with the target style in every query type.", "Also, we should note that our model does not guarantee the consistency in terms of meaning across the answer styles. We randomly selected 100 questions and compared the answers our model generated with the NLG and Q&A styles. The consistency ratio was 0.81, where major errors were due to copying words from different parts of the passages and generating different words, especially yes/no, from a fixed vocabulary.", "Appendix \"Reading Comprehension Examples generated by Masque from MS MARCO 2.1\" shows examples of generated answers. We found (d) style errors; (e) yes/no classification errors; (f) copy errors with respect to numerical values; and (c,e) grammatical errors that were originally contained in the inputs." 
], [ "We believe our study makes two contributions to the study of multi-passage RC with NLG. Our model enables 1) multi-source abstractive summarization based RC and 2) style-controllable RC. The key strength of our model is its high accuracy of generating abstractive summaries from the question and passages; our model achieved state-of-the-art performance in terms of Rouge-L on the Q&A and NLG tasks of MS MARCO 2.1 that have different answer styles BIBREF5 .", "The styles considered in this paper are only related to the context of the question in the answer sentence; our model will be promising for controlling other styles such as length and speaking styles. Future work will involve exploring the potential of hybrid models combining extractive and abstractive approaches and improving the passage re-ranking and answerable question identification." ] ] }
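To make the decoder's output mixture concrete, the following is a minimal NumPy sketch (not the authors' implementation) of Equations 36, 37, and 39 above: the softmax-gated mixture of the generation distribution and the two copy distributions, and the combined attention that re-scales the word-level passage attention by passage relevance. Shapes and variable names are illustrative assumptions.

import numpy as np

def combined_attention(alpha_p, beta_per_word):
    # alpha_p: word-level attention over the concatenated passages, shape (K*L,)
    # beta_per_word: relevance of the passage each word belongs to, shape (K*L,)
    scaled = alpha_p * beta_per_word
    return scaled / scaled.sum()  # re-normalization of Eq. 39

def final_distribution(p_vocab, p_copy_q, p_copy_p, s_t, c_q, c_p, W_m, b_m):
    # mixture weights lambda^v, lambda^q, lambda^p (Eq. 37); W_m: (3, 3d), b_m: (3,)
    logits = W_m @ np.concatenate([s_t, c_q, c_p]) + b_m
    lam = np.exp(logits - logits.max())
    lam /= lam.sum()
    # mixture of the three distributions over the extended vocabulary (Eq. 36)
    return lam[0] * p_vocab + lam[1] * p_copy_q + lam[2] * p_copy_p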
{ "question": [ "How do they measure the quality of summaries?", "Does their model also take the expected answer style as input?", "What do they mean by answer styles?", "Is there exactly one \"answer style\" per dataset?", "What are the baselines that Masque is compared against?", "What is the performance achieved on NarrativeQA?", "What is an \"answer style\"?" ], "question_id": [ "6ead576ee5813164684a8cdda36e6a8c180455d9", "0117aa1266a37b0d2ef429f1b0653b9dde3677fe", "5455b3cdcf426f4d5fc40bc11644a432fa7a5c8f", "6c80bc3ed6df228c8ca6e02c0a8a1c2889498688", "2d274c93901c193cf7ad227ab28b1436c5f410af", "e63bde5c7b154fbe990c3185e2626d13a1bad171", "cb8a6f5c29715619a137e21b54b29e9dd48dad7d" ], "nlp_background": [ "infinity", "infinity", "infinity", "infinity", "infinity", "infinity", "infinity" ], "topic_background": [ "familiar", "familiar", "familiar", "research", "research", "research", "research" ], "paper_read": [ "no", "no", "no", "no", "no", "no", "no" ], "search_query": [ "reading comprehension", "reading comprehension", "reading comprehension", "reading comprehension", "reading comprehension", "reading comprehension", "reading comprehension" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "Rouge-L", "Bleu-1" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Table 2 shows that our ensemble model, controlled with the NLG and Q&A styles, achieved state-of-the-art performance on the NLG and Q&A tasks in terms of Rouge-L. In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1. The capability of creating abstractive summaries from the question and passages contributed to its improvements over the state-of-the-art extractive approaches BIBREF6 , BIBREF7 ." ], "highlighted_evidence": [ "In particular, for the NLG task, our single model outperformed competing models in terms of both Rouge-L and Bleu-1." ] } ], "annotation_id": [ "0d82c8d3a311a9f695cae5bd50584efe3d67651c" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles. This modification does not require any changes to the model architecture. Note that introducing the tokens on the decoder side prevents the passage ranker and answer possibility classifier from depending on the answer style." ], "highlighted_evidence": [ "Moreover, to be able to make use of multiple answer styles within a single system, our model introduces an artificial token corresponding to the target style at the beginning of the answer sentence ( $y_1$ ), like BIBREF14 . At test time, the user can specify the first token to control the answer styles." 
] } ], "annotation_id": [ "522ec998f1f29f60ee09a84c6d9dc833d55f516d" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "well-formed sentences vs concise answers", "evidence": [ "We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL." ], "highlighted_evidence": [ "The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question." ] } ], "annotation_id": [ "12db0a9ba3a68b18fe3f729a111881ea824c1e0d" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL." ], "highlighted_evidence": [ "The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question." 
] } ], "annotation_id": [ "547a1dfd18e1e0bd505d93780bde493332c5084a" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "BiDAF, Deep Cascade QA, S-Net+CES2S, BERT+Multi-PGNet, Selector+CCG, VNET, DECAPROP, MHPGM+NOIC, ConZNet, RMR+A2D", "evidence": [ "FLOAT SELECTED: Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported.", "FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported.", "FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." ] } ], "annotation_id": [ "2d9168d8c9582e71772671fec99190636993f9bc" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Bleu-1: 54.11, Bleu-4: 30.43, METEOR: 26.13, ROUGE-L: 59.87", "evidence": [ "FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." ] } ], "annotation_id": [ "dacf7e1b0d991a27991f3094a5420d21280d2856" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "well-formed sentences vs concise answers", "evidence": [ "We conducted experiments on the two tasks of MS MARCO 2.1 BIBREF5 . The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question. For instance, for the question “tablespoon in cup”, the answer in the Q&A task will be “16”, and the answer in the NLG task will be “There are 16 tablespoons in a cup.” In addition to the ALL dataset, we prepared two subsets (Table 1 ). 
The ANS set consists of answerable questions, and the WFA set consists of the answerable questions and well-formed answers, where WFA $\\subset $ ANS $\\subset $ ALL." ], "highlighted_evidence": [ "The answer styles considered in the experiments corresponded to the two tasks. The NLG task requires a well-formed answer that is an abstractive summary of the question and ten passages, averaging 16.6 words. The Q&A task also requires an abstractive answer but prefers a more concise answer than the NLG task, averaging 13.1 words, where many of the answers do not contain the context of the question." ] } ], "annotation_id": [ "18e57e4cefebf25073f74efc3da5763404b4ecd5" ], "worker_id": [ "18f4d5a2eb93a969d55361267e74aa0c4f6f82fe" ] } ] }
{ "caption": [ "Figure 1: Visualization of how our model generates an answer on MS MARCO. Given an answer style (top: NLG, bottom: Q&A), the model controls the mixture of three distributions for generating words from a vocabulary and copying words from the question and multiple passages at each decoding step.", "Figure 2: Masque model architecture.", "Figure 3: Multi-source pointer-generator mechanism. For each decoding step t, mixture weights λv, λq, λp for the probability of generating words from the vocabulary and copying words from the question and the passages are calculated. The three distributions are weighted and summed to obtain the final distribution.", "Table 1: Numbers of questions used in the experiments.", "Table 2: Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); fWu et al. (2018). Whether the competing models are ensemble models or not is unreported.", "Table 3: Ablation test results on the NLG dev. set. The models were trained with the subset listed in “Train”.", "Table 4: Passage ranking results on the ANS dev. set.", "Figure 4: Precision-recall curve for answer possibility classification on the ALL dev. set.", "Figure 5: Lengths of answers generated by Masque broken down by the answer style and query type on the NLG dev. set. The error bars indicate standard errors.", "Table 5: Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). fResults on the NarrativeQA validation set." ], "file": [ "1-Figure1-1.png", "2-Figure2-1.png", "4-Figure3-1.png", "5-Table1-1.png", "6-Table2-1.png", "6-Table3-1.png", "6-Table4-1.png", "7-Figure4-1.png", "7-Figure5-1.png", "8-Table5-1.png" ] }
1908.04917
A Cascade Sequence-to-Sequence Model for Chinese Mandarin Lip Reading
Lip reading aims at decoding text from the movement of a speaker's mouth. In recent years, lip reading methods have made great progress for English, at both the word level and the sentence level. Unlike English, however, Chinese Mandarin is a tone-based language and relies on pitch to distinguish lexical or grammatical meaning, which significantly increases the ambiguity of the lip reading task. In this paper, we propose a Cascade Sequence-to-Sequence Model for Chinese Mandarin (CSSMCM) lip reading, which explicitly models tones when predicting sentences. Tones are modeled from visual information and syntactic structure, and are then used, together with visual information and syntactic structure, to predict the sentence. To evaluate CSSMCM, a dataset called CMLR (Chinese Mandarin Lip Reading) is collected and released, consisting of over 100,000 natural sentences from the China Network Television website. When trained on the CMLR dataset, the proposed CSSMCM surpasses the performance of state-of-the-art lip reading frameworks, which confirms the effectiveness of explicitly modeling tones for Chinese Mandarin lip reading.
{ "section_name": [ "Introduction", "The Proposed Method", "Pinyin Prediction Sub-network", "Tone Prediction Sub-network", "Character Prediction Sub-network", "CSSMCM Architecture", "Training Strategy", "Dataset", "Implementation Details", "Compared Methods and Evaluation Protocol", "Results", "Attention Visualisation", "Summary and Extension" ], "paragraphs": [ [ "Lip reading, also known as visual speech recognition, aims to predict the sentence being spoken, given a silent video of a talking face. In noisy environments, where speech recognition is difficult, visual speech recognition offers an alternative way to understand speech. Besides, lip reading has practical potential in improved hearing aids, security, and silent dictation in public spaces. Lip reading is essentially a difficult problem, as most lip reading actuations, besides the lips and sometimes tongue and teeth, are latent and ambiguous. Several seemingly identical lip movements can produce different words.", "Thanks to the recent development of deep learning, English-based lip reading methods have made great progress, at both word-level BIBREF0 , BIBREF1 and sentence-level BIBREF2 , BIBREF3 . However, as the language of the most number of speakers, there is only a little work for Chinese Mandarin lip reading in the multimedia community. Yang et al. BIBREF4 present a naturally-distributed large-scale benchmark for Chinese Mandarin lip-reading in the wild, named LRW-1000, which contains 1,000 classes with 718,018 samples from more than 2,000 individual speakers. Each class corresponds to the syllables of a Mandarin word composed of one or several Chinese characters. However, they perform only word classification for Chinese Mandarin lip reading but not at the complete sentence level. LipCH-Net BIBREF5 is the first paper aiming for sentence-level Chinese Mandarin lip reading. LipCH-Net is a two-step end-to-end architecture, in which two deep neural network models are employed to perform the recognition of Picture-to-Pinyin (mouth motion pictures to pronunciations) and the recognition of Pinyin-to-Hanzi (pronunciations to texts) respectively. Then a joint optimization is performed to improve the overall performance.", "Belong to two different language families, English and Chinese Mandarin have many differences. The most significant one might be that: Chinese Mandarin is a tone language, while English is not. The tone is the use of pitch in language to distinguish lexical or grammatical meaning - that is, to distinguish or to inflect words . Even two words look the same on the face when pronounced, they can have different tones, thus have different meanings. For example, even though \"UTF8gbsn练习\" (which means practice) and \"UTF8gbsn联系\" (which means contact) have different meanings, but they have the same mouth movement. This increases ambiguity when lip reading. So the tone is an important factor for Chinese Mandarin lip reading.", "Based on the above considerations, in this paper, we present CSSMCM, a sentence-level Chinese Mandarin lip reading network, which contains three sub-networks. Same as BIBREF5 , in the first sub-network, pinyin sequence is predicted from the video. Different from BIBREF5 , which predicts pinyin characters from video, pinyin is taken as a whole in CSSMCM, also known as syllables. As we know, Mandarin Chinese is a syllable-based language and syllables are their logical unit of pronunciation. 
Compared with pinyin characters, syllables are a longer linguistic unit, and can reduce the difficulty of syllable choices in the decoder by sequence-to-sequence attention-based models BIBREF6 . Chen et al. BIBREF7 find that there might be a relationship between the production of lexical tones and the visible movements of the neck, head, and mouth. Motivated by this observation, in the second sub-network, both video and pinyin sequence is used as input to predict tone. Then in the third sub-network, video, pinyin, and tone sequence work together to predict the Chinese character sequence. At last, three sub-networks are jointly finetuned to improve overall performance.", "As there is no public sentence-level Chinese Mandarin lip reading dataset, we collect a new Chinese Mandarin Lip Reading dataset called CMLR based on China Network Television broadcasts containing talking faces together with subtitles of what is said.", "In summary, our major contributions are as follows." ], [ "In this section, we present CSSMCM, a lip reading model for Chinese Mandarin. As mention in Section SECREF1 , pinyin and tone are both important for Chinese Mandarin lip reading. Pinyin represents how to pronounce a Chinese character and is related to mouth movement. Tone can alleviate the ambiguity of visemes (several speech sounds that look the same) to some extent and can be inferred from visible movements. Based on this, the lip reading task is defined as follow: DISPLAYFORM0 ", "The meaning of these symbols is given in Table TABREF5 .", "As shown in Equation ( EQREF6 ), the whole problem is divided into three parts, which corresponds to pinyin prediction, tone prediction, and character prediction separately. Each part will be described in detail below." ], [ "The pinyin prediction sub-network transforms video sequence into pinyin sequence, which corresponds to INLINEFORM0 in Equation ( EQREF6 ). This sub-network is based on the sequence-to-sequence architecture with attention mechanism BIBREF8 . We name the encoder and decoder the video encoder and pinyin decoder, for the encoder process video sequence, and the decoder predicts pinyin sequence. The input video sequence is first fed into the VGG model BIBREF9 to extract visual feature. The output of conv5 of VGG is appended with global average pooling BIBREF10 to get the 512-dim feature vector. Then the 512-dim feature vector is fed into video encoder. The video encoder can be denoted as: DISPLAYFORM0 ", "When predicting pinyin sequence, at each timestep INLINEFORM0 , video encoder outputs are attended to calculate a context vector INLINEFORM1 : DISPLAYFORM0 DISPLAYFORM1 " ], [ "As shown in Equation ( EQREF6 ), tone prediction sub-network ( INLINEFORM0 ) takes video and pinyin sequence as inputs and predict corresponding tone sequence. This problem is modeled as a sequence-to-sequence learning problem too. The corresponding model architecture is shown in Figure FIGREF8 .", "In order to take both video and pinyin information into consideration when producing tone, a dual attention mechanism BIBREF3 is employed. Two independent attention mechanisms are used for video and pinyin sequence. 
Video context vectors INLINEFORM0 and pinyin context vectors INLINEFORM1 are fused when predicting a tone character at each decoder step.", "The video encoder is the same as in Section SECREF7 and the pinyin encoder is: DISPLAYFORM0 ", "The tone decoder takes both video encoder outputs and pinyin encoder outputs to calculate context vector, and then predicts tones: DISPLAYFORM0 DISPLAYFORM1 " ], [ "The character prediction sub-network corresponds to INLINEFORM0 in Equation ( EQREF6 ). It considers all the pinyin sequence, tone sequence and video sequence when predicting Chinese character. Similarly, we also use attention based sequence-to-sequence architecture to model this equation. Here the attention mechanism is modified into triplet attention mechanism: DISPLAYFORM0 DISPLAYFORM1 ", "For the following needs, the formula of tone encoder is also listed as follows: DISPLAYFORM0 " ], [ "The architecture of the proposed approach is demonstrated in Figure FIGREF32 . For better display, the three attention mechanisms are not shown in the figure. During the training of CSSMCM, the outputs of pinyin decoder are fed into pinyin encoder, the outputs of tone decoder into tone encoder: DISPLAYFORM0 DISPLAYFORM1 ", "We replace Equation ( EQREF14 ) with Equation ( EQREF28 ), Equation ( EQREF26 ) with Equation ( EQREF29 ). Then, the three sub-networks are jointly trained and the overall loss function is defined as follows: DISPLAYFORM0 ", "where INLINEFORM0 and INLINEFORM1 stand for loss of pinyin prediction sub-network, tone prediction sub-network and character prediction sub-network respectively, as defined below. DISPLAYFORM0 " ], [ "To accelerate training and reduce overfitting, curriculum learning BIBREF3 is employed. The sentences are grouped into subsets according to the length of less than 11, 12-17, 18-23, more than 24 Chinese characters. Scheduled sampling proposed by BIBREF11 is used to eliminate the discrepancy between training and inference. At the training stage, the sampling rate from the previous output is selected from 0.7 to 1. Greedy decoder is used for fast decoding." ], [ "In this section, a three-stage pipeline for generating the Chinese Mandarin Lip Reading (CMLR) dataset is described, which includes video pre-processing, text acquisition, and data generation. This three-stage pipeline is similar to the method mentioned in BIBREF3 , but considering the characteristics of our Chinese Mandarin dataset, we have optimized some steps and parts to generate a better quality lip reading dataset. The three-stage pipeline is detailed below.", "Video Pre-processing. First, national news program \"News Broadcast\" recorded between June 2009 and June 2018 is obtained from China Network Television website. Then, the HOG-based face detection method is performed BIBREF12 , followed by an open source platform for face recognition and alignment. The video clip set of eleven different hosts who broadcast the news is captured. During the face detection step, using frame skipping can improve efficiency while ensuring the program quality.", "Text Acquisition. Since there is no subtitle or text annotation in the original \"News Broadcast\" program, FFmpeg tools are used to extract the corresponding audio track from the video clip set. Then through the iFLYTEK ASR, the corresponding text annotation of the video clip set is obtained. However, there is some noise in these text annotation. 
English letters, Arabic numerals, and rare punctuation are deleted to get a more pure Chinese Mandarin lip reading dataset.", "Data Generation. The text annotation acquired in the previous step also contains timestamp information. Therefore, video clip set is intercepted according to these timestamp information, and then the corresponding word, phrase, or sentence video segment of the text annotation are obtained. Since the text timestamp information may have a few uncertain errors, some adjustments are made to the start frame and the end frame when intercepting the video segment. It is worth noting that through experiments, we found that using OpenCV can capture clearer video segment than the FFmpeg tools.", "Through the three-stage pipeline mentioned above, we can obtain the Chinese Mandarin Lip Reading (CMLR) dataset containing more than 100,000 sentences, 25,000 phrases, 3,500 characters. The dataset is randomly divided into training set, validation set, and test set in a ratio of 7:1:2. Details are listed in Table TABREF37 ." ], [ "The input images are 64 INLINEFORM0 128 in dimension. Lip frames are transformed into gray-scale, and the VGG network takes every 5 lip frames as an input, moving 2 frames at each timestep. For all sub-networks, a two-layer bi-direction GRU BIBREF13 with a cell size of 256 is used for the encoder and a two-layer uni-direction GRU with a cell size of 512 for the decoder. For character and pinyin vocabulary, we keep characters and pinyin that appear more than 20 times. [sos], [eos] and [pad] are also included in these three vocabularies. The final vocabulary size is 371 for pinyin prediction sub-network, 8 for tone prediction sub-network (four tones plus a neutral tone), and 1,779 for character prediction sub-network.", "The initial learning rate was 0.0001 and decreased by 50% every time the training error did not improve for 4 epochs. CSSMCM is implemented using pytorch library and trained on a Quadro 64C P5000 with 16GB memory. The total end-to-end model was trained for around 12 days." ], [ "WAS: The architecture used in BIBREF3 without the audio input. The decoder output Chinese character at each timestep. Others keep unchanged to the original implementation.", "LipCH-Net-seq: For a fair comparison, we use sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting picture to pinyin.", "CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tone and Chinese characters. In other word, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese character.", "We tried to implement the Lipnet architecture BIBREF2 to predict Chinese character at each timestep. However, the model did not converge. The possible reasons are due to the way CTC loss works and the difference between English and Chinese Mandarin. Compared to English, which only contains 26 characters, Chinese Mandarin contains thousands of Chinese characters. When CTC calculates loss, it first adds blank between every character in a sentence, that causes the number of the blank label is far more than any other Chinese character. Thus, when Lipnet starts training, it predicts only the blank label. 
After a certain epoch, \"UTF8gbsn的\" character will occasionally appear until the learning rate decays to close to zero.", "For all experiments, Character Error Rate (CER) and Pinyin Error Rate (PER) are used as evaluation metrics. CER is defined as INLINEFORM0 , where INLINEFORM1 is the number of substitutions, INLINEFORM2 is the number of deletions, INLINEFORM3 is the number of insertions to get from the reference to the hypothesis and INLINEFORM4 is the number of words in the reference. PER is calculated in the same way as CER. Tone Error Rate (TER) is also included when analyzing CSSMCM, which is calculated in the same way as above." ], [ "Table TABREF40 shows a detailed comparison between various sub-network of different methods. Comparing P2T and VP2T, VP2T considers video information when predicting the pinyin sequence and achieves a lower error rate. This verifies the conjecture of BIBREF7 that the generation of tones is related to the motion of the head. In terms of overall performance, CSSMCM exceeds all the other architecture on the CMLR dataset and achieves 32.48% character error rate. It is worth noting that CSSMCM-w/o video achieves the worst result (42.23% CER) even though its sub-networks perform well when trained separately. This may be due to the lack of visual information to support, and the accumulation of errors. CSSMCM using tone information performs better compared to LipCH-Net-seq, which does not use tone information. The comparison results show that tone is important when lip reading, and when predicting tone, visual information should be considered.", "Table TABREF41 shows some generated sentences from different methods. CSSMCM-w/o video architecture is not included due to its relatively lower performance. These are sentences other methods fail to predict but CSSMCM succeeds. The phrase \"UTF8gbsn实惠\" (which means affordable) in the first example sentence, has a tone of 2, 4 and its corresponding pinyin are shi, hui. WAS predicts it as \"UTF8gbsn事会\" (which means opportunity). Although the pinyin prediction is correct, the tone is wrong. LipCH-Net-seq predicts \"UTF8gbsn实惠\" as \"UTF8gbsn吃贵\" (not a word), which have the same finals \"ui\" and the corresponding mouth shapes are the same. It's the same in the second example. \"UTF8gbsn前, 天, 年\" have the same finals and mouth shapes, but the tone is different.", "These show that when predicting characters with the same lip shape but different tones, other methods are often unable to predict correctly. However, CSSMCM can leverage the tone information to predict successfully.", "Apart from the above results, Table TABREF42 also lists some failure cases of CSSMCM. The characters that CSSMCM predicts wrong are usually homophones or characters with the same final as the ground truth. In the first example, \"UTF8gbsn价\" and \"UTF8gbsn下\" have the same final, ia, while \"UTF8gbsn一\" and \"UTF8gbsn医\" are homophones in the second example. Unlike English, if one character in an English word is predicted wrong, the understanding of the transcriptions has little effect. However, if there is a character predicted wrong in Chinese words, it will greatly affect the understandability of transcriptions. In the second example, CSSMCM mispredicts \"UTF8gbsn医学\" ( which means medical) to \"UTF8gbsn一水\" (which means all). 
Although their first characters are pronounced the same, the meaning of the sentence changed from Now with the progress of medical science and technology in our country to It is now with the footsteps of China's Yishui Technology." ], [ "Figure FIGREF44 (a) and Figure FIGREF44 (b) visualise the alignment of video frames and Chinese characters predicted by CSSMCM and WAS respectively. The ground truth sequence is \"UTF8gbsn同时他还向媒体表示\". Comparing Figure FIGREF44 (a) with Figure FIGREF44 (b), the diagonal trend of the video attention map got by CSSMCM is more obvious. The video attention is more focused where WAS predicts wrong, i.e. the area corresponding to \"UTF8gbsn还向\". Although WAS mistakenly predicts the \"UTF8gbsn媒体\" as \"UTF8gbsn么体\", the \"UTF8gbsn媒体\" and the \"UTF8gbsn么体\" have the same mouth shape, so the attention concentrates on the correct frame.", "It's interesting to mention that in Figure FIGREF47 , when predicting the INLINEFORM0 -th character, attention is concentrated on the INLINEFORM1 -th tone. This may be because attention is applied to the outputs of the encoder, which actually includes all the information from the previous INLINEFORM2 timesteps. The attention to the tone of INLINEFORM3 -th timestep serves as the language model, which reduces the options for generating the character at INLINEFORM4 -th timestep, making prediction more accurate." ], [ "In this paper, we propose the CSSMCM, a Cascade Sequence-to-Sequence Model for Chinese Mandarin lip reading. CSSMCM is designed to predicting pinyin sequence, tone sequence, and Chinese character sequence one by one. When predicting tone sequence, a dual attention mechanism is used to consider video sequence and pinyin sequence at the same time. When predicting the Chinese character sequence, a triplet attention mechanism is proposed to take all the video sequence, pinyin sequence, and tone sequence information into consideration. CSSMCM consistently outperforms other lip reading architectures on the proposed CMLR dataset.", "Lip reading and speech recognition are very similar. In Chinese Mandarin speech recognition, there have been kinds of different acoustic representations like syllable initial/final approach, syllable initial/final with tone approach, syllable approach, syllable with tone approach, preme/toneme approach BIBREF15 and Chinese Character approach BIBREF16 . In this paper, the Chinese character is chosen as the output unit. However, we find that the wrongly predicted characters severely affect the understandability of transcriptions. Using larger output units, like Chinese words, maybe can alleviate this problem." ] ] }
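The error rates used above can be illustrated with a short sketch. CER is (S + D + I) / N, i.e., the Levenshtein distance between hypothesis and reference divided by the reference length; PER and TER are computed the same way over pinyin and tone sequences. The following is a minimal Python sketch, not the authors' evaluation code.

def error_rate(reference, hypothesis):
    # dp[i][j]: edit distance between reference[:i] and hypothesis[:j]
    m, n = len(reference), len(hypothesis)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n] / max(1, m)

For instance, for the attention-visualisation example above, error_rate("同时他还向媒体表示", "同时他还向么体表示") is 1/9: one substitution over nine reference characters.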
{ "question": [ "What was the previous state of the art model for this task?", "What syntactic structure is used to model tones?", "What visual information characterizes tones?" ], "question_id": [ "8a7bd9579d2783bfa81e055a7a6ebc3935da9d20", "27b01883ed947b457d3bab0c66de26c0736e4f90", "9714cb7203c18a0c53805f6c889f2e20b4cab5dd" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "WAS", "LipCH-Net-seq", "CSSMCM-w/o video" ], "yes_no": null, "free_form_answer": "", "evidence": [ "WAS: The architecture used in BIBREF3 without the audio input. The decoder output Chinese character at each timestep. Others keep unchanged to the original implementation.", "LipCH-Net-seq: For a fair comparison, we use sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting picture to pinyin.", "CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tone and Chinese characters. In other word, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese character." ], "highlighted_evidence": [ "WAS: The architecture used in BIBREF3 without the audio input. The decoder output Chinese character at each timestep. Others keep unchanged to the original implementation.", "LipCH-Net-seq: For a fair comparison, we use sequence-to-sequence with attention framework to replace the Connectionist temporal classification (CTC) loss BIBREF14 used in LipCH-Net BIBREF5 when converting picture to pinyin.", "CSSMCM-w/o video: To evaluate the necessity of video information when predicting tone, the video stream is removed when predicting tone and Chinese characters. In other word, video is only used when predicting the pinyin sequence. The tone is predicted from the pinyin sequence. Tone information and pinyin information work together to predict Chinese character." ] } ], "annotation_id": [ "1ece262c376fd7ac6c0835bd051a95d4e766b9e9" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "syllables" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Based on the above considerations, in this paper, we present CSSMCM, a sentence-level Chinese Mandarin lip reading network, which contains three sub-networks. Same as BIBREF5 , in the first sub-network, pinyin sequence is predicted from the video. Different from BIBREF5 , which predicts pinyin characters from video, pinyin is taken as a whole in CSSMCM, also known as syllables. As we know, Mandarin Chinese is a syllable-based language and syllables are their logical unit of pronunciation. Compared with pinyin characters, syllables are a longer linguistic unit, and can reduce the difficulty of syllable choices in the decoder by sequence-to-sequence attention-based models BIBREF6 . Chen et al. BIBREF7 find that there might be a relationship between the production of lexical tones and the visible movements of the neck, head, and mouth. 
Motivated by this observation, in the second sub-network, both video and pinyin sequence is used as input to predict tone. Then in the third sub-network, video, pinyin, and tone sequence work together to predict the Chinese character sequence. At last, three sub-networks are jointly finetuned to improve overall performance." ], "highlighted_evidence": [ "Same as BIBREF5 , in the first sub-network, pinyin sequence is predicted from the video. Different from BIBREF5 , which predicts pinyin characters from video, pinyin is taken as a whole in CSSMCM, also known as syllables. As we know, Mandarin Chinese is a syllable-based language and syllables are their logical unit of pronunciation." ] } ], "annotation_id": [ "e798667550c806520945f7eda429883125402810" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "video sequence is first fed into the VGG model BIBREF9 to extract visual feature" ], "yes_no": null, "free_form_answer": "", "evidence": [ "As shown in Equation ( EQREF6 ), tone prediction sub-network ( INLINEFORM0 ) takes video and pinyin sequence as inputs and predict corresponding tone sequence. This problem is modeled as a sequence-to-sequence learning problem too. The corresponding model architecture is shown in Figure FIGREF8 .", "In order to take both video and pinyin information into consideration when producing tone, a dual attention mechanism BIBREF3 is employed. Two independent attention mechanisms are used for video and pinyin sequence. Video context vectors INLINEFORM0 and pinyin context vectors INLINEFORM1 are fused when predicting a tone character at each decoder step.", "The video encoder is the same as in Section SECREF7 and the pinyin encoder is: DISPLAYFORM0", "The pinyin prediction sub-network transforms video sequence into pinyin sequence, which corresponds to INLINEFORM0 in Equation ( EQREF6 ). This sub-network is based on the sequence-to-sequence architecture with attention mechanism BIBREF8 . We name the encoder and decoder the video encoder and pinyin decoder, for the encoder process video sequence, and the decoder predicts pinyin sequence. The input video sequence is first fed into the VGG model BIBREF9 to extract visual feature. The output of conv5 of VGG is appended with global average pooling BIBREF10 to get the 512-dim feature vector. Then the 512-dim feature vector is fed into video encoder. The video encoder can be denoted as: DISPLAYFORM0" ], "highlighted_evidence": [ "As shown in Equation ( EQREF6 ), tone prediction sub-network ( INLINEFORM0 ) takes video and pinyin sequence as inputs and predict corresponding tone sequence.", "Video context vectors INLINEFORM0 and pinyin context vectors INLINEFORM1 are fused when predicting a tone character at each decoder step.", "The video encoder is the same as in Section SECREF7 and the pinyin encoder is: DISPLAYFORM0", "The input video sequence is first fed into the VGG model BIBREF9 to extract visual feature. The output of conv5 of VGG is appended with global average pooling BIBREF10 to get the 512-dim feature vector. Then the 512-dim feature vector is fed into video encoder." ] } ], "annotation_id": [ "0dab82770f65dbc8ab1a57f6d8f4b17689b2d489" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Fig. 1. The tone prediction sub-network.", "Table 1. Symbol Definition", "Fig. 2. The character prediction sub-network.", "Fig. 3. The overall of the CSSMCM network. The attention module is omitted for sake of simplicity.", "Table 2. The CMLR dataset. Division of training, validation and test data; and the number of sentences, phrases and characters of each partition.", "Table 3. The detailed comparison between CSSMCM and other methods on the CMLR dataset. V, P, T, C stand for video, pinyin, tone and character. V2P stands for the transformation from video sequence to pinyin sequence. VP2T represents the input are video and pinyin sequence and the output is sequence of tone. OVERALL means to combine the sub-networks and make a joint optimization.", "Table 4. Examples of sentences that CSSMCM correctly predicts while other methods do not. The pinyin and tone sequence corresponding to the Chinese character sentence are also displayed together. GT stands for ground truth.", "Table 5. Failure cases of CSSMCM.", "Fig. 4. Video-to-text alignment using CSSMCM (a) and WAS (b).", "Fig. 5. Aligenment between output characters and predicted tone sequences using CSSMCM." ], "file": [ "2-Figure1-1.png", "2-Table1-1.png", "3-Figure2-1.png", "3-Figure3-1.png", "4-Table2-1.png", "4-Table3-1.png", "5-Table4-1.png", "5-Table5-1.png", "6-Figure4-1.png", "6-Figure5-1.png" ] }
1906.03338
Dissecting Content and Context in Argumentative Relation Analysis
When assessing relations between argumentative units (e.g., support or attack), computational systems often exploit disclosing indicators or markers that are not part of elementary argumentative units (EAUs) themselves, but are gained from their context (position in paragraph, preceding tokens, etc.). We show that this dependency is much stronger than previously assumed. In fact, we show that by completely masking the EAU text spans and only feeding information from their context, a competitive system may function even better. We argue that an argument analysis system that relies more on discourse context than the argument's content is unsafe, since it can easily be tricked. To alleviate this issue, we separate argumentative units from their context such that the system is forced to model and rely on an EAU's content. We show that the resulting classification system is more robust, and argue that such models are better suited for predicting argumentative relations across documents.
{ "section_name": [ "Introduction", "Related Work", "Argumentative Relation Prediction: Models and Features", "Models", "Feature implementation", "Results", "Discussion", "Conclusion", "Acknowledgments" ], "paragraphs": [ [ "In recent years we have witnessed a great surge in activity in the area of computational argument analysis (e.g. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 ), and the emergence of dedicated venues such as the ACL Argument Mining workshop series starting in 2014 BIBREF4 .", "Argumentative relation classification is a sub-task of argument analysis that aims to determine relations between argumentative units A and B, for example, A supports B; A attacks B. Consider the following argumentative units (1) and (2), given the topic (0) “Marijuana should be legalized”:", "This example is modeled in Figure FIGREF3 .", "It is clear that (1) has a negative stance towards the topic and (2) has a positive stance towards the topic. Moreover, we can say that (2) attacks (1). In discourse, such a relation is often made explicit through discourse markers: (1). However, (2); On the one hand (1), on the other (2); (1), although (2); Admittedly, (2); etc. In the absence of such markers we must determine this relation by assessing the semantics of the individual argumentative units, including (often implicit) world knowledge about how they are related to each other.", "In this work, we show that argumentative relation classifiers – when provided with textual context surrounding an argumentative unit's span – are very prone to neglect the actual textual content of the EAU span. Instead they heavily rely on contextual markers, such as conjunctions or adverbials, as a basis for prediction. We argue that a system's capacity of predicting the correct relation based on the argumentative units' content is important in many circumstances, e.g., when an argumentative debate crosses document boundaries. For example, the prohibition of marijuana debate extends across populations and countries – argumentative units for this debate can be recovered from thousands of documents scattered across the world wide web. As a consequence, argumentative relation classification systems should not be (immensely) dependent on contextual clues – in the discussed cross-document setting these clues may even be misleading for such a system, since source and target arguments can be embedded in different textual contexts (e.g., when (1) and (2) stem from different documents it is easy to imagine a textual context where (2) is not introduced by however but instead by an `inverse' form such as e.g. moreover)." ], [ "It is well-known that the rhetorical and argumentative structure of texts bear great similarities. For example, BIBREF5 , BIBREF6 , BIBREF0 observe that elementary discourse units (EDUs) in RST BIBREF7 share great similarity with elementary argumentative units (EAUs) in argumentation analysis. BIBREF8 experiment with a modified version of the Microtext corpus BIBREF9 , which is an extensively annotated albeit small corpus. Similar to us, they separate argumentative units from discursive contextual markers. While BIBREF8 conduct a human evaluation to investigate the separation of Logos and Pathos aspects of arguments, our work investigates how (de-)contextualization of argumentative units affects automatic argumentative relation classification models." ], [ "In this section, we describe different formulations of the argumentative relation classification task and describe features used by our replicated model. 
In order to test our hypotheses, we propose to group all features into three distinct types." ], [ "Now, we introduce a classification of three different prediction models used in the argumentative relation prediction literature. We will inspect all of them and show that all can suffer from severe issues when focusing (too much) on the context.", "The model INLINEFORM0 adopts a discourse parsing view on argumentative relation prediction and predicts one outgoing edge for an argumentative unit (one-outgoing edge). Model INLINEFORM1 assumes a connected graph with argumentative units and is tasked with predicting edge labels for unit tuples (labeling relations in a graph). Finally, a model INLINEFORM2 is given two (possibly) unrelated argumentative units and is tasked with predicting connections as well as edge labels (joint edge prediction and labeling).", " BIBREF13 divide the task into relation prediction INLINEFORM0 and relation class assignment INLINEFORM1 : DISPLAYFORM0 ", " DISPLAYFORM0 ", "which the authors describe as argumentative relation identification ( INLINEFORM0 ) and stance detection ( INLINEFORM1 ). In their experiments, INLINEFORM2 , i.e., no distinction is made between features that access only the argument content (EAU span) or only the EAU's embedding context, and some features also consider both (e.g., discourse features). This model adopts a parsing view on argumentative relation classification: every unit is allowed to have only one type of outgoing relation (this follows trivially from the fact that INLINEFORM3 has only one input). Applying such a model to argumentative attack and support relations might impose unrealistic constraints on the resulting argumentation graph: A given premise might in fact attack or support several other premises. The approach may suffice for the case of student argumentative essays, where EAUs are well-framed in a discourse structure, but seems overly restrictive for many other scenarios.", "Another way of framing the task, is to learn a function DISPLAYFORM0 ", "Here, an argumentative unit is allowed to be in a attack or support relation to multiple other EAUs. Yet, both INLINEFORM0 and INLINEFORM1 assume that inputs are already linked and only the class of the link is unknown.", "Thus, we might also model the task in a three-class classification setting to learn a more general function that performs relation prediction and classification jointly (see also, e.g., BIBREF10 ): DISPLAYFORM0 ", "The model described by Eq. EQREF22 is the most general one: not only does it assume a graph view on argumentative units and their relations (as does Eq. EQREF20 ); in model formulation (Eq. EQREF22 ), an argumentative unit can have no or multiple support or attack relations. It naturally allows for cases where an argumentative unit INLINEFORM0 (supports INLINEFORM1 INLINEFORM2 attacks INLINEFORM3 INLINEFORM4 is-unrelated-to INLINEFORM5 ). Given a set of EAUs mined from different documents, this model enables us to construct a full-fledged argumentation graph." ], [ "Our feature implementation follows the feature descriptions for Stance recognition and link identification in BIBREF13 . These features and variations of them have been used successfully in several successive works (cf. BIBREF1 , BIBREF16 , BIBREF15 ).", "For any model the features are indexed by INLINEFORM0 . We create a function INLINEFORM1 which maps from feature indices to feature types. 
In other words, INLINEFORM2 tells us, for any given feature, whether it is content-based ( INLINEFORM3 ), content-ignorant ( INLINEFORM4 ) or full access ( INLINEFORM5 ). The features for, e.g., the joint prediction model INLINEFORM6 of type INLINEFORM7 ( INLINEFORM8 ) can then simply be described as INLINEFORM9 . Recall that features computed on the basis of the EAU span are content-based ( INLINEFORM10 ), features from the EAU-surrounding text are content-ignorant ( INLINEFORM11 ) and features computed from both are denoted by full-access ( INLINEFORM12 ). Details on the extraction of features are provided below.", "These consist of boolean values indicating whether a certain word appears in the argumentative source or target EAU or both (and separately, their contexts). More precisely, for any classification instance we extract uni-grams from within the span of the EAU (if INLINEFORM0 ) or solely from the sentence-context surrounding the EAUs (if INLINEFORM1 ). Words which occur in both bags are only visible in the full-access setup INLINEFORM2 and are modeled as binary indicators.", "Such features consist of syntactic production rules extracted from constituency trees – they are modelled analogously to the lexical features as a bag of production rules. To make a clear division between features derived from the EAU embedding context and features derived from within the EAU span, we divide the constituency tree in two parts, as is illustrated in Figure FIGREF26 .", "If the EAU is embedded in a covering sentence, we cut the syntax tree at the corresponding edge ( in Figure FIGREF26 ). In this example, the content-ignorant ( INLINEFORM0 ) bag-of-word production rule representation includes the rules INLINEFORM1 and INLINEFORM2 . Analogously to the lexical features, the production rules are modeled as binary indicator features.", "These features describe shallow statistics such as the ratio of argumentative unit tokens compared to sentence tokens or the position of the argumentative unit in the paragraph. We set these features to zero for the content representation of the argumentative unit and replicate those features that allow us to treat the argumentative unit as a black-box. For example, in the content-based ( INLINEFORM0 ) system that has access only to the EAU, we can compute the #tokens in the EAU, but not the #tokens in EAU divided by #tokens in the sentence. The latter feature is only accessible in the full access system variants. Hence, in the content-based ( INLINEFORM1 ) system most of these statistics are set to zero since they cannot be computed by considering only the EAU span.", "For the content-based representation we retrieve only discourse relations that are confined within the span of the argumentative unit. In the very frequent case that discourse features cross the boundaries of embedding context and EAU span, we only take them into account for INLINEFORM0 .", "We use the element-wise sum of 300-dimensional pre-trained GloVe vectors BIBREF24 corresponding to the words within the EAU span ( INLINEFORM0 ) and the words of the EAU-surrounding context ( INLINEFORM1 ). Additionally, we compute the element-wise subtraction of the source EAU vector from the target EAU vector, with the aim of modelling directions in distributional space, similarly to BIBREF25 . 
Words with no corresponding pre-trained word vector and empty sequences (e.g., no preceding context available) are treated as a zero-vector.", "Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors." ], [ "Our first step towards our main experiments is to replicate the competitive argumentative relation classifier of BIBREF13 , BIBREF1 . Hence, for comparison purposes, we first formulate the task exactly as it was done in this prior work, using the model formulation in Eq. EQREF17 , which determines the type of outgoing edge from a source (i.e., tree-like view).", "The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features.", "The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 ).", "At first glance, the results of the purely EAU focused systems ( INLINEFORM0 ) are disappointing, since they fall far behind their competitors. On the other hand, their F1 scores are not devastatingly bad. The strong most-frequent-class baseline is significantly outperformed by the content-based ( INLINEFORM1 ) system, across all three prediction settings.", "In summary our findings are as follows: (i) models which see the EAU span (content-based, INLINEFORM0 ) are significantly outperformed by models that have no access to the span itself (content-ignorant, INLINEFORM1 ) across all settings; (ii) in two of three prediction settings ( INLINEFORM2 and INLINEFORM3 ), the model which only has access to the context even outperforms the model that has access to all information in the input. The fact that using features derived exclusively from the EAU embedding context ( INLINEFORM4 ) can lead to better results than using a full feature-system ( INLINEFORM5 ) suggests that some information from the EAU can even be harmful. 
Why this is the case, we cannot answer exactly. A plausible cause might be related to the smaller dimension of the feature space, which makes the SVM less likely to overfit. Still, this finding comes as a surprise and calls for further investigation in future work.", "A system for argumentative relation classification can be applied in one of two settings: single-document or cross-document, as illustrated in Figure FIGREF42 :", "in the first case (top), a system is tasked to classify EAUs that appear linearly in one document – here contextual clues can often highlight the relationship between two units. This is the setting we have been considering up to now. However, in the second scenario (bottom), we have moved away from the closed single-document setting and ask the system to classify two EAUs extracted from different document contexts. This setting applies, for instance, when we are mining arguments from multiple sources.", "In both cases, however, a system that relies more on contextual clues than on the content expressed in the EAUs is problematic: in the single-document setting, such a system will rely on discourse indicators – whether or not they are justified by content – and can thus easily be fooled.", "In the cross-document setting, discourse-based indicators – being inherently defined with respect to their internal document context – do not have a defined rhetorical function with respect to EAUs in a separate document and thus a system that has learned to rely on such markers within a single-document setting can be seriously misled.", "We believe that the cross-document setting should be an important goal in argumentation analysis, since it generalizes better to many debates of interest, where EAUs can be found scattered across thousands of documents. For example, for the topic of legalizing marijuana, EAUs may be mined from millions of documents and thus their relations may naturally extend across document boundaries. If a system learns to over-proportionally attend to the EAUs' surrounding contexts it is prone to making many errors.", "In what follows we are simulating the effects that an overly context-sensitive classifier could have in a cross-document setting, by modifying our experimental setting, and study the effects on the different model types: In one setup – we call it randomized-context – we systematically distort the context of our testing instances by exchanging the context in a randomized manner; in the other setting – called no-context, we are deleting the context around the ADUs to be classified. Randomized-context simulates an open world debate where argumentative units may occur in different contexts, sometimes with discourse markers indicating an opposite class. In other words, in this setting we want to examine effects when porting a context-sensitive system to a multi-document setting. For example, as seen in Figure FIGREF42 , the context of an argumentative unit may change from “However” to “Moreover” – which can happen naturally in open debates.", "The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). 
However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model.", "We calculate the ANOVA classification F scores of the features with respect to our three task formulations INLINEFORM0 and INLINEFORM1 . The F percentiles of features extracted from the EAU surrounding text ( INLINEFORM2 ) and features extracted from the EAU span ( INLINEFORM3 ), are displayed in Figure FIGREF50 .", "It clearly stands out that features obtained from the EAU surrounding context ( INLINEFORM0 ) are assigned much higher scores compared to features stemming from the EAU span ( INLINEFORM1 ). This holds true for all three task formulations and provides further evidence that models – when given the option – put a strong focus on contextual clues while neglecting the information provided by the EAU span itself." ], [ "While competitive systems for argumentative relation classification are considered to be robust, our experiments have shown that despite confidence-inspiring scores on unseen testing data, such systems can easily be fooled – they can deliver strong performance scores although the classifier does not have access to the content of the EAUs. In this respect, we have provided evidence that there is a danger in case models focus too much on rhetorical indicators, in detriment of the context. Thus, the following question arises: How can we prevent argumentation models from modeling arguments or argumentative units and their relations in overly naïve ways? A simple and intuitive way is to dissect EAUs from their surrounding document context. Models trained on data that is restricted to the EAUs' content will be forced to focus on the content of EAUs. We believe that this will enhance the robustness of such models and allows them to generalize to cross-document argument relation classification. The corpus of student essays makes such transformations straightforward: only the EAUs were annotated (e.g., “However, INLINEFORM0 A INLINEFORM1 ”). If annotations extend over the EAUs (e.g., only full sentences are annotated, “ INLINEFORM2 However, A INLINEFORM3 ”), such transformations could be performed automatically after a discourse parsing step. When inspecting the student essays corpus, we further observed that an EAU mining step should involve coreference resolution to better capture relations between EAUs that involve anaphors (e.g., “Exercising makes you feel better” and “It INLINEFORM4 increases endorphin levels”).", "Thus, in order to conduct real-world end-to-end argumentation relation mining for a given topic, we envision a system that addresses three steps: (i) mining of EAUs and (ii) replacement of pronouns in EAUs with referenced entities (e.g., INLINEFORM0 ). Finally (iii), given the cross product of mined EAUs we can apply a model of type INLINEFORM1 to construct a full-fledged argumentation graph, possibly spanning multiple documents. 
We have shown that in order to properly perform step (iii), we need stronger models that are able to better model EAU contents. Hence, we encourage the argumentation community to test their systems on a decontextualized version of the student essays, including the proposed – and possibly further extended – testing setups, to challenge the semantic representation and reasoning capacities of argument analysis models. This will lead to more realistic performance estimates and increased robustness of systems when addressing desirable multi-document tasks." ], [ "We have shown that systems which put too much focus on discourse information may be easily fooled – an issue which has severe implications when systems are applied to cross-document argumentative relation classification tasks. The strong reliance on contextual clues is also problematic in single-document contexts, where systems can run a risk of assigning relation labels relying on contextual and rhetorical effects – instead of focusing on content. Hence, we propose that researchers test their argumentative relation classification systems on two alternative versions of the StudentEssay data that reflect different access levels. (i) EAU-span only, where systems only see the EAU spans and (ii) context-only, where systems can only see the EAU-surrounding context. These complementary settings will (i) challenge the semantic capacities of a system, and (ii) unveil the extent to which a system is focusing on the discourse context when making decisions. We will offer our testing environments to the research community through a platform that provides datasets and scripts and a table to trace the results of content-based systems." ], [ "This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant no. GRK 1994/1 and by the Leibniz ScienceCampus “Empirical Linguistics and Computational Language Modeling”, supported by the Leibniz Association under grant no. SAS-2015-IDS-LWC and by the Ministry of Science, Research, and Art of Baden-Württemberg. " ] ] }
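The experimental design above — content-based (CB), content-ignorant (CI) and full-access (FA) feature groups, a support-vector classifier, and the randomized-context / no-context test perturbations — can be illustrated with a minimal sketch. This is a hypothetical simplification: only unigram features are shown (the paper also uses syntactic, discourse, embedding and sentiment features), a linear SVM stands in for the replicated system, and the instance representation and names are assumptions rather than the authors' code.

import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def text_for(instance, feature_type):
    # instance: dict with "span" (EAU text) and "context" (surrounding text)
    if feature_type == "CB":      # content-based: EAU span only
        return instance["span"]
    if feature_type == "CI":      # content-ignorant: embedding context only
        return instance["context"]
    return instance["span"] + " " + instance["context"]   # FA: full access

def macro_f1(train, y_train, test, y_test, feature_type):
    clf = make_pipeline(CountVectorizer(binary=True), LinearSVC())
    clf.fit([text_for(x, feature_type) for x in train], y_train)
    pred = clf.predict([text_for(x, feature_type) for x in test])
    return f1_score(y_test, pred, average="macro")

def randomized_context(test):
    # exchange contexts of test instances at random (open-debate simulation)
    contexts = [x["context"] for x in test]
    random.shuffle(contexts)
    return [{"span": x["span"], "context": c} for x, c in zip(test, contexts)]

def no_context(test):
    # delete the embedding context entirely
    return [{"span": x["span"], "context": ""} for x in test]

In the reported experiments, the macro F1 of the CB model is unchanged by either perturbation, while the CI and FA models degrade sharply — the basis of the paper's robustness argument.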
{ "question": [ "Do they report results only on English data?", "How do they demonstrate the robustness of their results?", "What baseline and classification systems are used in experiments?", "How are the EAU text spans annotated?", "How are elementary argumentative units defined?" ], "question_id": [ "a22b900fcd76c3d36b5679691982dc6e9a3d34bf", "fb2593de1f5cc632724e39d92e4dd82477f06ea1", "476d0b5579deb9199423bb843e584e606d606bc7", "eddabb24bc6de6451bcdaa7940f708e925010912", "f0946fb9df9839977f4d16c43476e4c2724ff772" ], "nlp_background": [ "five", "five", "five", "five", "five" ], "topic_background": [ "", "", "", "", "" ], "paper_read": [ "", "", "", "", "" ], "search_query": [ "", "", "", "", "" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37", "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "5bd1279173e673acdbf3c6fb54244548d0a580c2" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "performances of a purely content-based model naturally stays stable" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The results are displayed in Figure FIGREF43 . In the standard setting (Figure FIGREF43 ), the models that have access to the context besides the content ( INLINEFORM0 ) and the models that are only allowed to access the context ( INLINEFORM1 ), always perform better than the content-based models ( INLINEFORM2 ) (bars above zero). However, when we randomly flip contexts of the test instances (Figure FIGREF43 ), or suppress them entirely (Figure FIGREF43 ), the opposite picture emerges: the content-based models always outperform the other models. For some classes (support, INLINEFORM3 ) the difference can exceed 50 F1 percentage points. These two studies, where testing examples are varied regarding their context (randomized-context or no-context) simulates what can be expected if we apply our systems for relation class assignment to EAUs stemming from heterogeneous sources. While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model." ], "highlighted_evidence": [ "While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model." ] } ], "annotation_id": [ "4495a8db2cca0ea3f8739bb39a50d3102f573607" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "BIBREF13", "majority baseline" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features.", "The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1. 
Intriguingly, the content-ignorant models ( INLINEFORM6 ) always perform significantly better than the models which only have access to the EAUs' content ( INLINEFORM7 , INLINEFORM8 ). In the most general task formulation ( INLINEFORM9 ), we observe that INLINEFORM10 even significantly outperforms the model which has maximum access (seeing both EAU spans and surrounding contexts: INLINEFORM11 )." ], "highlighted_evidence": [ "The results in Table TABREF38 confirm the results of BIBREF13 and suggest that we successfully replicated a large proportion of their features.", "The results for all three prediction settings (one outgoing edge: INLINEFORM0 , support/attack: INLINEFORM1 and support/attack/neither: INLINEFORM2 ) across all type variables ( INLINEFORM3 , INLINEFORM4 and INLINEFORM5 ) are displayed in Table TABREF39 . All models significantly outperform the majority baseline with respect to macro F1." ] } ], "annotation_id": [ "f607b9d41b945da87473a2955ebb329d6fb80f51" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Answer with content missing: (Data and pre-processing section) The data is suited for our experiments because the annotators were explicitly asked to provide annotations on a clausal level.", "evidence": [ "Tree-based sentiment annotations are sentiment scores assigned to nodes in constituency parse trees BIBREF26 . We represent these scores by a one-hot vector of dimension 5 (5 is very positive, 1 is very negative). We determine the contextual ( INLINEFORM0 ) sentiment by looking at the highest possible node of the context which does not contain the EAU (ADVP in Figure FIGREF26 ). The sentiment for an EAU span ( INLINEFORM1 ) is assigned to the highest possible node covering the EAU span which does not contain the context sub-tree (S in Figure FIGREF26 ). The full-access ( INLINEFORM2 ) score is assigned to the lowest possible node which covers both the EAU span and its surrounding context (S' in Figure FIGREF26 ). Next to the sentiment scores for the selected tree nodes and analogously to the word embeddings, we also calculate the element-wise subtraction of the one-hot sentiment source vectors from the one-hot sentiment target vectors. This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors.", "Results" ], "highlighted_evidence": [ "This results in three additional vectors corresponding to INLINEFORM3 , INLINEFORM4 and INLINEFORM5 difference vectors.\n\nResults" ] } ], "annotation_id": [ "0db6d0334e20d45c98db1f1c6092c84d70a5da30" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "367804869bfc09365b3a9eb9790561cb929a9047" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Figure 1: A graph representation of a topic (node w/ dashed line), two argumentative premise units (nodes w/ solid line), premise-topic relations (positive or negative) and premise-premise relations (here: attacks).", "Figure 2: Production rule extraction from constituency parse for two different argumentative units.", "Table 1: Data set statistics.", "Table 2: Baseline system replication results.", "Table 3: Argumentative relation classification models h, f, g with different access to content and context; models of type CI (content-ignorant) have no access to the EAU span. †: significantly better than mfs baseline (p < 0.005); ‡ significantly better than content-based (p < 0.005).", "Figure 3: Single-document (top) vs. cross-document (bottom) argumentative relation classification. Black edge: gold label; purple edge: predicted label.", "Figure 4: Randomized-context test set: models are applied to testing instances with randomly flipped contexts. No-context test set: models can only access the EAU span of a testing instance. A bar below/above zero means that a system that can access context (content-ignorant CI or full-access FA) is worse/better than the content-based baseline CB that only has access to the EAU span (its performance is not affected by modified context, cf. Tab. 3).", "Figure 5: ANOVA F score percentiles for contentbased vs. content-ignorant features in the training data. A higher feature score suggests greater predictive capacity." ], "file": [ "1-Figure1-1.png", "4-Figure2-1.png", "5-Table1-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Figure3-1.png", "8-Figure4-1.png", "8-Figure5-1.png" ] }
1602.08741
Gibberish Semantics: How Good is Russian Twitter in Word Semantic Similarity Task?
The most studied and most successful language models have been developed and evaluated mainly for English and other closely related European languages, such as French and German. It is important to study the applicability of these models to other languages. The use of vector space models for Russian has recently been studied on multiple corpora, such as Wikipedia, RuWac, and lib.ru, and these models were evaluated on the word semantic similarity task. To our knowledge, Twitter has not been considered as a corpus for this task; with this work we fill the gap. Results for vectors trained on the Twitter corpus are comparable in accuracy with other single-corpus-trained models, although the best performance is currently achieved by a combination of multiple corpora.
{ "section_name": [ "Introduction", "Goals of this paper", "Previous work", "Data processing", "Acquiring data", "Corpus preprocessing", "Training the model", "Experimental results", "Properties of the data", "Determining optimal corpus size", "Determining optimal context size", "Some further observations", "Conclusion" ], "paragraphs": [ [ "Word semantic similarity task is an important part of contemporary NLP. It can be applied in many areas, like word sense disambiguation, information retrieval, information extraction and others. It has long history of improvements, starting with simple models, like bag-of-words (often weighted by TF-IDF score), continuing with more complex ones, like LSA BIBREF0 , which attempts to find “latent” meanings of words and phrases, and even more abstract models, like NNLM BIBREF1 . Latest results are based on neural network experience, but are far more simple: various versions of Word2Vec, Skip-gram and CBOW models BIBREF2 , which currently show the State-of-the-Art results and have proven success with morphologically complex languages like Russian BIBREF3 , BIBREF4 .", "These are corpus-based approaches, where one computes or trains the model from a large corpus. They usually consider some word context, like in bag-of-words, where model is simple count of how often can some word be seen in context of a word being described. This model anyhow does not use semantic information. A step in semantic direction was made by LSA, which requires SVD transformation of co-occurrence matrix and produces vectors with latent, unknown structure. However, this method is rather computationally expensive, and can rarely be applied to large corpora. Distributed language model was proposed, where every word is initially assigned a random fixed-size vector. During training semantically close vectors (or close by means of context) become closer to each other; as matter of closeness the cosine similarity is usually chosen. This trick enables usage of neural networks and other machine learning techniques, which easily deal with fixed-size real vectors, instead of large and sparse co-occurrence vectors.", "It is worth mentioning non-corpus based techniques to estimate word semantic similarity. They usually make use of knowledge databases, like WordNet, Wikipedia, Wiktionary and others BIBREF5 , BIBREF6 . It was shown that Wikipedia data can be used in graph-based methods BIBREF7 , and also in corpus-based ones. In this paper we are not focusing on non-corpus based techniques.", "In this paper we concentrate on usage of Russian Twitter stream as training corpus for Word2Vec model in semantic similarity task, and show results comparable with current (trained on a single corpus). This research is part of molva.spb.ru project, which is a trending topic detection engine for Russian Twitter. Thus the choice of language of interest is narrowed down to only Russian, although there is strong intuition that one can achieve similar results with other languages." ], [ "The primary goal of this paper is to prove usefulness of Russian Twitter stream as word semantic similarity resource. Twitter is a popular social network, or also called \"microblogging service\", which enables users to share and interact with short messages instantly and publicly (although private accounts are also available). 
Users all over the world generate hundreds of millions of tweets per day, all over the world, in many languages, generating enormous amount of verbal data.", "Traditional corpora for the word semantic similarity task are News, Wikipedia, electronic libraries and others (e.g. RUSSE workshop BIBREF4 ). It was shown that type of corpus used for training affects the resulting accuracy. Twitter is not usually considered, and intuition behind this is that probably every-day language is too simple and too occasional to produce good results. On the other hand, the real-time nature of this user message stream seems promising, as it may reveal what certain word means in this given moment.", "The other counter-argument against Twitter-as-Dataset is the policy of Twitter, which disallows publication of any dump of Twitter messages larger than 50K . However, this policy permits publication of Twitter IDs in any amount. Thus the secondary goal of this paper is to describe how to create this kind of dataset from scratch. We provide the sample of Twitter messages used, as well as set of Twitter IDs used during experiments ." ], [ "Semantic similarity and relatedness task received significant amount of attention. Several \"Gold standard\" datasets were produced to facilitate the evaluation of algorithms and models, including WordSim353 BIBREF8 , RG-65 BIBREF9 for English language and others. These datasets consist of several pairs of words, where each pair receives a score from human annotators. The score represents the similarity between two words, from 0% (not similar) to 100% (identical meaning, words are synonyms). Usually these scores are filled out by a number of human annotators, for instance, 13 in case of WordSim353 . The inter-annotator agreement is measured and the mean value is put into dataset.", "Until recent days there was no such dataset for Russian language. To mitigate this the “RUSSE: The First Workshop on Russian Semantic Similarity” BIBREF4 was conducted, producing RUSSE Human-Judgements evaluation dataset (we will refer to it as HJ-dataset). RUSSE dataset was constructed the following way. Firstly, datasets WordSim353, MC BIBREF10 and RG-65 were combined and translated. Then human judgements were obtained by crowdsourcing (using custom implementation). Final size of the dataset is 333 word pairs, it is available on-line.", "The RUSSE contest was followed by paper from its organizers BIBREF4 and several participators BIBREF3 , BIBREF11 , thus filling the gap in word semantic similarity task for Russian language. In this paper we evaluate a Word2Vec model, trained on Russian Twitter corpus against RUSSE HJ-dataset, and show results comparable to top results of other RUSSE competitors." ], [ "In this section we describe how we receive data from Twitter, how we filter it and how we feed it to the model." ], [ "Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. There is special type of API, called Streaming API, that provides a real-time stream of tweets. The key difference with regular API is that connection is kept alive as long as possible, and Tweets are sent in real-time to the client. There are three endpoints of Streaming API of our interest: “sample”, “filter” and “firehose”. The first one provides a sample (random subset) of the full Tweet stream. 
The second one allows to receive Tweets matching some search criteria: matching to one or more search keywords, produced by subset of users, or coming from certain geo location. The last one provides the full set of Tweets, although it is not available by default. In order to get Twitter “firehose” one can contact Twitter, or buy this stream from third-parties.", "In our case the simplest approach would be to use “sample” endpoint, but it provides Tweets in all possible languages from all over the World, while we are concerned only about one language (Russian). In order to use this endpoint we implemented filtering based on language. The filter is simple: if Tweet does not contain a substring of 3 or more cyrillic symbols, it is considered non-Russian. Although this approach keeps Tweets in Mongolian, Ukrainian and other slavic languages (because they use cyrillic alphabet), the total amount of false-positives in this case is negligible. To demonstrate this we conducted simple experiment: on a random sample of 200 tweets only 5 were in a language different from Russian. In order not to rely on Twitter language detection, we chose to proceed with this method of language-based filtering.", "However, the amount of Tweets received through “sample” endpoint was not satisfying. This is probably because “sample” endpoint always streams the same content to all its clients, and small portion of it comes in Russian language. In order to force mining of Tweets in Russian language, we chose \"filter\" endpoint, which requires some search query. We constructed heuristic query, containing some auxiliary words, specific to Russian language: conjunctions, pronouns, prepositions. The full list is as follows:", "russian я, у, к, в, по, на, ты, мы, до, на, она, он, и, да.", "We evaluated our search query on data obtained from “sample” endpoint, and 95% of Tweets matched it. We consider this coverage as reasonable and now on use “filter” endpoint with the query and language filtering described above. In this paper we work with Tweet stream acquired from 2015/07/21 till 2015/08/04. We refer to parts of the dataset by the day of acquisition: 2015/07/21, etc. Tweet IDs used in our experiments are listed on-line." ], [ "Corpus-based algorithms like BoW and Word2Vec require text to be tokenized, and sometimes to be stemmed as well. It is common practice to filter out Stop-Words (e.g. BIBREF11 ), but in this work we don’t use it. Morphological richness of Russian language forces us to use stemming, even though models like Word2Vec does not require it. In our experiments stemmed version performs significantly better than unstemmed, so we only report results of stemmed one. To do stemming we use Yandex Tomita Parser , which is an extractor of simple facts from text in Russian language. It is based on Yandex stemmer mystem BIBREF12 . It requires a set of grammar rules and facts (i.e. simple data structures) to be extracted. In this paper we use it with one simple rule:", "S -> Word interp (SimpleFact.Word);", "This rule tells parser to interpret each word it sees and return it back immediately. We use Tomita Parser as we find it more user-friendly than mystem. Tomita Parser performs following operations: sentence splitting, tokenization, stemming, removing punctuation marks, transforming words to lowercase. Each Tweet is transformed into one or several lines of tab-separated sequences of words (if there are several sentences or lines in a Tweet). 
Twitter-specific “Hashtags” and “User mentions” are treated by Tomita Parser as normal words, except that “@” and “#” symbols are stripped off.", "HJ-dataset contains non-lemmatized words. This is understandable, because the task of this dataset was oriented to human annotators. In several cases plural form is used (consider this pair: \"russianтигр, russianкошачьи\"). In order to compute similarity for those pairs, and having in mind that Twitter data is pre-stemmed, we have to stem HJ-dataset with same parser as well." ], [ "We use Word2Vec to obtain word vectors from Twitter corpus. In this model word vectors are initialized randomly for each unique word and are fed to a sort of neural network. Authors of Word2Vec propose two different models: Skip-gram and CBOW. The first one is trained to predict the context of the word given just the word vector itself. The second one is somewhat opposite: it is trained to predict the word vector given its context. In our study CBOW always performs worse than Skip-gram, hence we describe only results with Skip-gram model. Those models have several training parameters, namely: vector size, size of vocabulary (or minimal frequency of a word), context size, threshold of downsampling, amount of training epochs. We choose vector size based on size of corpus. We use “context size” as “number of tokens before or after current token”. In all experiments presented in this paper we use one training epoch.", "There are several implementations of Word2Vec available, including original C utility and a Python library gensim. We use the latter one as we find it more convenient. Output of Tomita Parser is fed directly line-by-line to the model. It produces the set of vectors, which we then query to obtain similarity between word vectors, in order to compute the correlation with HJ-dataset. To compute correlation we use Spearman coefficient, since it was used as accuracy measure in RUSSE BIBREF4 ." ], [ "In this section we describe properties of data obtained from Twitter, describe experiment protocols and results." ], [ "In order to train Word2Vec model for semantic similarity task we collected Twitter messages for 15 full days, from 2015/07/21 till 2015/08/04. Each day contains on average 3M of Tweets and 40M of tokens. All properties measured are shown in Table 1. Our first observation was that given one day of Twitter data we cannot estimate all of the words from HJ-dataset, because they appear too rarely. We fixed the frequency threshold on value of 40 occurrences per day and counted how many words from HJ-dataset are below this threshold.", "Our second observation was that words \"missing\" from HJ-dataset are different from day to day. This is not very surprising having in mind the dynamic nature of Twitter data. Thus estimation of word vectors is different from day to day. In order to estimate the fluctuation of this semantic measure, we conduct training of Word2Vec on each day in our corpus. We fix vector size to 300, context size to 5, downsampling threshold to 1e-3, and minimal word occurrence threshold (also called min-freq) to 40. The results are shown in Table 2. Mean Spearman correlation between daily Twitter splits and HJ-dataset is 0.36 with std.dev. of 0.04. Word pairs for missing words (infrequent ones) were excluded. We also create superset of all infrequent words, i.e. words having frequency below 40 in at least one daily split. This set contains 50 words and produces 76 \"infrequent word\" pairs (out of 333). 
Every pair containing at least one infrequent word was excluded. On that subset of HJ-dataset mean correlation is 0.29 with std.dev. of 0.03. We consider this to be reasonably stable result." ], [ "Word2Vec model was designed to be trained on large corpora. There are results of training it in reasonable time with corpus size of 1 billion of tokens BIBREF2 . It was mentioned that accuracy of estimated word vectors improves with size of corpus. Twitter provides an enormous amount of data, thus it is a perfect job for Word2Vec. We fix parameters for the model with following values: vector size of 300, min-freq of 40, context size of 5 and downsampling of 1e-3. We train our model subsequently with 1, 7 and 15 days of Twitter data (each starting with 07/21 and followed by subsequent days) . The largest corpus of 15 days contains 580M tokens. Results of training are shown in Table 3. In this experiment the best result belongs to 7-day corpus with 0.56 correlation with HJ-dataset, and 15-day corpus has a little less, 0.55. This can be explained by following: in order to achieve better results with Word2Vec one should increase both corpus and vector sizes. Indeed, training model with vector size of 600 on full Twitter corpus (15 days) shows the best result of 0.59. It is also worth noting that number of \"missing\" pairs is negligible in 7-days corpus: the only missing word (and pair) is \"russianйель\", Yale, the name of university in the USA. There are no \"missing\" words in 15-days corpus.", "Training the model on 15-days corpus took 8 hours on our machine with 2 cores and 4Gb of RAM. We have an intuition that further improvements are possible with larger corpus. Comparing our results to ones reported by RUSSE participants, we conclude that our best result of 0.598 is comparable to other results, as it (virtually) encloses the top-10 of results. However, best submission of RUSSE has huge gap in accuracy of 0.16, compared to our Twitter corpus. Having in mind that best results in RUSSE combine several corpora, it is reasonable to compare Twitter results to other single-corpus results. For convenience we replicate results for these corpora, originally presented in BIBREF4 , alongside with our result in Table 5. Given these considerations we conclude that with size of Twitter corpus of 500M one can achieve reasonably good results on task of word semantic similarity." ], [ "Authors of Word2Vec BIBREF2 and Paragraph Vector BIBREF13 advise to determine the optimal context size for each distinct training session. In our Twitter corpus average length of the sentence appears to be 9.8 with std.dev. of 4.9; it means that most of sentences have less than 20 tokens. This is one of peculiarities of Twitter data: Tweets are limited in size, hence sentences are short. Context size greater than 10 is redundant. We choose to train word vectors with 3 different context size values: 2, 5, 10. We make two rounds of training: first one, with Twitter data from days from 07/21 till 07/25, and second, from 07/26 till 07/30. Results of measuring correlation with HJ-dataset are shown in Table 4. According to these results context size of 5 is slightly better than others, but the difference is negligible compared to fluctuation between several attempts of training." ], [ "Vector space model is capable to give more information than just measure of semantic distance of two given words. It was shown that word vectors can have multiple degrees of similarity. 
In particular, it is possible to model simple relations, like \"country\"-\"capital city\", gender, syntactic relations with algebraic operations over these vectors. Authors of BIBREF2 propose to assess quality of these vectors on task of exact prediction of these word relations. However, word vectors learned from Twitter seem to perform poorly on this task. We don’t make systematic research on this subject here because it goes outside of the scope of the current paper, though it is an important direction of future studies.", "Twitter post often contains three special types of words: user mentions, hashtags and hyperlinks. It can be beneficial to filter them (consider as Stop-Words). In results presented in this paper, and in particular in Tables 3 and 4, we don’t filter such words. It is highly controversial if one should remove hashtags from analysis since they are often valid words or multiwords. It can also be beneficial, in some tasks, to estimate word vectors for a username. Hyperlinks in Twitter posts are mandatory shortened. It is not clear how to treat them: filter out completely, keep them or even un-short them. However, some of our experiments show that filtering of \"User Mentions\" and hyperlinks can improve accuracy on the word semantic relatedness task by 3-5%." ], [ "In this paper we investigated the use of Twitter corpus for training Word2Vec model for task of word semantic similarity. We described a method to obtain stream of Twitter messages and prepare them for training. We use HJ-dataset, which was created for RUSSE contest BIBREF4 to measure correlation between similarity of word vectors and human judgements on word pairs similarity. We achieve results comparable with results obtained while training Word2Vec on traditional corpora, like Wikipedia and Web pages BIBREF3 , BIBREF11 . This is especially important because Twitter data is highly dynamic, and traditional sources are mostly static (rarely change over time). Thus verbal data acquired from Twitter may be used to estimate word vectors for neologisms, or determine other changes in word semantic, as soon as they appear in human speech." ] ] }
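A compact sketch of the training and evaluation loop described in this paper: a Skip-gram Word2Vec model is trained with gensim on Tomita-parsed tweets (vector size 300, context window 5, min-freq 40, downsampling 1e-3, one epoch), and its similarities are correlated with the stemmed HJ-dataset via Spearman's coefficient. Parameter names follow the current gensim 4 API rather than the version used at the time, and the file names are hypothetical.

from gensim.models import Word2Vec
from scipy.stats import spearmanr

class TomitaLines:
    # Re-iterable reader for the parser output: one sentence per line,
    # tab-separated, stemmed, lowercased tokens.
    def __init__(self, path):
        self.path = path
    def __iter__(self):
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                tokens = line.rstrip("\n").split("\t")
                if tokens and tokens != [""]:
                    yield tokens

model = Word2Vec(
    sentences=TomitaLines("tweets_tomita.tsv"),   # hypothetical file name
    sg=1,              # Skip-gram (CBOW performed worse in these experiments)
    vector_size=300,   # 600 gave the best score on the full 15-day corpus
    window=5,
    min_count=40,      # the "min-freq" threshold
    sample=1e-3,       # downsampling threshold
    epochs=1,          # one training epoch, as in the paper
)

human, predicted = [], []
with open("hj_dataset_stemmed.tsv", encoding="utf-8") as f:   # hypothetical file
    for line in f:
        w1, w2, score = line.rstrip("\n").split("\t")
        if w1 in model.wv and w2 in model.wv:    # drop pairs with infrequent words
            human.append(float(score))
            predicted.append(model.wv.similarity(w1, w2))

rho, _ = spearmanr(human, predicted)
print("Spearman correlation with HJ-dataset:", round(rho, 3))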
{ "question": [ "Which Twitter corpus was used to train the word vectors?" ], "question_id": [ "e51d0c2c336f255e342b5f6c3cf2a13231789fed" ], "nlp_background": [ "five" ], "topic_background": [ "unfamiliar" ], "paper_read": [ "no" ], "search_query": [ "twitter" ], "question_writer": [ "e8b24c3133e0bec0a6465e1f13acd3a5ed816b37" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "They collected tweets in Russian language using a heuristic query specific to Russian", "evidence": [ "Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. There is special type of API, called Streaming API, that provides a real-time stream of tweets. The key difference with regular API is that connection is kept alive as long as possible, and Tweets are sent in real-time to the client. There are three endpoints of Streaming API of our interest: “sample”, “filter” and “firehose”. The first one provides a sample (random subset) of the full Tweet stream. The second one allows to receive Tweets matching some search criteria: matching to one or more search keywords, produced by subset of users, or coming from certain geo location. The last one provides the full set of Tweets, although it is not available by default. In order to get Twitter “firehose” one can contact Twitter, or buy this stream from third-parties.", "In our case the simplest approach would be to use “sample” endpoint, but it provides Tweets in all possible languages from all over the World, while we are concerned only about one language (Russian). In order to use this endpoint we implemented filtering based on language. The filter is simple: if Tweet does not contain a substring of 3 or more cyrillic symbols, it is considered non-Russian. Although this approach keeps Tweets in Mongolian, Ukrainian and other slavic languages (because they use cyrillic alphabet), the total amount of false-positives in this case is negligible. To demonstrate this we conducted simple experiment: on a random sample of 200 tweets only 5 were in a language different from Russian. In order not to rely on Twitter language detection, we chose to proceed with this method of language-based filtering.", "However, the amount of Tweets received through “sample” endpoint was not satisfying. This is probably because “sample” endpoint always streams the same content to all its clients, and small portion of it comes in Russian language. In order to force mining of Tweets in Russian language, we chose \"filter\" endpoint, which requires some search query. We constructed heuristic query, containing some auxiliary words, specific to Russian language: conjunctions, pronouns, prepositions. The full list is as follows:", "russian я, у, к, в, по, на, ты, мы, до, на, она, он, и, да." ], "highlighted_evidence": [ "Twitter provides well-documented API, which allows to request any information about Tweets, users and their profiles, with respect to rate limits. There is special type of API, called Streaming API, that provides a real-time stream of tweets. The key difference with regular API is that connection is kept alive as long as possible, and Tweets are sent in real-time to the client. There are three endpoints of Streaming API of our interest: “sample”, “filter” and “firehose”. The first one provides a sample (random subset) of the full Tweet stream. 
The second one allows to receive Tweets matching some search criteria: matching to one or more search keywords, produced by subset of users, or coming from certain geo location. The last one provides the full set of Tweets, although it is not available by default. In order to get Twitter “firehose” one can contact Twitter, or buy this stream from third-parties.\n\nIn our case the simplest approach would be to use “sample” endpoint, but it provides Tweets in all possible languages from all over the World, while we are concerned only about one language (Russian). In order to use this endpoint we implemented filtering based on language. The filter is simple: if Tweet does not contain a substring of 3 or more cyrillic symbols, it is considered non-Russian. Although this approach keeps Tweets in Mongolian, Ukrainian and other slavic languages (because they use cyrillic alphabet), the total amount of false-positives in this case is negligible. To demonstrate this we conducted simple experiment: on a random sample of 200 tweets only 5 were in a language different from Russian. In order not to rely on Twitter language detection, we chose to proceed with this method of language-based filtering.\n\nHowever, the amount of Tweets received through “sample” endpoint was not satisfying. This is probably because “sample” endpoint always streams the same content to all its clients, and small portion of it comes in Russian language. In order to force mining of Tweets in Russian language, we chose \"filter\" endpoint, which requires some search query. We constructed heuristic query, containing some auxiliary words, specific to Russian language: conjunctions, pronouns, prepositions. The full list is as follows:\n\nrussian я, у, к, в, по, на, ты, мы, до, на, она, он, и, да." ] } ], "annotation_id": [ "0e0ced62aefb27fde1a0ab5b1516b4455bf569bb" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
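The acquisition heuristics quoted in the evidence above — keep a tweet only if it contains a run of three or more Cyrillic characters, and drive the "filter" streaming endpoint with a query of common Russian function words — can be sketched as follows. No Twitter client library is shown, and the duplicate "на" in the paper's word list is dropped; treat this as an illustration rather than the authors' code.

import re

# Track keywords for the "filter" endpoint (conjunctions, pronouns, prepositions).
RUSSIAN_FUNCTION_WORDS = ["я", "у", "к", "в", "по", "на", "ты", "мы", "до", "она", "он", "и", "да"]

CYRILLIC_RUN = re.compile(r"[\u0400-\u04FF]{3,}")   # three or more Cyrillic letters in a row

def looks_russian(text):
    # The heuristic also admits other Cyrillic-script languages, which the
    # authors found to be a negligible fraction of the stream.
    return CYRILLIC_RUN.search(text) is not None

assert looks_russian("мы идём домой сегодня")
assert not looks_russian("just a regular English tweet")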
{ "caption": [ "Table 1. Properties of Twitter corpus (15 full days)", "Table 2. Properties of Twitter corpus (average on daily slices)", "Table 3. Properties of Twitter corpus (different size)", "Table 4. RSpearman for different context size", "Table 5. Comparison with current single-corpus trained results" ], "file": [ "6-Table1-1.png", "7-Table2-1.png", "7-Table3-1.png", "8-Table4-1.png", "8-Table5-1.png" ] }
1911.12579
A New Corpus for Low-Resourced Sindhi Language with Word Embeddings
Representing words and phrases as dense vectors of real numbers which encode semantic and syntactic properties is a vital constituent of natural language processing (NLP). The success of neural network (NN) models in NLP largely relies on such dense word representations learned on large unlabeled corpora. Sindhi is a morphologically rich language, spoken by a large population in Pakistan and India, that lacks the corpora which play an essential role as a test-bed for generating word embeddings and developing language-independent NLP systems. In this paper, a large corpus of more than 61 million words is developed for the low-resourced Sindhi language for training neural word embeddings. The corpus is acquired from multiple web resources using web-scrappy. Due to the unavailability of open-source preprocessing tools for Sindhi, the preprocessing of such a large corpus becomes a challenging problem, especially the cleaning of noisy data extracted from web resources. Therefore, a preprocessing pipeline is employed for the filtration of noisy text. Afterwards, the cleaned vocabulary is utilized for training Sindhi word embeddings with the state-of-the-art GloVe, Skip-Gram (SG), and Continuous Bag of Words (CBoW) word2vec algorithms. The intrinsic evaluation approaches of cosine similarity between word vectors and WordSim-353 are employed for the evaluation of the generated Sindhi word embeddings. Moreover, we compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) word representations. Our intrinsic evaluation results demonstrate the high quality of our generated Sindhi word embeddings using SG, CBoW, and GloVe as compared to the SdfastText word representations.
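A brief sketch of the cosine-similarity evaluation named in this abstract: vocabulary words are ranked by cosine similarity to a query word to inspect its nearest neighbours, and the same cosine scores for the translated WordSim-353 pairs are correlated with the human ratings via Spearman's coefficient. The word-to-vector dictionary format and the function names are assumptions for illustration.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest_neighbours(vectors, query, k=5):
    # vectors: dict mapping a word to its embedding (np.ndarray)
    q = vectors[query]
    scored = [(cosine(q, vec), word) for word, vec in vectors.items() if word != query]
    return sorted(scored, reverse=True)[:k]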
{ "section_name": [ "Introduction", "Related work", "Methodology", "Methodology ::: Task description", "Methodology ::: Corpus acquisition", "Methodology ::: Preprocessing", "Methodology ::: Word embedding models", "Methodology ::: GloVe", "Methodology ::: Continuous bag-of-words", "Methodology ::: Skip gram", "Methodology ::: Hyperparameters ::: Sub-sampling", "Methodology ::: Hyperparameters ::: Dynamic context window", "Methodology ::: Hyperparameters ::: Sub-word model", "Methodology ::: Hyperparameters ::: Position-dependent weights", "Methodology ::: Hyperparameters ::: Shifted point-wise mutual information", "Methodology ::: Hyperparameters ::: Deleting rare words", "Methodology ::: Evaluation methods", "Methodology ::: Evaluation methods ::: Cosine similarity", "Methodology ::: Evaluation methods ::: WordSim353", "Statistical analysis of corpus", "Statistical analysis of corpus ::: Letter occurrences", "Statistical analysis of corpus ::: Letter n-grams frequency", "Statistical analysis of corpus ::: Word Frequencies", "Statistical analysis of corpus ::: Stop words", "Experiments and results", "Experiments and results ::: Hyperparameter optimization", "Word similarity comparison of Word Embeddings ::: Nearest neighboring words", "Word similarity comparison of Word Embeddings ::: Word pair relationship", "Word similarity comparison of Word Embeddings ::: Comparison with WordSim353", "Word similarity comparison of Word Embeddings ::: Visualization", "Discussion and future work", "Conclusion" ], "paragraphs": [ [ "Sindhi is a rich morphological, mutltiscript, and multidilectal language. It belongs to the Indo-Aryan language family BIBREF0, with significant cultural and historical background. Presently, it is recognized as is an official language BIBREF1 in Sindh province of Pakistan, also being taught as a compulsory subject in Schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the largest Indian regions of Sindhi native speakers. It is also spoken in other countries except for Pakistan and India, where native Sindhi speakers have migrated, such as America, Canada, Hong Kong, British, Singapore, Tanzania, Philippines, Kenya, Uganda, and South, and East Africa. Sindhi has rich morphological structure BIBREF2 due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. The Persian-Arabic is the standard script of Sindhi, which was officially accepted in 1852 by the British government. However, the Sindhi-Devanagari is also a popular writing system in India being written in left to right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Even though, Sindhi has great historical and literal background, presently spoken by nearly 75 million people BIBREF1. The research on SNLP was coined in 2002, however, IT grabbed research attention after the development of its Unicode system BIBREF3. But still, Sindhi stands among the low-resourced languages due to the scarcity of core language processing resources of the raw and annotated corpus, which can be utilized for training robust word embeddings or the use of machine learning algorithms. 
Since the development of annotated datasets requires time and human resources.", "The Language Resources (LRs) are fundamental elements for the development of high quality NLP systems based on automatic or NN based approaches. The LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages BIBREF4. Many world languages are rich in such language processing resources integrated in their software tools including English BIBREF5 BIBREF6, Chinese BIBREF7 and other languages BIBREF8 BIBREF9. The Sindhi language lacks the basic computational resources BIBREF10 of a large text corpus, which can be utilized for training robust word embeddings and developing language independent NLP applications including semantic analysis, sentiment analysis, parts of the speech tagging, named entity recognition, machine translation BIBREF11, multitasking BIBREF12, BIBREF13. Presently Sindhi Persian-Arabic is frequently used for online communication, newspapers, public institutions in Pakistan, and India BIBREF1. But little work has been carried out for the development of LRs such as raw corpus BIBREF14, BIBREF15, annotated corpus BIBREF16, BIBREF17, BIBREF1, BIBREF18. In the best of our knowledge, Sindhi lacks the large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP).", "One way to to break out this loop is to learn word embeddings from unlabelled corpora, which can be utilized to bootstrap other downstream NLP tasks. The word embedding is a new term of semantic vector space BIBREF19, distributed representations BIBREF20, and distributed semantic models. It is a language modeling approach BIBREF21 used for the mapping of words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationship with neighboring words in a geometric way BIBREF22 BIBREF23. Such as “Einstein” and “Scientist” would have greater similarity compared with “Einstein” and “doctor.” In this way, word embeddings accomplish the important linguistic concept of “a word is characterized by the company it keeps\". More recently NN based models yield state-of-the-art performance in multiple NLP tasks BIBREF24 BIBREF25 with the word embeddings. One of the advantages of such techniques is they use unsupervised approaches for learning representations and do not require annotated corpus which is rare for low-resourced Sindhi language. Such representions can be trained on large unannotated corpora, and then generated representations can be used in the NLP tasks which uses a small amount of labelled data.", "In this paper, we address the problems of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the web-scrappy framework. After the collection of the corpus, we carefully preprocessed for the filtration of noisy text, e.g., the HTML tags and vocabulary of the English language. The statistical analysis is also presented for the letter, word frequencies and identification of stop-words. Finally, the corpus is utilized to generate Sindhi word embeddings using state-of-the-art GloVe BIBREF26 SG and CBoW BIBREF27 BIBREF20 BIBREF24 algorithms. 
The popular intrinsic evaluation method BIBREF20 BIBREF28 BIBREF29 of calculating cosine similarity between word vectors and WordSim353 BIBREF30 are employed to measure the performance of the learned Sindhi word embeddings. We translated English WordSim353 word pairs into Sindhi using bilingual English to Sindhi dictionary. The intrinsic approach typically involves a pre-selected set of query terms BIBREF23 and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with recently revealed Sindhi fastText (SdfastText) BIBREF25 word representations. To the best of our knowledge, this is the first comprehensive work on the development of large corpus and generating word embeddings along with systematic evaluation for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is listed as follows:", "We present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words.", "We develop a text cleaning pipeline for the preprocessing of the raw corpus.", "Generate word embeddings using GloVe, CBoW, and SG Word2Vec algorithms also evaluate and compare them using the intrinsic evaluation approaches of cosine similarity matrix and WordSim353.", "We are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings.", "The remaining sections of the paper are organized as; Section SECREF2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. Afterwards, Section SECREF3 presents the employed methodology, Section SECREF4 consist of statistical analysis of the developed corpus. Section SECREF5 present the experimental setup. The intrinsic evaluation results along with comparison are given in Section SECREF6. The discussion and future work are given in Section SECREF7, and lastly, Section SECREF8 presents the conclusion." ], [ "The natural language resources refer to a set of language data and descriptions BIBREF31 in machine readable form, used for building, improving, and evaluating NLP algorithms or softwares. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources integrated in the software tools including NLTK for English BIBREF5, Stanford CoreNLP BIBREF6, LTP for Chinese BIBREF7, TectoMT for German, Russian, Arabic BIBREF8 and multilingual toolkit BIBREF9. But Sindhi language is at an early stage for the development of such resources and software tools.", "The corpus construction for NLP mainly involves important steps of acquisition, preprocessing, and tokenization. Initially, BIBREF14 discussed the morphological structure and challenges concerned with the corpus development along with orthographical and morphological features in the Persian-Arabic script. The raw and annotated corpus BIBREF1 for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts of speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and machine translation system. But the corpus is acquired only form Wikipedia-dumps. 
A survey-based study BIBREF4 provides all the progress made in the Sindhi Natural Language Processing (SNLP) with the complete gist of adopted techniques, developed tools and available resources which show that work on resource development on Sindhi needs more sophisticated efforts. The raw corpus is utilized for word segmentation BIBREF32 of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources is taken BIBREF16 by open sourcing annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs. The existing and proposed work is presented in Table TABREF9 on the corpus development, word segmentation, and word embeddings, respectively.", "The power of word embeddings in NLP was empirically estimated by proposing a neural language model BIBREF21 and multitask learning BIBREF12, but recently usage of word embeddings in deep neural algorithms has become integral element BIBREF33 for performance acceleration in deep NLP applications. The CBoW and SG BIBREF27 BIBREF20 popular word2vec neural architectures yielded high quality vector representations in lower computational cost with integration of character-level learning on large corpora in terms of semantic and syntactic word similarity later extended BIBREF33 BIBREF24. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words and efficient representation of phrases as well. BIBREF34 proposed NN based approach for generating morphemic-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. A count-based GloVe model BIBREF26 also yielded state-of-the-art results in an intrinsic evaluation and downstream NLP tasks.", "The performance of Word embeddings is evaluated using intrinsic BIBREF23 BIBREF29 and extrinsic evaluation BIBREF28 methods. The performance of word embeddings can be measured with intrinsic and extrinsic evaluation approaches. The intrinsic approach is used to measure the internal quality of word embeddings such as querying nearest neighboring words and calculating the semantic or syntactic similarity between similar word pairs. A method of direct comparison for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space. The key advantage of that method is to reduce bias and create insight to find data-driven relevance judgment. An extrinsic evaluation approach is used to evaluate the performance in downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition BIBREF23, but the Sindhi language lacks annotated corpus for such type of evaluation. Moreover, extrinsic evaluation is time consuming and difficult to interpret. Therefore, we opt intrinsic evaluation method BIBREF28 to get a quick insight into the quality of proposed Sindhi word embeddings by measuring the cosine distance between similar words and using WordSim353 dataset. A study reveals that the choice of optimized hyper-parameters BIBREF35 has a great impact on the quality of pretrained word embeddings as compare to desing a novel algorithm. Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings using CBoW, SG and GloVe models. The embedding visualization is also useful to visualize the similarity of word clusters. Therefore, we use t-SNE BIBREF36 dimensionality reduction algorithm for compressing high dimensional embedding into 2-dimensional $x$,$y$ coordinate pairs with PCA BIBREF37. 
The PCA is useful to combine input features by dropping the least important features while retaining the most valuable features." ], [ "This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings." ], [ "We initiate this work from scratch by collecting large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors, word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization." ], [ "The corpus is a collection of human language text BIBREF31 built with a specific purpose. However, the statistical analysis of the corpus provides quantitative, reusable data, and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language to examine the text. In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter." ], [ "The preprocessing of text corpus obtained from multiple web resources is a challenging task specially it becomes more complicated when working on low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK BIBREF5 for English. Therefore, we design a preprocessing pipeline depicted in Figure FIGREF22 for the filtration of unwanted data and vocabulary of other languages such as English to prepare input for word embeddings. Whereas, the involved preprocessing steps are described in detail below the Figure FIGREF22. Moreover, we reveal the list of Sindhi stop words BIBREF38 which is labor intensive and requires human judgment as well. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert. The partial list of Sindhi stop words is given in TABREF61. We use python programming language for designing the preprocessing pipeline using regex and string functions.", "Input: The collected text documents were concatenated for the input in UTF-8 format.", "Replacement symbols: The punctuation marks of a full stop, hyphen, apostrophe, comma, quotation, and exclamation marks replaced with white space for authentic tokenization because without replacing these symbols with white space the words were found joined with their next or previous corresponding words.", "Filtration of noisy data: The text acquisition from web resources contain a huge amount of noisy data. 
Therefore, we filtered out unimportant data such as the rest of the punctuation marks, special characters, HTML tags, all types of numeric entities, email, and web addresses.", "Normalization: In this step, We tokenize the corpus then normalize to lower-case for the filtration of multiple white spaces, English vocabulary, and duplicate words. The stop words were only filtered out for preparing input for GloVe. However, the sub-sampling approach in CBoW and SG can discard most frequent or stop words automatically." ], [ "The NN based approaches have produced state-of-the-art performance in NLP with the usage of robust word embedings generated from the large unlabelled corpus. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, the use cases of word embeddings are not only limited to boost statistical NLP applications but can also be used to develop language resources such as automatic construction of WordNet BIBREF39 using the unsupervised approach.", "The word embedding can be precisely defined as the encoding of vocabulary $V$ into $N$ and the word $w$ from $V$ to vector $\\overrightarrow{w} $ into $N$-dimensional embedding space. They can be broadly categorized into predictive and count based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models. The GloVe BIBREF26 algorithm treats each word as a single entity in the corpus and generates a vector of each word. However, CBoW and SG BIBREF27 BIBREF20, later extended BIBREF33 BIBREF24, well-known as word2vec rely on simple two layered NN architecture which uses linear activation function in hidden layer and softmax in the output layer. The work2vec model treats each word as a bag-of-character n-gram." ], [ "The GloVe is a log-bilinear regression model BIBREF26 which combines two methods of local context window and global matrix factorization for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using the harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\\frac{1}{4}$. The Glove’s implementation represents word $w \\in V_{w}$ and context $c \\in V_{c}$ in $D$-dimensional vectors $\\overrightarrow{w}$ and $\\overrightarrow{c}$ in a following way,", "Where, $b^{\\overrightarrow{w}}$ is row vector $\\left|V_{w}\\right|$ and $b^{\\overrightarrow{c}}$ is $\\left|V_{c}\\right|$ is column vector." ], [ "The standard CBoW is the inverse of SG BIBREF27 model, which predicts input word on behalf of the context. The length of input in the CBoW model depends on the setting of context window size which determines the distance to the left and right of the target word. Hence the context is a window that contain neighboring words such as by giving $w=\\left\\lbrace w_{1}, w_{2}, \\dots \\dots w_{t}\\right\\rbrace $ a sequence of words $T$, the objective of the CBoW is to maximize the probability of given neighboring words such as,", "Where, $c_{t}$ is context of $t^{\\text{th}}$ word for example with window $w_{t-c}, \\ldots w_{t-1}, w_{t+1}, \\ldots w_{t+c}$ of size $2 c$." ], [ "The SG model predicts surrounding words by giving input word BIBREF20 with training objective of learning good word embeddings that efficiently predict the neighboring words. 
The goal of skip-gram is to maximize average log-probability of words $w=\\left\\lbrace w_{1}, w_{2}, \\dots \\dots w_{t}\\right\\rbrace $ across the entire training corpus,", "Where, $c_{t}$ denotes the context of words indices set of nearby $w_{t}$ words in the training corpus." ], [ "Th sub-sampling BIBREF20 approach is useful to dilute most frequent or stop words, also accelerates learning rate, and increases accuracy for learning rare word vectors. Numerous words in English, e.g., ‘the’, ‘you’, ’that’ do not have more importance, but these words appear very frequently in the text. However, considering all the words equally would also lead to over-fitting problem of model parameters BIBREF24 on the frequent word embeddings and under-fitting on the rest. Therefore, it is useful to count the imbalance between rare and repeated words. The sub-sampling technique randomly removes most frequent words with some threshold $t$ and probability $p$ of words and frequency $f$ of words in the corpus.", "Where each word$w_{i}$ is discarded with computed probability in training phase, $f(w_i )$ is frequency of word $w_{i}$ and $t>0$ are parameters." ], [ "The traditional word embedding models usually use a fixed size of a context window. For instance, if the window size ws=6, then the target word apart from 6 tokens will be treated similarity as the next word. The scheme is used to assign more weight to closer words, as closer words are generally considered to be more important to the meaning of the target word. The CBoW, SG and GloVe models employ this weighting scheme. The GloVe model weights the contexts using a harmonic function, for example, a context word four tokens away from an occurrence will be counted as $\\frac{1}{4}$. However, CBoW and SG implementation equally consider the contexts by dividing the ws with the distance from target word, e.g. ws=6 will weigh its context by $\\frac{6}{6} \\frac{5}{6} \\frac{4}{6} \\frac{3}{6} \\frac{2}{6} \\frac{1}{6}$." ], [ "The sub-word model BIBREF24 can learn the internal structure of words by sharing the character representations across words. In that way, the vector for each word is made of the sum of those character $n-gram$. Such as, a vector of a word “table” is a sum of $n-gram$ vectors by setting the letter $n-gram$ size $min=3$ to $max=6$ as, $<ta, tab, tabl, table, table>, abl, able, able>, ble, ble>, le>$, we can get all sub-words of \"table\" with minimum length of $minn=3$ and maximum length of $maxn=6$. The $<$ and $>$ symbols are used to separate prefix and suffix words from other character sequences. In this way, the sub-word model utilizes the principles of morphology, which improves the quality of infrequent word representations. In addition to character $n-grams$, the input word $w$ is also included in the set of character $n-gram$, to learn the representation of each word. We obtain scoring function using a input dictionary of $n-grams$ with size $K$ by giving word $w$ , where $K_{w} \\subset \\lbrace 1, \\ldots , K\\rbrace $. A word representation $Z_{k}$ is associated to each $n-gram$ $Z$. Hence, each word is represented by the sum of character $n-gram$ representations, where, $s$ is the scoring function in the following equation," ], [ "The position-dependent weighting approach BIBREF40 is used to avoid direct encoding of representations for words and their positions which can lead to over-fitting problem. The approach learns positional representations in contextual word representations and used to reweight word embedding. 
Thus, it captures good contextual representations at lower computational cost,", "Where, $p$ is individual position in context window associated with $d_{p}$ vector. Afterwards the context vector reweighted by their positional vectors is average of context words. The relative positional set is $P$ in context window and $v_{C}$ is context vector of $w_{t}$ respectively." ], [ "The use sparse Shifted Positive Point-wise Mutual Information (SPPMI) BIBREF41 word-context matrix in learning word representations improves results on two word similarity tasks. The CBoW and SG have $k$ (number of negatives) BIBREF27 BIBREF20 hyperparameter, which affects the value that both models try to optimize for each $(w, c): P M I(w, c)-\\log k$. Parameter $k$ has two functions of better estimation of negative examples, and it performs as before observing the probability of positive examples (actual occurrence of $w,c$)." ], [ "Before creating a context window, the automatic deletion of rare words also leads to performance gain in CBoW, SG and GloVe models, which further increases the actual size of context windows." ], [ "The intrinsic evaluation is based on semantic similarity BIBREF23 in word embeddings. The word similarity measure approach states BIBREF35 that the words are similar if they appear in the similar context. We measure word similarity of proposed Sindhi word embeddings using dot product method and WordSim353." ], [ "The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them which can be derived by using the Euclidean dot product method. The dot product is a multiplication of each component from both vectors added together. The result of a dot product between two vectors isn’t another vector but a single value or a scalar. The dot product for two vectors can be defined as: $\\overrightarrow{a}=\\left(a_{1}, a_{2}, a_{3}, \\dots , a_{n}\\right)$ and $\\overrightarrow{b}=\\left({b}_{1}, {b}_{2}, {b}_{3}, \\ldots , {b}_{n}\\right)$ where $a_{n}$ and $b_{n}$ are the components of the vector and $n$ is dimension of vectors such as,", "However, the cosine of two non-zero vectors can be derived by using the Euclidean dot product formula,", "Given $a_{i}$ two vectors of attributes $a$ and $b$, the cosine similarity, $\\cos ({\\theta })$, is represented using a dot product and magnitude as,", "where $a_{i}$ and $b_{i}$ are components of vector $\\overrightarrow{a}$ and $\\overrightarrow{b}$, respectively." ], [ "The WordSim353 BIBREF42 is popular for the evaluation of lexical similarity and relatedness. The similarity score is assigned with 13 to 16 human subjects with semantic relations BIBREF30 for 353 English noun pairs. Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using English to Sindhi bilingual dictionary for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison which is used to used to discover the strength of linear or nonlinear relationships if there are no repeated data values. A perfect Spearman’s correlation of $+1$ or $-1$ discovers the strength of a link between two sets of data (word-pairs) when observations are monotonically increasing or decreasing functions of each other in a following way,", "where $r_s$ is the rank correlation coefficient, $n$ denote the number of observations, and $d^i$ is the rank difference between $i^{th}$ observations." 
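To make the two intrinsic measures described above concrete, the following minimal sketch computes the cosine similarity used for nearest-neighbour retrieval and word-pair scoring, and the Spearman correlation used for the WordSim-353 comparison. The embedding dictionary, vocabulary, and translated word-pair scores are placeholders rather than the paper's actual data, and numpy/scipy are an assumption about tooling.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    # cos(theta) = (a . b) / (||a|| * ||b||)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_neighbors(query, embeddings, topn=8):
    """Top-n words with the highest cosine similarity to `query`.

    `embeddings` is a dict mapping word -> 1-D numpy vector (placeholder structure).
    """
    q = embeddings[query]
    scored = [(w, cosine(q, v)) for w, v in embeddings.items() if w != query]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:topn]

def wordsim_spearman(word_pairs, human_scores, embeddings):
    """Spearman's rho between human judgments and model cosine scores.

    Out-of-vocabulary pairs are skipped, analogous to the untranslatable
    WordSim-353 pairs left out in the paper.
    """
    gold, predicted = [], []
    for (w1, w2), score in zip(word_pairs, human_scores):
        if w1 in embeddings and w2 in embeddings:
            gold.append(score)
            predicted.append(cosine(embeddings[w1], embeddings[w2]))
    rho, _p_value = spearmanr(gold, predicted)
    return rho
```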
], [ "The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens." ], [ "The frequency of letter occurrences in human language is not arbitrarily organized but follow some specific rules which enable us to describe some linguistic regularities. The Zipf’s law BIBREF43 suggests that if the frequency of letter or word occurrence ranked in descending order such as,", "Where, $F_{r}$ is the letter frequency of rth rank, $a$ and $b$ are parameters of input text. The comparative letter frequency in the corpus is the total number of occurrences of a letter divided by the total number of letters present in the corpus. The letter frequencies in our developed corpus are depicted in Figure FIGREF55; however, the corpus contains 187,620,276 total number of the character set. Sindhi Persian-Arabic alphabet consists of 52 letters but in the vocabulary 59 letters are detected, additional seven letters are modified uni-grams and standalone honorific symbols." ], [ "We denote the combination of letter occurrences in a word as n-grams, where each letter is a gram in a word. The letter n-gram frequency is carefully analyzed in order to find the length of words which is essential to develop NLP systems, including learning of word embeddings such as choosing the minimum or maximum length of sub-word for character-level representation learning BIBREF24. We calculate the letter n-grams in words along with their percentage in the developed corpus (see Table TABREF57). The bi-gram words are most frequent, mostly consists of stop words and secondly, 4-gram words have a higher frequency." ], [ "The word frequency count is an observation of word occurrences in the text. The commonly used words are considered to be with higher frequency, such as the word “the\" in English. Similarly, the frequency of rarely used words to be lower. Such frequencies can be calculated at character or word-level. We calculate word frequencies by counting a word $w$ occurrence in the corpus $c$, such as,", "Where the frequency of $w$ is the sum of every occurrence $k$ of $w$ in $c$." ], [ "The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of the NLP model BIBREF38, such as sentiment analysis and text classification. But the construction of such words list is time consuming and requires user decisions. Firstly, we determined Sindhi stop words by counting their term frequencies using Eq. DISPLAY_FORM59, and secondly, by analysing their grammatical status with the help of Sindhi linguistic expert because all the frequent words are not stop words (see Figure FIGREF62). After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words is 340 in our developed corpus. The partial list of most frequent Sindhi stop words is depicted in Table TABREF61 along with their frequency. The filtration of stop words is an essential preprocessing step for learning GloVe BIBREF26 word embeddings; therefore, we filtered out stop words for preparing input for the GloVe model. However, the sub-sampling approach BIBREF33 BIBREF24 is used to discard such most frequent words in CBoW and SG models." ], [ "Hyperparameter optimization BIBREF23is more important than designing a novel algorithm. 
We carefully choose to optimize the dictionary and algorithm-based parameters of CBoW, SG and GloVe algorithms. Hence, we conducted a large number of experiments for training and evaluation until the optimization of most suitable hyperparameters depicted in Table TABREF64 and discussed in Section SECREF63. The choice of optimized hyperparameters is based on The high cosine similarity score in retrieving nearest neighboring words, the semantic, syntactic similarity between word pairs, WordSim353, and visualization of the distance between twenty nearest neighbours using t-SNE respectively. All the experiments are conducted on GTX 1080-TITAN GPU." ], [ "The state-of-the-art SG, CBoW BIBREF27 BIBREF33 BIBREF20 BIBREF24 and Glove BIBREF26 word embedding algorithms are evaluated by parameter tuning for development of Sindhi word embeddings. These parameters can be categories into dictionary and algorithm based, respectively. The integration of character n-gram in learning word representations is an ideal method especially for rich morphological languages because this approach has the ability to compute rare and misspelled words. Sindhi is also a rich morphological language. Therefore more robust embeddings became possible to train with the hyperparameter optimization of SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of three algorithms individually which are discussed as follows:", "Number of Epochs: Generally, more epochs on the corpus often produce better results but more epochs take long training time. Therefore, we evaluate 10, 20, 30 and 40 epochs for each word embedding model, and 40 epochs constantly produce good results.", "Learning rate (lr): We tried lr of $0.05$, $0.1$, and $0.25$, the optimal lr $(0.25)$ gives the better results for training all the embedding models.", "Dimensions ($D$): We evaluate and compare the quality of $100-D$, $200-D$, and $300-D$ using WordSim353 on different $ws$, and the optimal $300-D$ are evaluated with cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. The embedding dimensions have little affect on the quality of the intrinsic evaluation process. However, the selection of embedding dimensions might have more impact on the accuracy in certain downstream NLP applications. The lower embedding dimensions are faster to train and evaluate.", "Character n-grams: The selection of minimum (minn) and the maximum (maxn) length of character $n-grams$ is an important parameter for learning character-level representations of words in CBoW and SG models. Therefore, the n-grams from $3-9$ were tested to analyse the impact on the accuracy of embedding. We optimized the length of character n-grams from $minn=2$ and $maxn=7$ by keeping in view the word frequencies depicted in Table TABREF57.", "Window size (ws): The large ws means considering more context words and similarly less ws means to limit the size of context words. By changing the size of the dynamic context window, we tried the ws of 3, 5, 7 the optimal ws=7 yield consistently better performance.", "Negative Sampling (NS): : The more negative examples yield better results, but more negatives take long training time. We tried 10, 20, and 30 negative examples for CBoW and SG. 
The best negative examples of 20 for CBoW and SG significantly yield better performance in average training time.", "Minimum word count (minw): We evaluated the range of minimum word counts from 1 to 8 and analyzed that the size of input vocabulary is decreasing at a large scale by ignoring more words similarly the vocabulary size was increasing by considering rare words. Therefore, by ignoring words with a frequency of less than 4 in CBoW, SG, and GloVe consistently yields better results with the vocabulary of 200,000 words.", "Loss function (ls): we use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG and default loss function for GloVe BIBREF26.", "The recommended verbosity level, number of buckets, sampling threshold, number of threads are used for training CBoW, SG BIBREF24, and GloVe BIBREF26." ], [ "The cosine similarity matrix BIBREF35 is a popular approach to compute the relationship between all embedding dimensions of their distinct relevance to query word. The words with similar context get high cosine similarity and geometrical relatedness to Euclidean distance, which is a common and primary method to measure the distance between a set of words and nearest neighbors. Each word contains the most similar top eight nearest neighboring words determined by the highest cosine similarity score using Eq. DISPLAY_FORM48. We present the English translation of both query and retrieved words also discuss with their English meaning for ease of relevance judgment between the query and retrieved words.To take a closer look at the semantic and syntactic relationship captured in the proposed word embeddings, Table TABREF74 shows the top eight nearest neighboring words of five different query words Friday, Spring, Cricket, Red, Scientist taken from the vocabulary. As the first query word Friday returns the names of days Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday in an unordered sequence. The SdfastText returns five names of days Sunday, Thursday, Monday, Tuesday and Wednesday respectively. The GloVe model also returns five names of days. However, CBoW and SG gave six names of days except Wednesday along with different writing forms of query word Friday being written in the Sindhi language which shows that CBoW and SG return more relevant words as compare to SdfastText and GloVe. The CBoW returned Add and GloVe returns Honorary words which are little similar to the querry word but SdfastText resulted two irrelevant words Kameeso (N) which is a name (N) of person in Sindhi and Phrase is a combination of three Sindhi words which are not tokenized properly. Similarly, nearest neighbors of second query word Spring are retrieved accurately as names and seasons and semantically related to query word Spring by CBoW, SG and Glove but SdfastText returned four irrelevant words of Dilbahar (N), Pharase, Ashbahar (N) and Farzana (N) out of eight. The third query word is Cricket, the name of a popular game. The first retrieved word in CBoW is Kabadi (N) that is a popular national game in Pakistan. Including Kabadi (N) all the returned words by CBoW, SG and GloVe are related to Cricket game or names of other games. But the first word in SdfastText contains a punctuation mark in retrieved word Gone.Cricket that are two words joined with a punctuation mark (.), which shows the tokenization error in preprocessing step, sixth retrieved word Misspelled is a combination of three words not related to query word, and Played, Being played are also irrelevant and stop words. 
Moreover, fourth query word Red gave results that contain names of closely related to query word and different forms of query word written in the Sindhi language. The last returned word Unknown by SdfastText is irrelevant and not found in the Sindhi dictionary for translation. The last query word Scientist also contains semantically related words by CBoW, SG, and GloVe, but the first Urdu word given by SdfasText belongs to the Urdu language which means that the vocabulary may also contain words of other languages. Another unknown word returned by SdfastText does not have any meaning in the Sindhi dictionary. More interesting observations in the presented results are the diacritized words retrieved from our proposed word embeddings and The authentic tokenization in the preprocessing step presented in Figure FIGREF22. However, SdfastText has returned tri-gram words of Phrase in query words Friday, Spring, a Misspelled word in Cricket and Scientist query words. Hence, the overall performance of our proposed SG, CBoW, and GloVe demonstrate high semantic relatedness in retrieving the top eight nearest neighbor words." ], [ "Generally, closer words are considered more important to a word’s meaning. The word embeddings models have the ability to capture the lexical relations between words. Identifying such relationship that connects words is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. The high cosine similarity score denotes the closer words in the embedding matrix, while less cosine similarity score means the higher distance between word pairs. We present the cosine similarity score of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with English translation, which shows the average similarity of 0.632, 0.650, 0.591 yields by CBoW, SG and GloVe respectively. The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that along with performance, the vocabulary in SdfastText is also limited as compared to our proposed word embeddings.", "Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391. The first query word China-Beijing is not available the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG, GloVe models because the word Kabul is the name of the capital of Afghanistan as well as it frequently appears as an adjective in Sindhi text which means able." ], [ "We evaluate the performance of our proposed word embeddings using the WordSim353 dataset by translation English word pairs to Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find the authentic meaning of six terms, so we left these terms untranslated. So our final Sindhi WordSim353 consists of 347 word pairs. Table TABREF80 shows the Spearman correlation results using Eq. 
DISPLAY_FORM51 on different dimensional embeddings on the translated WordSim353. The Table TABREF80 presents complete results with the different ws for CBoW, SG and GloVe in which the ws=7 subsequently yield better performance than ws of 3 and 5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity by achieving the performance of 0.629 with ws=7. In comparison with English BIBREF27 achieved the average semantic and syntactic similarity of 0.637, 0.656 with CBoW and SG, respectively. Therefore, despite the challenges in translation from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationship." ], [ "We use t-Distributed Stochastic Neighboring (t-SNE) dimensionality BIBREF36 reduction algorithm with PCA BIBREF37 for exploratory embeddings analysis in 2-dimensional map. The t-SNE is a non-linear dimensionality reduction algorithm for visualization of high dimensional datasets. It starts the probability calculation of similar word clusters in high-dimensional space and calculates the probability of similar points in the corresponding low-dimensional space. The purpose of t-SNE for visualization of word embeddings is to keep similar words close together in 2-dimensional $x,y$ coordinate pairs while maximizing the distance between dissimilar words. The t-SNE has a perplexity (PPL) tunable parameter used to balance the data points at both the local and global levels. We visualize the embeddings using PPL=20 on 5000-iterations of 300-D models. We use the same query words (see Table TABREF74) by retrieving the top 20 nearest neighboring word clusters for a better understanding of the distance between similar words. Every query word has a distinct color for the clear visualization of a similar group of words. The closer word clusters show the high similarity between the query and retrieved word clusters. The word clusters in SG (see Fig. FIGREF83) are closer to their group of semantically related words. Secondly, the CBoW model depicted in Fig. FIGREF82 and GloVe Fig. FIGREF84 also show the better cluster formation of words than SdfastText Fig. FIGREF85, respectively." ], [ "In this era of the information age, the existence of LRs plays a vital role in the digital survival of natural languages because the NLP tools are used to process a flow of un-structured data from disparate sources. It is imperative to mention that presently, Sindhi Persian-Arabic is frequently used in online communication, newspapers, public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out for the development of resources which is not sufficient to design a language independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with their evaluation for statistical Sindhi language processing. More recently, the NN based approaches have produced a state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from the large unlabelled corpus. Such word embeddings have also motivated the work on low-resourced languages. Our work mainly consists of novel contributions of resource development along with comprehensive evaluation for the utilization of NN based approaches in SNLP applications. 
The large corpus obtained from multiple web resources is utilized for the training of word embeddings using SG, CBoW and Glove models. The intrinsic evaluation along with comparative results demonstrates that the proposed Sindhi word embeddings have accurately captured the semantic information as compare to recently revealed SdfastText word vectors. The SG yield best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation matrices. The GloVe also yields better word representations; however SG and CBoW models surpass the GloVe model in all evaluation matrices. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, We analysed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. However, in algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall window size, learning rate, number of epochs are the core parameters that largely influence the performance of word embeddings models. Ultimately, the new corpus of low-resourced Sindhi language, list of stop words and pretrained word embeddings along with empirical evaluation, will be a good supplement for future research in SSLP applications. In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks and the extrinsic evaluation approach will be employed for the performance analysis of proposed word embeddings. Moreover, we will also utilize the corpus using Bi-directional Encoder Representation Transformer BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet." ], [ "In this paper, we mainly present three novel contributions of large corpus development contains large vocabulary of more than 61 million tokens, 908,456 unique words. Secondly, the list of Sindhi stop words is constructed by finding their high frequency and least importance with the help of Sindhi linguistic expert. Thirdly, the unsupervised Sindhi word embeddings are generated using state-of-the-art CBoW, SG and GloVe algorithms and evaluated using popular intrinsic evaluation approaches of cosine similarity matrix and WordSim353 for the first time in Sindhi language processing. We translate English WordSim353 using the English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are also compared with recently revealed SdfastText word representations.", "Our empirical results demonstrate that our proposed Sindhi word embeddings have captured high semantic relatedness in nearest neighboring words, word pair relationship, country, and capital and WordSim353. The SG yields the best performance than CBoW and GloVe models subsequently. However, the performance of GloVe is low on the same vocabulary because of character-level learning of word representations and sub-sampling approaches in SG and CBoW. Our proposed Sindhi word embeddings have surpassed SdfastText in the intrinsic evaluation matrix. 
Also, the vocabulary of SdfastText is limited because it is trained on a small Wikipedia corpus of Sindhi Persian-Arabic. In the future, we will further investigate the extrinsic performance of the proposed word embeddings on a Sindhi text classification task. The proposed resources, along with their systematic evaluation, will be a valuable addition to the computational resources for statistical Sindhi language processing." ] ] }
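As a reference point for the training setup reported above (300-dimensional vectors, context window 7, 40 epochs, initial learning rate 0.25, character n-grams of length 2–7, 20 negative samples for SG, hierarchical softmax for CBoW, and a minimum word count of 4), the sketch below passes those hyperparameters to gensim's FastText implementation. Gensim is an assumption made purely for illustration; the paper itself builds on the reference word2vec/fastText and GloVe toolkits, and GloVe would be trained separately with its own co-occurrence pipeline.

```python
from gensim.models import FastText

def train_sindhi_embeddings(sentences, skipgram=True):
    """Train sub-word embeddings with the hyperparameters reported in the paper."""
    return FastText(
        sentences=sentences,   # iterable of token lists from the cleaning pipeline
        vector_size=300,       # optimal embedding dimensionality (300-D)
        window=7,              # optimal context window size (ws = 7)
        epochs=40,             # 40 epochs gave consistently better results
        alpha=0.25,            # optimal initial learning rate
        min_count=4,           # ignore words occurring fewer than 4 times
        min_n=2, max_n=7,      # character n-gram lengths (minn = 2, maxn = 7)
        sg=1 if skipgram else 0,
        negative=20 if skipgram else 0,  # 20 negative samples for SG
        hs=0 if skipgram else 1,         # hierarchical softmax for CBoW
        workers=4,
    )

# Example usage (the corpus path and query word are placeholders):
# sg_model = train_sindhi_embeddings(list(corpus_sentences("sindhi_corpus.txt")))
# print(sg_model.wv.most_similar(query_word, topn=8))  # eight nearest neighbours
```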
{ "question": [ "How does proposed word embeddings compare to Sindhi fastText word representations?", "Are trained word embeddings used for any other NLP task?", "How many uniue words are in the dataset?", "How is the data collected, which web resources were used?" ], "question_id": [ "5b6aec1b88c9832075cd343f59158078a91f3597", "a6717e334c53ebbb87e5ef878a77ef46866e3aed", "a1064307a19cd7add32163a70b6623278a557946", "8cb9006bcbd2f390aadc6b70d54ee98c674e45cc" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Proposed SG model vs SINDHI FASTTEXT:\nAverage cosine similarity score: 0.650 vs 0.388\nAverage semantic relatedness similarity score between countries and their capitals: 0.663 vs 0.391", "evidence": [ "Generally, closer words are considered more important to a word’s meaning. The word embeddings models have the ability to capture the lexical relations between words. Identifying such relationship that connects words is important in NLP applications. We measure that semantic relationship by calculating the dot product of two vectors using Eq. DISPLAY_FORM48. The high cosine similarity score denotes the closer words in the embedding matrix, while less cosine similarity score means the higher distance between word pairs. We present the cosine similarity score of different semantically or syntactically related word pairs taken from the vocabulary in Table TABREF77 along with English translation, which shows the average similarity of 0.632, 0.650, 0.591 yields by CBoW, SG and GloVe respectively. The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that along with performance, the vocabulary in SdfastText is also limited as compared to our proposed word embeddings.", "Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391. The first query word China-Beijing is not available the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG, GloVe models because the word Kabul is the name of the capital of Afghanistan as well as it frequently appears as an adjective in Sindhi text which means able." ], "highlighted_evidence": [ "The SG model achieved a high average similarity score of 0.650 followed by CBoW with a 0.632 average similarity score. The GloVe also achieved a considerable average score of 0.591 respectively. 
However, the average similarity score of SdfastText is 0.388 and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText.", "Moreover, the average semantic relatedness similarity score between countries and their capitals is shown in Table TABREF78 with English translation, where SG also yields the best average score of 0.663 followed by CBoW with 0.611 similarity score. The GloVe also yields better semantic relatedness of 0.576 and the SdfastText yield an average score of 0.391." ] } ], "annotation_id": [ "ff8fd9518421abfced12a1541e4f26b5185fc32c" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": false, "free_form_answer": "", "evidence": [ "In this era of the information age, the existence of LRs plays a vital role in the digital survival of natural languages because the NLP tools are used to process a flow of un-structured data from disparate sources. It is imperative to mention that presently, Sindhi Persian-Arabic is frequently used in online communication, newspapers, public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for its LRs is also increasing for the development of language technology tools. But little work has been carried out for the development of resources which is not sufficient to design a language independent or machine learning algorithms. The present work is a first comprehensive initiative on resource development along with their evaluation for statistical Sindhi language processing. More recently, the NN based approaches have produced a state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from the large unlabelled corpus. Such word embeddings have also motivated the work on low-resourced languages. Our work mainly consists of novel contributions of resource development along with comprehensive evaluation for the utilization of NN based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using SG, CBoW and Glove models. The intrinsic evaluation along with comparative results demonstrates that the proposed Sindhi word embeddings have accurately captured the semantic information as compare to recently revealed SdfastText word vectors. The SG yield best results in nearest neighbors, word pair relationship and semantic similarity. The performance of CBoW is also close to SG in all the evaluation matrices. The GloVe also yields better word representations; however SG and CBoW models surpass the GloVe model in all evaluation matrices. Hyperparameter optimization is as important as designing a new algorithm. The choice of optimal parameters is a key aspect of performance gain in learning robust word embeddings. Moreover, We analysed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. However, in algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, and overall window size, learning rate, number of epochs are the core parameters that largely influence the performance of word embeddings models. Ultimately, the new corpus of low-resourced Sindhi language, list of stop words and pretrained word embeddings along with empirical evaluation, will be a good supplement for future research in SSLP applications. 
In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks and the extrinsic evaluation approach will be employed for the performance analysis of proposed word embeddings. Moreover, we will also utilize the corpus using Bi-directional Encoder Representation Transformer BIBREF13 for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet." ], "highlighted_evidence": [ "In the future, we aim to use the corpus for annotation projects such as parts-of-speech tagging, named entity recognition.", "Furthermore, the generated word embeddings will be utilized for the automatic construction of Sindhi WordNet." ] } ], "annotation_id": [ "80d7f5da1461b4437290ddc0e2474bd1cd298e64" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "908456 unique words are available in collected corpus.", "evidence": [ "The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens.", "FLOAT SELECTED: Table 2: Complete statistics of collected corpus from multiple resources." ], "highlighted_evidence": [ "The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of collected corpus (see Table TABREF52) with number of sentences, words and unique tokens.", "FLOAT SELECTED: Table 2: Complete statistics of collected corpus from multiple resources." ] } ], "annotation_id": [ "6d40c2912577783189a8fe21a2a3f6b5d1f11cea" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "daily Kawish and Awami Awaz Sindhi newspapers", "Wikipedia dumps", "short stories and sports news from Wichaar social blog", "news from Focus Word press blog", "historical writings, novels, stories, books from Sindh Salamat literary website", "novels, history and religious books from Sindhi Adabi Board", " tweets regarding news and sports are collected from twitter" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The corpus is a collection of human language text BIBREF31 built with a specific purpose. However, the statistical analysis of the corpus provides quantitative, reusable data, and an opportunity to examine intuitions and ideas about language. Therefore, the corpus has great importance for the study of written language to examine the text. In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter." 
], "highlighted_evidence": [ "In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resource using web-scrappy framwork for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter." ] } ], "annotation_id": [ "0e1c5eb88cfe7910e0f9f0990a926496818ae6cb" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
{ "caption": [ "Table 1: Comparison of existing and proposed work on Sindhi corpus construction and word embeddings.", "Figure 1: Employed preprocessing pipeline for text cleaning", "Table 2: Complete statistics of collected corpus from multiple resources.", "Figure 2: Frequency distribution of letter occurrences", "Table 3: Length of letter n-grams in words, distinct words, frequency and percentage in corpus.", "Table 4: Partial list of most frequent Sindhi stop words along with frequency in the developed corpus.", "Figure 3: Most frequent words after filtration of stop words", "Table 5: Optimized parameters for CBoW, SG and GloVe models.", "Table 6: Eight nearest neighboring words of each query word with English translation.", "Table 7: Word pair relationship using cosine similarity (higher is better).", "Table 8: Cosine similarity score between country and capital.", "Table 9: Comparison of semantic and syntactic accuracy of proposed word embeddings using WordSim-353 dataset on 300−D embedding choosing various window size (ws).", "Figure 4: Visualization of Sindhi CBoW word embeddings", "Figure 5: Visualization of Sindhi SG word embeddings", "Figure 6: visualization of Sindhi GloVe word embeddings", "Figure 7: Visualization of SdfastText word embeddings" ], "file": [ "4-Table1-1.png", "5-Figure1-1.png", "8-Table2-1.png", "9-Figure2-1.png", "10-Table3-1.png", "11-Table4-1.png", "12-Figure3-1.png", "12-Table5-1.png", "14-Table6-1.png", "15-Table7-1.png", "15-Table8-1.png", "16-Table9-1.png", "17-Figure4-1.png", "17-Figure5-1.png", "17-Figure6-1.png", "18-Figure7-1.png" ] }
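The visualization figures listed above (Figures 4–7) project the 300-D embeddings to two dimensions with t-SNE (perplexity 20, 5000 iterations) after a PCA step. The paper does not name an implementation, so the sketch below uses scikit-learn and matplotlib as an assumed tooling choice, plotting the top-20 nearest neighbours of a few query words; `nearest_neighbors` is the cosine-based helper sketched earlier, not code from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def plot_neighbor_clusters(embeddings, query_words, topn=20, perplexity=20):
    """Scatter-plot query words and their top-n neighbours in a 2-D t-SNE projection."""
    words, colors = [], []
    for group, query in enumerate(query_words):
        words.append(query)
        colors.append(group)
        for word, _score in nearest_neighbors(query, embeddings, topn=topn):
            words.append(word)
            colors.append(group)

    vectors = np.stack([embeddings[w] for w in words])
    # PCA pre-reduction before t-SNE; cap components by the number of points
    reduced = PCA(n_components=min(50, len(words))).fit_transform(vectors)
    # The paper reports perplexity 20 and 5000 iterations; the iteration count is
    # passed as n_iter or max_iter depending on the scikit-learn version.
    coords = TSNE(n_components=2, perplexity=perplexity, random_state=0).fit_transform(reduced)

    plt.scatter(coords[:, 0], coords[:, 1], c=colors, cmap="tab10")
    for (x, y), word in zip(coords, words):
        plt.annotate(word, (x, y), fontsize=8)
    plt.show()
```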
1908.10275
The Wiki Music dataset: A tool for computational analysis of popular music
Is it possible to use algorithms to find trends in the history of popular music? And is it possible to predict the characteristics of future music genres? In order to answer these questions, we produced a hand-crafted dataset with the intent to put together features about style, psychology, sociology and typology, annotated by music genre and indexed by time and decade. We collected a list of popular genres by decade from Wikipedia and scored music genres based on Wikipedia descriptions. Using statistical and machine learning techniques, we find trends in musical preferences and use time series forecasting to evaluate the prediction of future music genres.
{ "section_name": [ "Motivation, Background and Related Work", "Brief introduction to popular music", "Data Description", "Experiments", "Conclusion Acknowledgments and Future" ], "paragraphs": [ [ "Until recent times, the research in popular music was mostly bound to a non-computational approach BIBREF0 but the availability of new data, models and algorithms helped the rise of new research trends. Computational analysis of music structure BIBREF1 is focused on parsing and annotate patters in music files; computational music generation BIBREF2 trains systems able to generate songs with specific music styles; computational sociology of music analyzes databases annotated with metadata such as tempo, key, BPMs and similar (generally referred to as sonic features); even psychology of music use data to find new models.", "Recent papers in computational sociology investigated novelty in popular music, finding that artists who are highly culturally and geographically connected are more likely to create novel songs, especially when they span multiple genres, are women, or are in the early stages of their careers BIBREF3. Using the position in Billboard charts and the sonic features of more than 20K songs, it has been demonstrated that the songs exhibiting some degree of optimal differentiation in novelty are more likely to rise to the top of the charts BIBREF4. These findings offer very interesting perspectives on how popular culture impacts the competition of novel genres in cultural markets. Another problem addressed in this research field is the distinction between what is popular and what is significative to a musical context BIBREF5. Using a user-generated set of tags collected through an online music platform, it has been possible to compute a set of metrics, such as novelty, burst or duration, from a co-occurrence tag network relative to music albums, in order to find the tags that propagate more and the albums having a significative impact. Combining sonic features and topic extraction techniques from approximately 17K tracks, scholars demonstrate quantitative trends in harmonic and timbral properties that brought changes in music sound around 1964, 1983 and 1991 BIBREF6. Beside these research fields, there is a trend in the psychology of music that studies how the musical preferences are reflected in the dimensions of personality BIBREF7. From this kind of research emerged the MUSIC model BIBREF8, which found that genre preferences can be decomposed into five factors: Mellow (relaxed, slow, and romantic), Unpretentious, (easy, soft, well-known), Sophisticated (complex, intelligent or avant-garde), Intense (loud, aggressive, and tense) and Contemporary (catchy, rhythmic or danceable).", "Is it possible to find trends in the characteristics of the genres? And is it possible to predict the characteristics of future genres? To answer these questions, we produced a hand-crafted dataset with the intent to put together MUSIC, style and sonic features, annotated by music genre and indexed by time and decade. To do so, we collected a list of popular music genres by decade from Wikipedia and instructed annotators to score them. The paper is structured as follows: In section SECREF2 we provide a brief history of popular music, in section SECREF3 we describe the dataset and in section SECREF4 we provide the results of the experiments. In the end we draw some conclusions." 
], [ "We define ”popular music” as the music which finds appeal out of culturally closed music groups, also thanks to its commercial nature. Non-popular music can be divided into three broad groups: classical music (produced and performed by experts with a specific education), folk/world music (produced and performed by traditional cultures), and utility music (such as hymns and military marches, not primarily intended for commercial purposes). Popular music is a great mean for spreading culture, and a perfect ground where cultural practices and industry processes combine. In particular the cultural processes select novelties, broadly represented by means of underground music genres, and the industry tries to monetize, making them commercially successful. In the following description we include almost all the genres that reach commercial success and few of the underground genres that are related to them.", "Arguably the beginning of popular music is in the USA between 1880s and 1890s with spirituals, work and shout chants BIBREF9, that we classify half-way between world music and popular music. The first real popular music genres in the 1900s were ragtime, pioneer of piano blues and jazz, and gospel, derived from religious chants of afro-american communities and pioneer of soul and RnB. The 1910s saw the birth of tin pan alley (simple pop songs for piano composed by professionals) and dixieland jazz, a spontaneous melting pot of ragtime, classical, afroamerican and haitian music BIBREF10. In the 1920s, blues and hillbilly country became popular. The former was born as a form of expression of black communities and outcasts, while the latter was a form of entertainment of the white rural communities. Tin pan alley piano composers soon commercialized tracks in the style of blues, generating boogie-woogie as a reaction, an underground and very aggressive piano blues played by black musicians. In Chicago and New York jazz became more sophisticated and spread to Europe, where gipsy jazz became popular soon after. Both in US and Europe, the 1930s were dominated by swing, the most popular form of jazz, which was at the same time danceable, melanchonic, catchy and intelligent. In the US the west swing, a mellow and easy type of country music, became popular thanks to western movies. The 1940s in the US saw a revival of dixieland jazz, the rise of be-bop (one of the most mellow and intelligent forms of jazz), the advent of crooners (male pop singers) and the establishment of back-to-the-roots types of country music such as bluegrass, a reaction against west swing, modernity and electric guitars. In the underground there was honky-tonk, a sad kind of country music that will influence folk rock. In the 1950s rock and roll was created by black communities with the electric fusion of blues, boogie-woogie and hillbilly and soon commercialized for large white audiences. Beside this, many things happened: urban blues forged its modern sound using electric guitars and harmonicas; cool jazz, played also by white people, launched a more commercial and clean style; gospel influenced both doo-wop, (a-cappella music performed by groups of black singers imitating crooners) and RnB, where black female singers played with a jazz or blues band. 
The 1960s saw an explosion of genres: countrypolitan, an electric and easy form of country music, became the most commercialized genre in the US; the first independent labels (in particular the Motown) turned doo-wop into well-arranged and hyper-produced soul music with a good commercial success BIBREF11; ska, a form of dance music with a very typical offbeat, became popular outside of Jamaica; garage (and also surf) rock arose as the first forms of independent commercial rock music, sometimes aggressive and sometimes easy; in the UK, beat popularized a new style of hyper-produced rock music that had a very big commercial success; blues rock emerged as the mix of the two genres; teenypop was created in order to sell records to younger audiences; independent movements like beat generation and hippies helped the rise of folk rock and psychedelic rock respectively BIBREF12; funk emerged from soul and jazz (while jazz turned into the extremely complex free jazz as a reaction against the commercial cool jazz, but remained underground). In the 1970s progressive rock turned psychedelia into a more complex form, independent radios contribute to its diffusion as well as the popularity of songwriters, an evolution of folk singers that proliferated from latin america (nueva canción) to western Europe. In the meanwhile, TV became a new channel for music marketing , exploited by glam rock, that emerged as a form of pop rock music with a fake trasgressive image and eclectic arrangements; fusion jazz begun to include funk and psychedelic elements; the disillusion due to the end of hippie movement left angry and frustrated masses listening to hard rock and blues rock, that included anti-religious symbols and merged into heavy metal. Then garage and independent rock, fueled by anger and frustration, was commercialized as punk rock at the end of the decade, while disco music (a catchy and hyper-danceable version of soul and RnB) was played in famous clubs and linked to sex and fun, gathering the LGBT communities. The poorest black communities, kept out from the disco clubs, begun to perform in house-parties, giving rise to old skool rap, whose sampled sounds and rhythmic vocals were a great novelty but remained underground. The real novelties popularized in this decade were ambient (a very intelligent commercial downtempo music derived from classical music), reggae (which mixed ska, rock and folk and from Jamaica conquered the UK) and above all synth electronica, a type of industrial experimental music that became popular for its new sound and style, bridging the gap between rock and electronic music. This will deeply change the sound of the following decades BIBREF13. The 1980s begun with the rise of synth pop and new wave. The former, also referred to as ”new romantics”, was a popular music that mixed catchy rhythms with simple melodies and synthetic sounds while the latter was an hipster mix of glam rock and post-punk with a positive view (as opposed to the depressive mood of the real post-punk), with minor influences from synth electronica and reggae. The music industry created also glam metal for the heavy metal audiences, that reacted with extreme forms like thrash metal; a similar story happened with punk audiences, that soon moved to extreme forms like hardcore, which remained underground but highlighted a serious tensions between industry and the audiences that wanted spontaneous genres BIBREF14. 
In the meanwhile discopop produced a very catchy, easy and danceable music mix of disco, funk and synthetic sounds, that greatly improved the quality of records, yielding to one of the best selling genres in the whole popular music history. In a similar way smooth jazz (a mix of mellow and easy melodies with synthetic rhythmical bases) and soft adult (a mellow and easy form of pop) obtained a good commercial success. Techno music emerged as a new form of danceable synthetic and funky genre and hard rap became popular both in black and white audiences, while electro (break dance at the time) and (pioneering) house music remained underground for their too much innovative sampled sounds. In the 1990s alternative/grunge rock solved the tension between commercial and spontaneous genres with a style of rock that was at the same time aggressive, intelligent and easy to listen to. The same happened with skatepunk (a fast, happy and commercial form of rock) and rap metal (a mix of the two genres) while britpop continued the tradition of pop rock initiated with beat. RnB evolved into new jack swing (a form of softer, rhythmical and easy funk) and techno split into the commercial eurodance (a mix of techno and disco music with synthetic sounds, manipulated RnB vocals and strong beats) and the subculture of rave (an extremely aggressive form of techno played in secret parties and later in clubs), which helped the creation of goa trance, that new hippie communities used for accompany drug trips BIBREF15. An intelligent and slow mix of electro and RnB became popular as trip hop while an aggressive and extremely fast form of electro with reggae influences became popular as jungle/DnB. By the end of the decade the most commercially successful genres were dancepop (a form of pop that included elements of funk, disco and eurodance in a sexy image) and gangsta rap/hip hop that reached its stereotypical form and became mainstream, while independent labels (that produced many subgenres from shoegaze/indie rock to electro and house) remained in the underground. In the underground -but in latin america- there was also reggaetón, a latin form of rap. The rise of free download and later social networks websites in 2000s opened new channels for independent genres, that allowed the rise of grime (a type of electro mixing DnB and rap), dubstep (a very intelligent and slow mix of techno, DnB and electro low-fi samples), indietronica (a broad genre mixing intelligent indie rock, electro and a lot of minor influences) and later nu disco (a revival of stylish funk and disco updated with electro and house sounds) BIBREF16. In the meanwhile there were popular commercial genres like garage rock revival (that updated rock and punk with danceable beats), emo rock/post grunge (aggressive, easy and even more catchy), urban breaks (a form of RnB with heavy electro and rap influences) and above all electropop (the evolution of dancepop, that included elements of electro/house and consolidated the image of seductive female singers, also aimed at the youngest audiences of teens). Among those genres epic trance (an euphoric, aggressive and easy form of melodic techno) emerged from the biggest dedicated festivals and became mainstream with over-payed DJ-superstars BIBREF17. In the underground remained various forms of nu jazz, hardcore techno, metal and house music. 
Then in 2010s finally euro EDM house music (a form of sample-based and heavily danceable mix of house and electro) came out of underground communities and, borrowing the figure of DJ-superstar from trance, reached commercial success, but left underground communities unsatisfied (they were mostly producing complex electro, a mix of dubstep and avant-garde house). Also drumstep (a faster and aggressive version of dubstep, influenced by EDM and techno) and trap music (a form of dark and heavy techno rap) emerged from underground and had good commercial success. Genres like indiefolk (a modern and eclectic folk rock with country influences) and nu prog rock (another eclectic, experimental and aggressive form of rock with many influences from electro, metal and rap) had moderate success. The availability of websites for user-generated contents such as Youtube helped to popularize genres like electro reggaetón (latin rap with new influences from reggae and electro), cloud rap (an eclectic and intelligent form of rap with electro influences) and JK-pop (a broad label that stands for Japanese and Korean pop, but emerged from all over the world with common features: Youtubers that produce easy and catchy pop music with heavy influences from electropop, discopop and eurodance) BIBREF18. Moreover, technologies helped the creation of mainstream genres such as tropical house (a very melodic, soft and easy form of house music singed in an modern RnB style). In the underground there are yet many minor genres, such as bro country (an easy form of country played by young and attractive guys and influenced by electro and rap), future hardstyle (a form of aggressive trance with easy vocals similar to tropical house) and afrobeat (a form of rap that is popular in western africa with influences from reggaetón and traditional african music).", "From this description we can highlight some general and recurrent tendencies, for example the fact that music industry converts spontaneous novelties into commercial success, but when its products leave audiences frustrated (it happened with west swing, glam metal, cool jazz, punk and many others), they generate reactions in underground cultures, that trigger a change into more aggressive versions of the genre. In general, underground and spontaneous genres are more complex and avant-garde. Another pattern is that media allowed more and more local underground genres to influence the mainstream ones, ending in a combinatorial explosion of possible new genres, most of which remain underground. We suggest that we need to quantify a set of cross-genre characteristics in order to compute with data science techniques some weaker but possibly significative patterns that cannot be observed with qualitative methods. In the next section we define a quantitative methodology and we annotate a dataset to perform experiments." ], [ "From the description of music genres provided above emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in figure FIGREF1.", "From a computational perspective, genres are classes and, although can be treated by machine learning algorithms, they do not include information about the relations between them. In order to formalize the relations between genres for computing purposes, we define a continuous genre scale from the most experimental and introverted super-genre to the most euphoric and inclusive one. 
We selected from Wikipedia the 77 genres that we mentioned in bold in the previous paragraph and asked to two independent raters to read the Wikipedia pages of the genres, listen to samples or artists of the genres (if they did not know already) and then annotate the following dimensions:", "genre features: genre scale (a score between 0 and 1 where 0=downtempo/industrial, 0.1=metal, 0.15=garage/punk/hardcore, 0.2=rock, 0.25=pop rock, 0.3=blues, 0.4=country, 0.5=pop/traditional, 0.55=gospel, 0.6=jazz, 0.65=latin, 0.7=RnB/soul/funk, 0.75=reggae/jamaican, 0.8=rap, 0.85=DnB, 0.9=electro/house, 0.95=EDM, 1=techno/trance) and category of the super-genre (as defined in figure FIGREF1) and influence variety 0.1=influence only from the same super-genre, 1=influences from all the supergenres", "perceived acoustic features: sound (0=acoustic, 0.35=amplified, 0.65=sampled/manipulated, 1=synthetic), vocal melody (1=melodic vocals, 0=rhythmical vocals/spoken words), vocal scream (1=screaming, 0=soft singing), vocal emotional (1=emotional vocals, 0=monotone vocals), virtuous (0.5=normal, 0=not technical at all, 1=very technical); richbass 1=the bass is loud and clear, 0=there is no bass sound; offbeat 1=the genre has a strong offbeat, 0=the genre has not offbeat", "time: decade (classes between 1900s and 2010s) and year representative of the time when the genre became meainstream", "place features: origin place 0=Australia, 0.025=west USA, 0.05=south USA, 0.075=north/east USA, 0.1=UK, 0.2=jamaica, 0.3=carribean, 0.4=latin america, 0.5=africa, 0.6=south EU, 0.65=north/east EU, 0.7=middle east, 0.8=India, 0.9=China/south asia, 1=Korea/north asia; place urban (0=the origin place is rural, 1=the origin place is urban), place poor (0=the origin place is poor, 1=the origin place is rich)", "media features: media mainstream (0=independent media, 1=mainstream media, 0.5=both), media live 0=sell recorded music, 1=sell live performance)", "emotion features: joy/sad (1=joy, 0=sad), anticipation/surprise (1=anticipation or already known, 0=surprise), anger/calm (1=anger, 0=calm).", "style features: novelty 0=derivative, 0.5=normal, 1=totally new characteristics and type retro 1=the genre is a revival, 0.5=normal, 0=the genre is not a revival, lyrics love/explicit 0.5=normal, 1=love lyrics, 0=explicit lyrics, style upbeat 1=extroverted and danceable, 0=introverted and depressive, style instrumental 1=totally instrumental, 0=totally singed, style eclecticism 1=includes many styles, 0=has a stereotypical style, style longsongs 0.5=radio format (3.30 minutes), 1=more than 6 minutes by average, 0=less than 1 minute by average; largebands 1=bands of 10 or more people, 0.1=just one musician; subculture 1=the audience one subculture or more, 0=the audience is the main culture; hedonism 1=the genre promotes hedonism, 0=the genre does not promote hedonism; protest 1=the genre promotes protest, 0=the genere does not promote protest; onlyblack 1=genere produced only by black communities, 0=genre produced only by white communities; ; 44beat 1=the genre has 4/4 beat, 0=the genre has other types of measures; outcasts 1=the audience is poor people, 0=the audience is rich people; dancing 1=the genre is for dancing, 0=the genre is for home listening; drugs 1=the audience use drugs, 0=the audience do not use drugs", "MUSIC features: mellow (1=slow and romantic, 0=fast and furious), sophisticated (1=culturally complex, 0=easy to understand), intense (1=aggressive and loud, 0=soft and relaxing), contemporary (1=rhythmical and catchy, 
0=not rhythmical and old-fashioned), uncomplicated (1=simple and well-known, 0=strange and disgusting)", "We computed the agreement between the two annotators using Cronbach's alpha statistics BIBREF21. The average between all features is $\\alpha =0.793$, which is good. Among the most agreed features there are genre, place, sound and MUSIC features. In particular, the genre scale got an excellent $\\alpha =0.957$, meaning that the genre scale is a reliable measure. In the final annotation all the divergences between the two annotators were agreed upon and the scores were averaged or corrected accordingly. The final dataset is available to the scientific community." ], [ "What are the tendencies that confirm or disconfirm previous findings? We can already make some interesting remarks just from the distributions of the features, reported in figure FIGREF11.", "We can see that most of the popular music genres have a novelty score between 0.5 and 0.65, which is medium-high. This confirms the findings of previous work about the optimal level of innovation and acceptance. It is interesting to note that almost all the popular genres come from an urban context, where the connections between communities are more likely to create innovations. Moreover, we can see that the distribution of mainstream media is bi-modal: this means that an important percentage of genres are popularized by means of underground or new media. This happened many times in music history, from the free radios to the web of user-generated content. Crucially, popular music genres strongly tend to be perceived as technically virtuous.", "Why did the sound change from acoustic to synthetic during the last century? To answer this question we used a correlation analysis with the sound feature as target. It emerged that the change towards sampled and synthetic sound is correlated to dancing, to intensity/aggressiveness, to larger drug usage and to a large variety of influences, while it is negatively correlated to large bands and mellow tones. In summary, a more synthetic sound allowed more intense and danceable music, reducing the number of musicians (in other words, reducing costs for the industry).", "How the music taste of the audience of popular music changed in the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of the music products increases.", "Is it possible to predict future genres by means of the genre scale? To answer this question we used time series forecasting. In particular, we exploited all the features in the years from 1900 to 2010 to train a predictive model of the scores from 2011 to 2018. As the year of the genre label is arbitrary, predicted scores and labels may not be aligned, thus MAE or RMSE are not suitable evaluation metrics. As evaluation metric we defined average accuracy as $a=\\frac{\\sum count(|l-h|<0.1)}{count(t)}$, where the label (l) and the prediction (h) can be anywhere within the year series (t). 
Table TABREF13 shows the results of the prediction of the genre scale for the years 2011 to 2018 with different algorithms: linear regression (LR), Support Vector Machine (SVM), multi-layer perceptron (MLP), nearest neighbors (IBk), and a meta classifier (stacking) with SVM+MLP+IBk.", "The results reveal that the forecasting of music genres is a non-linear problem, that IBk predicts the closest sequence to the annotated one and that a meta classifier with nearest neighbors BIBREF22 is the most accurate in the prediction. Deep learning algorithms do not perform well in this case because the dataset is not large enough. Last remark: feature reduction (from 41 to 14) does not affect the results obtained with IBk and meta classifiers, indicating that there is no curse of dimensionality." ], [ "We annotated and presented a new dataset for the computational analysis of popular music. Our preliminary studies confirm previous findings (there is an optimal level of novelty to become popular and this is more likely to happen in urban contexts) and reveal that audiences tend to like contemporary and intense music experiences. We also performed a back test for the prediction of future music genres in a time series, which turned out to be a non-linear problem. In the future we would like to update the corpus with more features about audience types and commercial success. This work has also been inspired by Music Map." ] ] }
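A note on the evaluation metric defined in the Experiments section above: the 2011–2018 forecast is scored with an "average accuracy" $a=\frac{\sum count(|l-h|<0.1)}{count(t)}$, where a label and a prediction may match anywhere within the year series. Below is a minimal Python sketch of one reasonable reading of that definition; the function name and the example values are illustrative and not taken from the paper.

```python
def average_accuracy(labels, predictions, tolerance=0.1):
    """Share of annotated genre-scale labels matched by at least one prediction
    anywhere in the forecast series (|label - prediction| < tolerance).

    This mirrors a = sum(count(|l - h| < 0.1)) / count(t) under the reading that
    label and prediction need not be aligned to the same year.
    """
    if not labels:
        return 0.0
    hits = sum(1 for l in labels if any(abs(l - h) < tolerance for h in predictions))
    return hits / len(labels)


# Illustrative numbers only (not the annotated 2011-2018 genre-scale values).
annotated_2011_2018 = [0.95, 0.85, 0.90, 0.80, 0.40, 0.90, 0.55, 0.95]
forecast_2011_2018 = [0.93, 0.70, 0.88, 0.82, 0.40, 0.97, 0.50, 0.60]
print(round(average_accuracy(annotated_2011_2018, forecast_2011_2018), 2))
```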
{ "question": [ "What trends are found in musical preferences?", "Which decades did they look at?", "How many genres did they collect from?" ], "question_id": [ "75043c17a2cddfce6578c3c0e18d4b7cf2f18933", "95bb3ea4ebc3f2174846e8d422abc076e1407d6a", "3ebdc15480250f130cf8f5ab82b0595e4d870e2f" ], "nlp_background": [ "", "", "" ], "topic_background": [ "", "", "" ], "paper_read": [ "", "", "" ], "search_query": [ "dataset", "dataset", "dataset" ], "question_writer": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4", "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [ "audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious" ], "yes_no": null, "free_form_answer": "", "evidence": [ "How the music taste of the audience of popular music changed in the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of the music products increases." ], "highlighted_evidence": [ "How the music taste of the audience of popular music changed in the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of the music products increases." ] } ], "annotation_id": [ "0e2f58feb4ba7235a2ae6bd7efabcbdd9af76130" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "between 1900s and 2010s" ], "yes_no": null, "free_form_answer": "", "evidence": [ "time: decade (classes between 1900s and 2010s) and year representative of the time when the genre became meainstream" ], "highlighted_evidence": [ "time: decade (classes between 1900s and 2010s) and year representative of the time when the genre became meainstream" ] } ], "annotation_id": [ "a80d4759b36ea33d0349b1ba76d68fc3c14a235c" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "77 genres" ], "yes_no": null, "free_form_answer": "", "evidence": [ "From the description of music genres provided above emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in figure FIGREF1.", "From a computational perspective, genres are classes and, although can be treated by machine learning algorithms, they do not include information about the relations between them. In order to formalize the relations between genres for computing purposes, we define a continuous genre scale from the most experimental and introverted super-genre to the most euphoric and inclusive one. 
We selected from Wikipedia the 77 genres that we mentioned in bold in the previous paragraph and asked to two independent raters to read the Wikipedia pages of the genres, listen to samples or artists of the genres (if they did not know already) and then annotate the following dimensions:" ], "highlighted_evidence": [ "From the description of music genres provided above emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in figure FIGREF1.\n\nFrom a computational perspective, genres are classes and, although can be treated by machine learning algorithms, they do not include information about the relations between them. In order to formalize the relations between genres for computing purposes, we define a continuous genre scale from the most experimental and introverted super-genre to the most euphoric and inclusive one. We selected from Wikipedia the 77 genres that we mentioned in bold in the previous paragraph and asked to two independent raters to read the Wikipedia pages of the genres, listen to samples or artists of the genres (if they did not know already) and then annotate the following dimensions:" ] } ], "annotation_id": [ "378a844d9b821fadd1738e9c6623a738f34e1b05" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
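The Data Description section above reports inter-annotator agreement with Cronbach's alpha (average α = 0.793 across features, α = 0.957 for the genre scale). The sketch below shows the standard two-rater computation that such figures could come from, treating each rater's scores over the annotated genres as one item; the score matrix is invented for illustration and the paper does not publish its exact agreement code.

```python
import numpy as np


def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (n_genres x n_raters) score matrix.

    Each row is one genre, each column one rater's score for a given feature
    (e.g., the genre scale); the two raters play the role of items.
    """
    n_items = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1)
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)


# Toy genre-scale scores from two hypothetical raters (values in [0, 1]).
scores = np.array([
    [0.10, 0.15],
    [0.20, 0.20],
    [0.50, 0.45],
    [0.70, 0.70],
    [0.90, 0.85],
    [0.95, 1.00],
])
print(round(cronbach_alpha(scores), 3))
```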
{ "caption": [ "Fig. 1. Distribution of genre derivation by super-genres and decade.", "Fig. 2. Distributions of some of the features annotated in the dataset.", "Fig. 3. Trend lines (dashed) of the MUSIC features from 1900.", "TABLE I. RESULTS. *=SCORES CONSIDERED FOR COMPUTING AVG ACCURACY" ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "5-Figure3-1.png", "5-TableI-1.png" ] }
2004.02929
An Annotated Corpus of Emerging Anglicisms in Spanish Newspaper Headlines
The extraction of anglicisms (lexical borrowings from English) is relevant both for lexicographic purposes and for NLP downstream tasks. We introduce a corpus of European Spanish newspaper headlines annotated with anglicisms and a baseline model for anglicism extraction. In this paper we present: (1) a corpus of 21,570 newspaper headlines written in European Spanish annotated with emergent anglicisms and (2) a conditional random field baseline model with handcrafted features for anglicism extraction. We present the newspaper headlines corpus, describe the annotation tagset and guidelines and introduce a CRF model that can serve as baseline for the task of detecting anglicisms. The presented work is a first step towards the creation of an anglicism extractor for Spanish newswire.
{ "section_name": [ "Introduction", "Related Work", "Anglicism: Scope of the Phenomenon", "Corpus description and annotation ::: Corpus description", "Corpus description and annotation ::: Corpus description ::: Main Corpus", "Corpus description and annotation ::: Corpus description ::: Supplemental Test Set", "Corpus description and annotation ::: Annotation guidelines", "Baseline Model", "Results", "Future Work", "Conclusions", "Acknowledgements", "Language Resource References" ], "paragraphs": [ [ "The study of English influence in the Spanish language has been a hot topic in Hispanic linguistics for decades, particularly concerning lexical borrowing or anglicisms BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6.", "Lexical borrowing is a phenomenon that affects all languages and constitutes a productive mechanism for word-formation, especially in the press. chesleypaulapredicting2010 estimated that a reader of French newspapers encountered a new lexical borrowing for every 1,000 words. In Chilean newspapers, lexical borrowings account for approximately 30% of neologisms, 80% of those corresponding to English loanwords BIBREF7.", "Detecting lexical borrowings is relevant both for lexicographic purposes and for NLP downstream tasks BIBREF8, BIBREF9. However, strategies to track and register lexical borrowings have traditionally relied on manual review of corpora.", "In this paper we present: (1) a corpus of newspaper headlines in European Spanish annotated with emerging anglicisms and (2) a CRF baseline model for anglicism automatic extraction in Spanish newswire." ], [ "Corpus-based studies of English borrowings in Spanish media have traditionally relied on manual evaluation of either previously compiled general corpora such as CREA BIBREF10, BIBREF11, BIBREF12, BIBREF13, either new tailor-made corpora designed to analyze specific genres, varieties or phenomena BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20.", "In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a Machine Learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. 
moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism cadidates from a corpus of tweets in US Spanish.", "Work within the code-switching community has also dealt with language identification on multilingual corpora. Due to the nature of code-switching, these models have primarily focused on oral copora and social media datasets BIBREF22, BIBREF23, BIBREF24. In the last shared task of language identification in code-switched data BIBREF23, approaches to English-Spanish included CRFs models BIBREF25, BIBREF26, BIBREF27, BIBREF28, logistic regression BIBREF29 and LSTMs models BIBREF30, BIBREF31.", "The scope and nature of lexical borrowing is, however, somewhat different to that of code-switching. In fact, applying code-switching models to lexical borrowing detection has previously proved to be unsuccessful, as they tend to overestimate the number of anglicisms BIBREF32. In the next section we address the differences between both phenomena and set the scope of this project." ], [ "Linguistic borrowing can be defined as the transference of linguistic elements between two languages. Borrowing and code-switching have frequently been described as a continuum BIBREF33, with a fuzzy frontier between the two. As a result, a precise definition of what borrowing is remains elusive BIBREF34 and some authors prefer to talk about code-mixing in general BIBREF35 or “lone other-language incorporations\" BIBREF36.", "Lexical borrowing in particular involves the incorporation of single lexical units from one language into another language and is usually accompanied by morphological and phonological modification to conform with the patterns of the recipient language BIBREF37, BIBREF38. By definition, code-switches are not integrated into a recipient language, unlike established loanwords BIBREF39. While code-switches are usually fluent multiword interferences that normally comply with grammatical restrictions in both languages and that are produced by bilingual speakers in bilingual discourses, lexical borrowings are words used by monolingual individuals that eventually become lexicalized and assimilated as part of the recipient language lexicon until the knowledge of “foreign\" origin disappears BIBREF40.", "In terms of approaching the problem, automatic code-switching identification has been framed as a sequence modeling problem where every token receives a language ID label (as in a POS-tagging task). Borrowing detection, on the other hand, while it can also be transformed into a sequence labeling problem, is an extraction task, where only certain spans of texts will be labeled (in the fashion of a NER task).", "Various typologies have been proposed that aim to classify borrowings according to different criteria, both with a cross-linguistic perspective and also specifically aimed to characterize English inclusions in Spanish BIBREF34, BIBREF41, BIBREF42, BIBREF5. In this work, we will be focusing on unassimilated lexical borrowings (sometimes called foreignisms), i.e. words from English origin that are introduced into Spanish without any morphological or orthographic adaptation." ], [ "In this subsection we describe the characteristics of the corpus. We first introduce the main corpus, with the usual train/development/test split that was used to train, tune and evaluate the model. We then present an additional test set that was designed to assess the performance of the model on more naturalistic data." 
], [ "The main corpus consists of a collection of monolingual newspaper headlines written in European Spanish. The corpus contains 16,553 headlines, which amounts to 244,114 tokens. Out of those 16,553 headlines, 1,109 contain at least one anglicism. The total number of anglicisms is 1,176 (most of them are a single word, although some of them were multiword expressions). The corpus was divided into training, development and test set. The proportions of headlines, tokens and anglicisms in each corpus split can be found in Table TABREF6.", "The headlines in this corpus come from the Spanish newspaper eldiario.es, a progressive online newspaper based in Spain. eldiario.es is one of the main national newspapers from Spain and, to the best of our knowledge, the only one that publishes its content under a Creative Commons license, which made it ideal for making the corpus publicly available.", "The headlines were extracted from the newspaper website through web scraping and range from September 2012 to January 2020. Only the following sections were included: economy, technology, lifestyle, music, TV and opinion. These sections were chosen as they were the most likely to contain anglicisms. The proportion of headlines with anglicisms per section can be found in Table TABREF7.", "Using headlines (instead of full articles) was beneficial for several reasons. First of all, annotating a headline is faster and easier than annotating a full article; this helps ensure that a wider variety of topics will be covered in the corpus. Secondly, anglicisms are abundant in headlines, because they are frequently used as a way of calling the attention of the reader BIBREF43. Finally, borrowings that make it to the headline are likely to be particularly salient or relevant, and therefore are good candidates for being extracted and tracked." ], [ "In addition to the usual train/development/test split we have just presented, a supplemental test set of 5,017 headlines was collected. The headlines included in this additional test set also belong to eldiario.es. These headlines were retrieved daily through RSS during February 2020 and included all sections from the newspaper. The headlines in the supplemental corpus therefore do not overlap in time with the main corpus and include more sections. The number of headlines, tokens and anglicisms in the supplemental test set can be found in Table TABREF6.", "The motivation behind this supplemental test set is to assess the model performance on more naturalistic data, as the headlines in the supplemental corpus (1) belong to the future of the main corpus and (2) come from a less borrowing-dense sample. This supplemental test set better mimics the real scenario that an actual anglicism extractor would face and can be used to assess how well the model generalizes to detect anglicisms in any section of the daily news, which is ultimately the aim of this project." ], [ "The term anglicism covers a wide range of linguistic phenomena. Following the typology proposed by gomez1997towards, we focused on direct, unadapted, emerging Anglicisms, i.e. lexical borrowings from the English language into Spanish that have recently been imported and that have still not been assimilated into Spanish. 
Other phenomena such as semantic calques, syntactic anglicisms, acronyms and proper names were considered beyond the scope of this annotation project.", "Lexical borrowings can be adapted (the spelling of the word is modified to comply with the phonological and orthographic patterns of the recipient language) or unadapted (the word preserves its original spelling). For this annotation task, adapted borrowings were ignored and only unadapted borrowings were annotated. Therefore, Spanish adaptations of anglicisms like fútbol (from football), mitin (from meeting) and such were not annotated as borrowings. Similarly, words derived from foreign lexemes that do not comply with Spanish orthotactics but that have been morphologically derived following the Spanish paradigm (hacktivista, hackear, shakespeariano) were not annotated either. However, pseudo-anglicisms (words that are formed as if they were English, but do not exist in English, such as footing or balconing) were annotated.", "Words that were not adapted but whose original spelling complies with graphophonological rules of Spanish (and are therefore unlikely to be ever adapted, such as web, internet, fan, club, videoclip) were annotated or not depending on how recent or emergent they were. After all, a word like club, that has been around in Spanish language for centuries, cannot be considered emergent anymore and, for this project, would not be as interesting to retrieve as real emerging anglicisms. The notion of emergent is, however, time-dependent and quite subjective: in order to determine which unadapted, graphophonologically acceptable borrowings were to be annotated, the online version of the Diccionario de la lengua española dle was consulted. This dictionary is compiled by the Royal Spanish Academy, a prescriptive institution on Spanish language. This decision was motivated by the fact that, if a borrowing was already registered by this dictionary (that has conservative approach to language change) and is considered assimilated (that is, the institution recommended no italics or quotation marks to write that word) then it could be inferred that the word was not emergent anymore.", "Although the previous guidelines covered most cases, they proved insufficient. Some anglicisms were unadapted (they preserved their original spelling), unacceptable according to the Spanish graphophonological rules, and yet did not satisfy the condition of being emergent. That was the case of words like jazz or whisky, words that do not comply with Spanish graphophonological rules but that were imported decades ago, cannot be considered emergent anymore and are unlikely to ever be adapted into the Spanish spelling system. To adjudicate these examples on those cases, the criterion of pragmatic markedness proposed by winter2012proposing (that distinguishes between catachrestic and non-catachrestic borrowing) was applied: if a borrowing was not adapted (i.e. its form remained exactly as it came from English) but referred to a particular invention or innovation that came via the English language, that was not perceived as new anymore and that had never competed with a Spanish equivalent, then it was ignored. This criteria proved to be extremely useful to deal with old unadapted anglicisms in the fields of music and food. Figure 1 summarizes the decision steps followed during the annotation process.", "The corpus was annotated by a native speaker of Spanish using Doccano doccano. 
The annotation tagset includes two labels: ENG, to annotate the English borrowings just described, and OTHER. This OTHER tag was used to tag lexical borrowings from languages other than English. After all, although English is today by far the most prevalent donor of borrowings, there are other languages that also provide new borrowings to Spanish. Furthermore, the tag OTHER allows to annotate borrowings such as première or tempeh, borrowings that etymologically do not come from English but that have entered the Spanish language via English influence, even when their spelling is very different to English borrowings. In general, we considered that having such a tag could also help assess how successful a classifier is detecting foreign borrowings in general in Spanish newswire (without having to create a label for every possible donor language, as the number of examples would be too sparse). In total, the training set contained 40 entities labeled as OTHER, the development set contained 14 and the test set contained 13. The supplemental test set contained 35 OTHER entities." ], [ "A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicism (in a similar way to an NER task). The chosen model was conditional random field model (CRF), which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24.", "The model was built using pycrfsuite korobov2014python, the Python wrapper for crfsuite CRFsuite that implements CRF for labeling sequential data. It also used the Token and Span utilities from spaCy library honnibal2017spacy.", "The following handcrafted features were used for the model:", "Bias feature", "Token feature", "Uppercase feature (y/n)", "Titlecase feature (y/n)", "Character trigram feature", "Quotation feature (y/n)", "Word suffix feature (last three characters)", "POS tag (provided by spaCy utilities)", "Word shape (provided by spaCy utilities)", "Word embedding (see Table TABREF26)", "Given that anglicisms can be multiword expressions (such as best seller, big data) and that those units should be treated as one borrowing and not as two independent borrowings, we used multi-token BIO encoding to denote the boundaries of each span BIBREF44. A window of two tokens in each direction was set for the feature extractor. The algorithm used was gradient descent with the L-BFGS method.", "The model was tuned on the development set doing grid search; the hyperparameters considered were c1 (L1 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), c2 (L2 regularization coefficient: $0.01$, $0.05$, $0.1$, $0.5$, $1.0$), embedding scaling ($0.5$, $1.0$, $2.0$, $4.0$), and embedding type bojanowski2017enriching,josecanete20193255001,cardellinoSBWCE,grave2018learning,honnibal2017spacy,perezfasttext,perezglove (see Table TABREF26). The best results were obtained with c1 = $0.05$, c2 = $0.01$, scaling = $0.5$ and word2vec Spanish embeddings by cardellinoSBWCE. 
The threshold for the stopping criterion delta was selected through observing the loss during preliminary experiments (delta = $1\\mathrm {e}-3$).", "In order to assess the significance of the the handcrafted features, a feature ablation study was done on the tuned model, ablating one feature at a time and testing on the development set. Due to the scarcity of spans labeled with the OTHER tag on the development set (only 14) and given that the main purpose of the model is to detect anglicisms, the baseline model was run ignoring the OTHER tag both during tuning and the feature ablation experiments. Table TABREF27 displays the results on the development set with all features and for the different feature ablation runs. The results show that all features proposed for the baseline model contribute to the results, with the character trigram feature being the one that has the biggest impact on the feature ablation study." ], [ "The baseline model was then run on the test set and the supplemental test set with the set of features and hyperparameters mentioned on Section SECREF5 Table TABREF28 displays the results obtained. The model was run both with and without the OTHER tag. The metrics for ENG display the results obtained only for the spans labeled as anglicisms; the metrics for OTHER display the results obtained for any borrowing other than anglicisms. The metrics for BORROWING discard the type of label and consider correct any labeled span that has correct boundaries, regardless of the label type (so any type of borrowing, regardless if it is ENG or OTHER). In all cases, only full matches were considered correct and no credit was given to partial matching, i.e. if only fake in fake news was retrieved, it was considered wrong and no partial score was given.", "Results on all sets show an important difference between precision and recall, precision being significantly higher than recall. There is also a significant difference between the results obtained on development and test set (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49). The time difference between the supplemental test set and the development and test set (the headlines from the the supplemental test set being from a different time period to the training set) can probably explain these differences.", "Comparing the results with and without the OTHER tag, it seems that including it on the development and test set produces worse results (or they remain roughly the same, at best). However, the best precision result on the supplemental test was obtained when including the OTHER tag and considering both ENG and OTHER spans as BORROWING (precision = 87.62). This is caused by the fact that, while the development and test set were compiled from anglicism-rich newspaper sections (similar to the training set), the supplemental test set contained headlines from all the sections in the newspaper, and therefore included borrowings from other languages such as Catalan, Basque or French. When running the model without the OTHER tag on the supplemental test set, these non-English borrowings were labeled as anglicisms by the model (after all, their spelling does not resemble Spanish spelling), damaging the precision score. When the OTHER tag was included, these non-English borrowings got correctly labeled as OTHER, improving the precision score. 
This proves that, although the OTHER tag might be irrelevant or even damaging when testing on the development or test set, it can be useful when testing on more naturalistic data, such as the one in the supplemental test set.", "Concerning errors, two types of errors were recurrent among all sets: long titles of songs, films or series written in English were a source of false positives, as the model tended to mistake some of the uncapitalized words in the title for anglicisms (for example, it darker in “`You want it darker', la oscura y brillante despedida de Leonard Cohen\"). On the other hand, anglicisms that appear on the first position of the sentence (and were, therefore, capitalized) were consistently ignored (as the model probably assumed they were named entities) and produced a high number of false negatives (for example, vamping in “Vamping: la recurrente leyenda urbana de la luz azul `asesina'\").", "The results on Table TABREF28 cannot, however, be compared to the ones reported by previous work: the metric that we report is span F-measure, as the evaluation was done on span level (instead of token level) and credit was only given to full matches. Secondly, there was no Spanish tag assigned to non-borrowings, that means that no credit was given if a Spanish token was identified as such." ], [ "This is an on-going project. The corpus we have just presented is a first step towards the development of an extractor of emerging anglicisms in the Spanish press. Future work includes: assessing whether to keep the OTHER tag, improving the baseline model (particularly to improve recall), assessing the suitability and contribution of different sets of features and exploring different models. In terms of the corpus development, the training set is now closed and stable, but the test set could potentially be increased in order to have more and more diverse anglicisms." ], [ "In this paper we have presented a new corpus of 21,570 newspaper headlines written in European Spanish. The corpus is annotated with emergent anglicisms and, up to our very best knowledge, is the first corpus of this type to be released publicly. We have presented the annotation scope, tagset and guidelines, and we have introduced a CRF baseline model for anglicism extraction trained with the described corpus. The results obtained show that the the corpus and baseline model are appropriate for automatic anglicism extraction." ], [ "The author would like to thank Constantine Lignos for his feedback and advice on this project." ], [ "lrec" ] ] }
{ "question": [ "Does the paper mention other works proposing methods to detect anglicisms in Spanish?", "What is the performance of the CRF model on the task described?", "Does the paper motivate the use of CRF as the baseline model?", "What are the handcrafted features used?" ], "question_id": [ "bbc58b193c08ccb2a1e8235a36273785a3b375fb", "3c34187a248d179856b766e9534075da1aa5d1cf", "8bfbf78ea7fae0c0b8a510c9a8a49225bbdb5383", "97757a69d9fc28b260e68284fd300726fbe358d0" ], "nlp_background": [ "two", "two", "two", "two" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "Spanish", "Spanish", "Spanish", "Spanish" ], "question_writer": [ "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c", "486a870694ba60f1a1e7e4ec13e328164cd4b43c" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": true, "free_form_answer": "", "evidence": [ "In terms of automatic detection of anglicisms, previous approaches in different languages have mostly depended on resource lookup (lexicon or corpus frequencies), character n-grams and pattern matching. alex-2008-comparing combined lexicon lookup and a search engine module that used the web as a corpus to detect English inclusions in a corpus of German texts and compared her results with a maxent Markov model. furiassi2007retrieval explored corpora lookup and character n-grams to extract false anglicisms from a corpus of Italian newspapers. andersen2012semi used dictionary lookup, regular expressions and lexicon-derived frequencies of character n-grams to detect anglicism candidates in the Norwegian Newspaper Corpus (NNC) BIBREF21, while losnegaard2012data explored a Machine Learning approach to anglicism detection in Norwegian by using TiMBL (Tilburg Memory-Based Learner, an implementation of a k-nearest neighbor classifier) with character trigrams as features. garley-hockenmaier-2012-beefmoves trained a maxent classifier with character n-gram and morphological features to identify anglicisms in German online communities. In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism cadidates from a corpus of tweets in US Spanish." ], "highlighted_evidence": [ "In Spanish, serigos2017using extracted anglicisms from a corpus of Argentinian newspapers by combining dictionary lookup (aided by TreeTagger and the NLTK lemmatizer) with automatic filtering of capitalized words and manual inspection. In serigos2017applying, a character n-gram module was added to estimate the probabilities of a word being English or Spanish. moreno2018configuracion used different pattern-matching filters and lexicon lookup to extract anglicism cadidates from a corpus of tweets in US Spanish." 
] } ], "annotation_id": [ "6659120a160637bc0a918c06864ee3562ba1d6c3" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "the results obtained on development and test set (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Results on all sets show an important difference between precision and recall, precision being significantly higher than recall. There is also a significant difference between the results obtained on development and test set (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49). The time difference between the supplemental test set and the development and test set (the headlines from the the supplemental test set being from a different time period to the training set) can probably explain these differences." ], "highlighted_evidence": [ "Results on all sets show an important difference between precision and recall, precision being significantly higher than recall. There is also a significant difference between the results obtained on development and test set (F1 = 89.60, F1 = 87.82) and the results on the supplemental test set (F1 = 71.49). The time difference between the supplemental test set and the development and test set (the headlines from the the supplemental test set being from a different time period to the training set) can probably explain these differences." ] } ], "annotation_id": [ "0e51b63912c1e226920969d4c5e4df421f0d4f5d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicism (in a similar way to an NER task). The chosen model was conditional random field model (CRF), which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data" ], "yes_no": null, "free_form_answer": "", "evidence": [ "A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicism (in a similar way to an NER task). The chosen model was conditional random field model (CRF), which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24." ], "highlighted_evidence": [ "A baseline model for automatic extraction of anglicisms was created using the annotated corpus we just presented as training material. As mentioned in Section 3, the task of detecting anglicisms can be approached as a sequence labeling problem where only certain spans of texts will be labeled as anglicism (in a similar way to an NER task). The chosen model was conditional random field model (CRF), which was also the most popular model in both Shared Tasks on Language Identification for Code-Switched Data BIBREF23, BIBREF24." 
] } ], "annotation_id": [ "2c6a14f49fb150117feff966cd46e24a3fbe290d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "Bias feature", "Token feature", "Uppercase feature (y/n)", "Titlecase feature (y/n)", "Character trigram feature", "Quotation feature (y/n)", "Word suffix feature (last three characters)", "POS tag (provided by spaCy utilities)", "Word shape (provided by spaCy utilities)", "Word embedding (see Table TABREF26)" ], "yes_no": null, "free_form_answer": "", "evidence": [ "The following handcrafted features were used for the model:", "Bias feature", "Token feature", "Uppercase feature (y/n)", "Titlecase feature (y/n)", "Character trigram feature", "Quotation feature (y/n)", "Word suffix feature (last three characters)", "POS tag (provided by spaCy utilities)", "Word shape (provided by spaCy utilities)", "Word embedding (see Table TABREF26)" ], "highlighted_evidence": [ "The following handcrafted features were used for the model:\n\nBias feature\n\nToken feature\n\nUppercase feature (y/n)\n\nTitlecase feature (y/n)\n\nCharacter trigram feature\n\nQuotation feature (y/n)\n\nWord suffix feature (last three characters)\n\nPOS tag (provided by spaCy utilities)\n\nWord shape (provided by spaCy utilities)\n\nWord embedding (see Table TABREF26)" ] } ], "annotation_id": [ "df664b326a137ccc34b19edcd8e4102fe397628d" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] } ] }
{ "caption": [ "Table 1: Number of headlines, tokens and anglicisms per corpus subset.", "Table 2: Percentage of headlines with anglicisms per section.", "Figure 1: Decision steps to follow during the annotation process to decide whether to annotate a word as a borrowing.", "Table 3: Types of embeddings tried.", "Table 4: Ablation study results on the development test.", "Table 5: Results on test set and supplemental test set." ], "file": [ "2-Table1-1.png", "2-Table2-1.png", "4-Figure1-1.png", "4-Table3-1.png", "4-Table4-1.png", "5-Table5-1.png" ] }
1908.06809
Style Transfer for Texts: to Err is Human, but Error Margins Matter
This paper shows that the standard assessment methodology for style transfer has several significant problems. First, the standard metrics for style accuracy and semantics preservation vary significantly on different re-runs; therefore one has to report error margins for the obtained results. Second, starting from certain values of bilingual evaluation understudy (BLEU) between input and output and of the accuracy of the sentiment transfer, the optimization of these two standard metrics diverges from the intuitive goal of the style transfer task. Finally, due to the nature of the task itself, there is a specific dependence between these two metrics that could be easily manipulated. Under these circumstances, we suggest taking BLEU between output and human-written reformulations into consideration for benchmarks. We also propose three new architectures that outperform the state of the art in terms of this metric.
{ "section_name": [ "Introduction", "Related Work", "Style transfer", "Experiments", "Experiments ::: Error margins matter", "Experiments ::: Delete, duplicate and conquer", "Conclusion", "Supplemental Material" ], "paragraphs": [ [ "Deep generative models attract a lot of attention in recent years BIBREF0. Such methods as variational autoencoders BIBREF1 or generative adversarial networks BIBREF2 are successfully applied to a variety of machine vision problems including image generation BIBREF3, learning interpretable image representations BIBREF4 and style transfer for images BIBREF5. However, natural language generation is more challenging due to many reasons, such as the discrete nature of textual information BIBREF6, the absence of local information continuity and non-smooth disentangled representations BIBREF7. Due to these difficulties, text generation is mostly limited to specific narrow applications and is usually working in supervised settings.", "Content and style are deeply fused in natural language, but style transfer for texts is often addressed in the context of disentangled latent representations BIBREF6, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. Intuitive understanding of this problem is apparent: if an input text has some attribute $A$, a system generates new text similar to the input on a given set of attributes with only one attribute $A$ changed to the target attribute $\\tilde{A}$. In the majority of previous works, style transfer is obtained through an encoder-decoder architecture with one or multiple style discriminators to learn disentangled representations. The encoder takes a sentence as an input and generates a style-independent content representation. The decoder then takes the content representation and the target style representation to generate the transformed sentence. In BIBREF13 authors question the quality and usability of the disentangled representations for texts and suggest an end-to-end approach to style transfer similar to an end-to-end machine translation.", "Contribution of this paper is three-fold: 1) we show that different style transfer architectures have varying results on test and that reporting error margins for various training re-runs of the same model is especially important for adequate assessment of the models accuracy, see Figure FIGREF1; 2) we show that BLEU BIBREF14 between input and output and accuracy of style transfer measured in terms of the accuracy of a pre-trained external style classifier can be manipulated and naturally diverge from the intuitive goal of the style transfer task starting from a certain threshold; 3) new architectures that perform style transfer using improved latent representations are shown to outperform state of the art in terms of BLEU between output and human-written reformulations." ], [ "Style of a text is a very general notion that is hard to define in rigorous terms BIBREF15. However, the style of a text can be characterized quantitatively BIBREF16; stylized texts could be generated if a system is trained on a dataset of stylistically similar texts BIBREF17; and author-style could be learned end-to-end BIBREF18, BIBREF19, BIBREF20. A majority of recent works on style transfer focus on the sentiment of text and use it as a target attribute. For example, in BIBREF21, BIBREF22, BIBREF23 estimate the quality of the style transfer with binary sentiment classifier trained on the corpora further used for the training of the style-transfer system. 
BIBREF24 and especially BIBREF9 generalize this ad-hoc approach defining a style as a set of arbitrary quantitively measurable categorial or continuous parameters. Such parameters could include the 'style of the time' BIBREF16, author-specific attributes (see BIBREF25 or BIBREF26 on 'shakespearization'), politeness BIBREF27, formality of speech BIBREF28, and gender or even political slant BIBREF29.", "A significant challenge associated with narrowly defined style-transfer problems is that finding a good solution for one aspect of a style does not guarantee that you can use the same solution for a different aspect of it. For example, BIBREF30 build a generative model for sentiment transfer with a retrieve-edit approach. In BIBREF21 a delete-retrieve model shows good results for sentiment transfer. However, it is hard to imagine that these retrieval approaches could be used, say, for the style of the time or formality, since in these cases the system is often expected to paraphrase a given sentence to achieve the target style.", "In BIBREF6 the authors propose a more general approach to the controlled text generation combining variational autoencoder (VAE) with an extended wake-sleep mechanism in which the sleep procedure updates both the generator and external classifier that assesses generated samples and feedbacks learning signals to the generator. Authors had concatenated labels for style with the text representation of the encoder and used this vector with \"hard-coded\" information about the sentiment of the output as the input of the decoder. This approach seems promising, and some other papers either extend it or use similar ideas. BIBREF8 applied a GAN to align the hidden representations of sentences from two corpora using an adversarial loss to decompose information about the form. In BIBREF31 model learns a smooth code space and can be used as a discrete GAN with the ability to generate coherent discrete outputs from continuous samples. Authors use two different generators for two different styles. In BIBREF9 an adversarial network is used to make sure that the output of the encoder does not have style representation. BIBREF6 also uses an adversarial component that ensures there is no stylistic information within the representation. BIBREF9 do not use a dedicated component that controls the semantic component of the latent representation. Such a component is proposed by BIBREF10 who demonstrate that decomposition of style and content could be improved with an auxiliary multi-task for label prediction and adversarial objective for bag-of-words prediction. BIBREF11 also introduces a dedicated component to control semantic aspects of latent representations and an adversarial-motivational training that includes a special motivational loss to encourage a better decomposition. Speaking about preservation of semantics one also has to mention works on paraphrase systems, see, for example BIBREF32, BIBREF33, BIBREF34. The methodology described in this paper could be extended to paraphrasing systems in terms of semantic preservation measurement, however, this is the matter of future work.", "BIBREF13 state that learning a latent representation, which is independent of the attributes specifying its style, is rarely attainable. 
There are other works on style transfer that are based on the ideas of neural machine translation with BIBREF35 and without parallel corpora BIBREF36 in line with BIBREF37 and BIBREF38.", "It is important to underline here that majority of the papers dedicated to style transfer for texts treat sentiment of a sentence as a stylistic rather than semantic attribute despite particular concerns BIBREF39. It is also crucial to mention that in line with BIBREF9 majority of the state of the art methods for style transfer use an external pre-trained classifier to measure the accuracy of the style transfer. BLEU computes the harmonic mean of precision of exact matching n-grams between a reference and a target sentence across the corpus. It is not sensitive to minute changes, but BLEU between input and output is often used as the coarse measure of the semantics preservation. For the corpora that have human written reformulations, BLEU between the output of the model and human text is used. These metrics are used alongside with a handful of others such as PINC (Paraphrase In N-gram Changes) score BIBREF35, POS distance BIBREF12, language fluency BIBREF10, etc. Figure FIGREF2 shows self-reported results of different models in terms of two most frequently measured performance metrics, namely, BLEU and Accuracy of the style transfer.", "This paper focuses on Yelp! reviews dataset that was lately enhanced with human written reformulations by BIBREF21. These are Yelp! reviews, where each short English review of a place is labeled as a negative or as a positive once. This paper studies three metrics that are most common in the field at the moment and questions to which extent can they be used for the performance assessment. These metrics are the accuracy of an external style classifier that is trained to measure the accuracy of the style transfer, BLEU between input and output of a system, and BLEU between output and human-written texts." ], [ "In this work we experiment with extensions of a model, described in BIBREF6, using Texar BIBREF40 framework. To generate plausible sentences with specific semantic and stylistic features every sentence is conditioned on a representation vector $z$ which is concatenated with a particular code $c$ that specifies desired attribute, see Figure FIGREF8. Under notation introduced in BIBREF6 the base autoencoder (AE) includes a conditional probabilistic encoder $E$ defined with parameters $\\theta _E$ to infer the latent representation $z$ given input $x$", "Generator $G$ defined with parameters $\\theta _G$ is a GRU-RNN for generating and output $\\hat{x}$ defined as a sequence of tokens $\\hat{x} = {\\hat{x}_1, ..., \\hat{x}_T}$ conditioned on the latent representation $z$ and a stylistic component $c$ that are concatenated and give rise to a generative distribution", "These encoder and generator form an AE with the following loss", "This standard reconstruction loss that drives the generator to produce realistic sentences is combined with two additional losses. The first discriminator provides extra learning signals which enforce the generator to produce coherent attributes that match the structured code in $c$. Since it is impossible to propagate gradients from the discriminator through the discrete sample $\\hat{x}$, we use a deterministic continuous approximation a \"soft\" generated sentence, denoted as $\\tilde{G} = \\tilde{G}_\\tau (z, c)$ with \"temperature\" $\\tau $ set to $\\tau \\rightarrow 0$ as training proceeds. 
The resulting “soft” generated sentence is fed into the discriminator to measure the fitness to the target attribute, leading to the following loss", "Finally, under the assumption that each structured attribute of generated sentences is controlled through the corresponding code in $c$ and is independent from $z$ one would like to control that other not explicitly modelled attributes do not entangle with $c$. This is addressed by the dedicated loss", "The training objective for the baseline, shown in Figure FIGREF8, is therefore a sum of the losses from Equations (DISPLAY_FORM4) – (DISPLAY_FORM6) defined as", "where $\\lambda _c$ and $\\lambda _z$ are balancing parameters.", "Let us propose two further extensions of this baseline architecture. To improve reproducibility of the research the code of the studied models is open. Both extensions aim to improve the quality of information decomposition within the latent representation. In the first one, shown in Figure FIGREF12, a special dedicated discriminator is added to the model to control that the latent representation does not contain stylistic information. The loss of this discriminator is defined as", "Here a discriminator denoted as $D_z$ is trying to predict code $c$ using representation $z$. Combining the loss defined by Equation (DISPLAY_FORM7) with the adversarial component defined in Equation (DISPLAY_FORM10) the following learning objective is formed", "where $\\mathcal {L}_{baseline}$ is a sum defined in Equation (DISPLAY_FORM7), $\\lambda _{D_z}$ is a balancing parameter.", "The second extension of the baseline architecture does not use an adversarial component $D_z$ that is trying to eradicate information on $c$ from component $z$. Instead, the system, shown in Figure FIGREF16 feeds the \"soft\" generated sentence $\\tilde{G}$ into encoder $E$ and checks how close is the representation $E(\\tilde{G} )$ to the original representation $z = E(x)$ in terms of the cosine distance. We further refer to it as shifted autoencoder or SAE. Ideally, both $E(\\tilde{G} (E(x), c))$ and $E(\\tilde{G} (E(x), \\bar{c}))$, where $\\bar{c}$ denotes an inverse style code, should be both equal to $E(x)$. The loss of the shifted autoencoder is", "where $\\lambda _{cos}$ and $\\lambda _{cos^{-}}$ are two balancing parameters, with two additional terms in the loss, namely, cosine distances between the softened output processed by the encoder and the encoded original input, defined as", "We also study a combination of both approaches described above, shown on Figure FIGREF17.", "In Section SECREF4 we describe a series of experiments that we have carried out for these architectures using Yelp! reviews dataset." ], [ "We have found that the baseline, as well as the proposed extensions, have noisy outcomes, when retrained from scratch, see Figure FIGREF1. Most of the papers mentioned in Section SECREF2 measure the performance of the methods proposed for the sentiment transfer with two metrics: accuracy of the external sentiment classifier measured on test data, and BLEU between the input and output that is regarded as a coarse metric for semantic similarity.", "In the first part of this section, we demonstrate that reporting error margins is essential for the performance assessment in terms that are prevalent in the field at the moment, i.e., BLEU between input and output and accuracy of the external sentiment classifier. 
In the second part, we also show that both of these two metrics after a certain threshold start to diverge from an intuitive goal of the style transfer and could be manipulated." ], [ "On Figure FIGREF1 one can see that the outcomes for every single rerun differ significantly. Namely, accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points. This variance can be partially explained with the stochasticity incurred due to sampling from the latent variables. However, we show that results for state of the art models sometimes end up within error margins from one another, so one has to report the margins to compare the results rigorously. More importantly, one can see that there is an inherent trade-off between these two performance metrics. This trade-off is not only visible across models but is also present for the same retrained architecture. Therefore, improving one of the two metrics is not enough to confidently state that one system solves the style-transfer problem better than the other. One has to report error margins after several consecutive retrains and instead of comparing one of the two metrics has to talk about Pareto-like optimization that would show confident improvement of both.", "To put obtained results into perspective, we have retrained every model from scratch five times in a row. We have also retrained the models of BIBREF12 five times since their code is published online. Figure FIGREF19 shows the results of all models with error margins. It is also enhanced with other self-reported results on the same Yelp! review dataset for which no code was published.", "One can see that error margins of the models, for which several reruns could be performed, overlap significantly. In the next subsection, we carefully study BLEU and accuracy of the external classifier and discuss their aptness to measure style transfer performance." ], [ "One can argue that as there is an inevitable entanglement between semantics and stylistics in natural language, there is also an apparent entanglement between BLEU of input and output and accuracy estimation of the style. Indeed, the output that copies input gives maximal BLEU yet clearly fails in terms of the style transfer. On the other hand, a wholly rephrased sentence could provide a low BLEU between input and output but high accuracy. These two issues are not problematic when both BLEU between input and output and accuracy of the transfer are relatively low. However, since style transfer methods have significantly evolved in recent years, some state of the art methods are now sensitive to these issues. The trade-off between these two metrics can be seen in Figure FIGREF1 as well as in Figure FIGREF19.", "As we have mentioned above, the accuracy of an external classifier and BLEU between output and input are the most widely used methods to assess the performance of style transfer at this moment. However, both of these metrics can be manipulated in a relatively simple manner. 
One can extend the generative architecture with an internal pre-trained style classifier and then perform the following heuristic procedure:", "measure the style accuracy on the output for a given batch;", "choose the sentences that the style classifier labels as incorrect;", "replace them with duplicates of sentences from the given batch that have correct style according to the internal classifier and show the highest BLEU with given inputs.", "This way one can replace all sentences that push measured accuracy down and boost reported accuracy to 100%. To see the effect that this manipulation has on the key performance metric, we split all sentences with the wrong style into 10 groups of equal size and replace them, group after group, with the best possible duplicates of the stylistically correct sentences. The results of this process are shown in Figure FIGREF24.", "This result is disconcerting. Simply replacing part of the output with duplicates of the sentences that happen to have relatively high BLEU with given inputs allows one to \"boost\" accuracy to 100% and \"improve\" BLEU. The change of BLEU during such manipulation stays within the error margins of the architecture, but accuracy is significantly manipulated. What is even more disturbing is that BLEU between such manipulated output of the batch and the human-written reformulations provided in BIBREF12 also grows. Figure FIGREF24 shows this for SAE, but all four architectures described in Section SECREF3 demonstrate similar behavior.", "Our experiments show that though we can manipulate BLEU between output and human-written text, it tends to change monotonically. That might be because this metric incorporates information on the stylistics and semantics of the text at the same time, preserving the inevitable entanglement that we have mentioned earlier. Despite being costly, human-written reformulations are needed for future experiments with style transfer. It seems that modern architectures have reached a certain level of complexity for which naive proxy metrics such as the accuracy of an external classifier or BLEU between output and input are no longer enough for performance estimation and should be combined with BLEU between output and human-written texts. As the quality of style transfer grows further, one has to improve the human-written data sets: for example, one would like to have data sets similar to the ones used for machine translation, with several reformulations of the same sentence.", "Figure FIGREF25 shows how the newly proposed architectures compare with other state of the art approaches in terms of BLEU between output and human-written reformulations." ], [ "Style transfer is not a rigorously defined NLP problem, starting from the definitions of style and semantics and finishing with the metrics that could be used to evaluate the performance of a proposed system. There is a surge of recent contributions that work on this problem. This paper highlights several issues connected with this lack of rigor. First, it shows that the state of the art algorithms are inherently noisy on the two most widely accepted metrics, namely, BLEU between input and output and accuracy of the external style classifier. This noise can be partially attributed to the adversarial components that are often used in the state of the art architectures and partly to certain methodological inconsistencies in the assessment of the performance.
Second, it shows that reporting error margins of several consecutive retrains for the same model is crucial for the comparison of different architectures, since error margins for some of the models overlap significantly. Finally, it demonstrates that even BLEU on human-written reformulations can be manipulated in a relatively simple way." ], [ "Here are some examples characteristic for different systems. An output of a system follows the input. Here are some successful examples produced by the system with additional discriminator:", "it's not much like an actual irish pub, which is depressing. $\\rightarrow $ it's definitely much like an actual irish pub, which is grateful.", "i got a bagel breakfast sandwich and it was delicious! $\\rightarrow $ i got a bagel breakfast sandwich and it was disgusting!", "i love their flavored coffee. $\\rightarrow $ i dumb their flavored coffee.", "i got a bagel breakfast sandwich and it was delicious! $\\rightarrow $ i got a bagel breakfast sandwich and it was disgusting!", "i love their flavored coffee. $\\rightarrow $ i dumb their flavored coffee.", "nice selection of games to play. $\\rightarrow $ typical selection of games to play.", "i'm not a fan of huge chain restaurants. $\\rightarrow $ i'm definitely a fan of huge chain restaurants.", "Here are some examples of typical faulty reformulations:", "only now i'm really hungry, and really pissed off. $\\rightarrow $ kids now i'm really hungry, and really extraordinary off.", "what a waste of my time and theirs. $\\rightarrow $ what a wow. of my time and theirs.", "cooked to perfection and very flavorful. $\\rightarrow $ cooked to pain and very outdated.", "the beer was nice and cold! $\\rightarrow $ the beer was nice and consistant!", "corn bread was also good! $\\rightarrow $ corn bread was also unethical bagged", "Here are some successful examples produced by the SAE:", "our waitress was the best, very accommodating. $\\rightarrow $ our waitress was the worst, very accommodating.", "great food and awesome service! $\\rightarrow $ horrible food and nasty service!", "their sandwiches were really tasty. $\\rightarrow $ their sandwiches were really bland.", "i highly recommend the ahi tuna. $\\rightarrow $ i highly hated the ahi tuna.", "other than that, it's great! $\\rightarrow $ other than that, it's horrible!", "Here are some examples of typical faulty reformulations by SAE:", "good drinks, and good company. $\\rightarrow $ 9:30 drinks, and 9:30 company.", "like it's been in a fridge for a week. $\\rightarrow $ like it's been in a fridge for a true.", "save your money & your patience. $\\rightarrow $ save your smile & your patience.", "no call, no nothing. $\\rightarrow $ deliciously call, deliciously community.", "sounds good doesn't it? $\\rightarrow $ sounds good does keeps it talented", "Here are some successful examples produced by the SAE with additional discriminator:", "best green corn tamales around. $\\rightarrow $ worst green corn tamales around.", "she did the most amazing job. $\\rightarrow $ she did the most desperate job.", "very friendly staff and manager. $\\rightarrow $ very inconsistent staff and manager.", "even the water tasted horrible. $\\rightarrow $ even the water tasted great.", "go here, you will love it. $\\rightarrow $ go here, you will avoid it.", "Here are some examples of typical faulty reformulations by the SAE with additional discriminator:", "_num_ - _num_ % capacity at most , i was the only one in the pool. 
$\\rightarrow $ sweetness - stylish % fountains at most, i was the new one in the", "this is pretty darn good pizza! $\\rightarrow $ this is pretty darn unsafe pizza misleading", "enjoyed the dolly a lot. $\\rightarrow $ remove the shortage a lot.", "so, it went in the trash. $\\rightarrow $ so, it improved in the hooked.", "they are so fresh and yummy. $\\rightarrow $ they are so bland and yummy." ] ] }
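The accuracy-boosting heuristic described in the Experiments section above (swap every wrongly-styled output for the correctly-styled sentence from the same batch that has the highest BLEU with the input) comes down to a few lines. In the sketch below, is_correct_style and bleu stand in for the internal style classifier and a sentence-level BLEU function; both names are assumptions of this illustration, not the authors' code.

```python
def manipulate_batch(inputs, outputs, is_correct_style, bleu):
    """For every output the internal classifier labels as wrongly styled,
    substitute the correctly-styled output from the same batch that has
    the highest BLEU against the corresponding input sentence."""
    correct = [out for out in outputs if is_correct_style(out)]
    manipulated = []
    for src, out in zip(inputs, outputs):
        if is_correct_style(out) or not correct:
            manipulated.append(out)
        else:
            manipulated.append(max(correct, key=lambda cand: bleu(src, cand)))
    return manipulated
```

Reported style accuracy then trivially approaches 100% while BLEU with the inputs stays within the error margins reported above, which is exactly the failure mode the paper warns about.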
{ "question": [ "What is state of the art method?", "By how much do proposed architectures autperform state-of-the-art?", "What are three new proposed architectures?", "How much does the standard metrics for style accuracy vary on different re-runs?" ], "question_id": [ "41830ebb8369a24d490e504b7cdeeeaa9b09fd9c", "4904ef32a8f84cf2f53b1532ccf7aa77273b3d19", "45b28a6ce2b0f1a8b703a3529fd1501f465f3fdf", "d6a27c41c81f12028529e97e255789ec2ba39eaa" ], "nlp_background": [ "zero", "zero", "zero", "zero" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no", "no" ], "search_query": [ "", "", "", "" ], "question_writer": [ "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c", "258ee4069f740c400c0049a2580945a1cc7f044c" ], "answers": [ { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "0e5353fd8bdcfa9e88ae4f56a6dd4a8ad4fa8b53" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": true, "extractive_spans": [], "yes_no": null, "free_form_answer": "", "evidence": [], "highlighted_evidence": [] } ], "annotation_id": [ "f9c8da0dbb3a584de3f589b365bf5e06be29e951" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "special dedicated discriminator is added to the model to control that the latent representation does not contain stylistic information", "shifted autoencoder or SAE", "combination of both approaches" ], "yes_no": null, "free_form_answer": "", "evidence": [ "Let us propose two further extensions of this baseline architecture. To improve reproducibility of the research the code of the studied models is open. Both extensions aim to improve the quality of information decomposition within the latent representation. In the first one, shown in Figure FIGREF12, a special dedicated discriminator is added to the model to control that the latent representation does not contain stylistic information. The loss of this discriminator is defined as", "The second extension of the baseline architecture does not use an adversarial component $D_z$ that is trying to eradicate information on $c$ from component $z$. Instead, the system, shown in Figure FIGREF16 feeds the \"soft\" generated sentence $\\tilde{G}$ into encoder $E$ and checks how close is the representation $E(\\tilde{G} )$ to the original representation $z = E(x)$ in terms of the cosine distance. We further refer to it as shifted autoencoder or SAE. Ideally, both $E(\\tilde{G} (E(x), c))$ and $E(\\tilde{G} (E(x), \\bar{c}))$, where $\\bar{c}$ denotes an inverse style code, should be both equal to $E(x)$. The loss of the shifted autoencoder is", "We also study a combination of both approaches described above, shown on Figure FIGREF17." ], "highlighted_evidence": [ "In the first one, shown in Figure FIGREF12, a special dedicated discriminator is added to the model to control that the latent representation does not contain stylistic information.", "The second extension of the baseline architecture does not use an adversarial component $D_z$ that is trying to eradicate information on $c$ from component $z$. 
Instead, the system, shown in Figure FIGREF16 feeds the \"soft\" generated sentence $\\tilde{G}$ into encoder $E$ and checks how close is the representation $E(\\tilde{G} )$ to the original representation $z = E(x)$ in terms of the cosine distance. We further refer to it as shifted autoencoder or SAE.", "We also study a combination of both approaches described above, shown on Figure FIGREF17." ] } ], "annotation_id": [ "5d57323f16d63938142bf03202015eb90a72fb35" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [ "accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points" ], "yes_no": null, "free_form_answer": "", "evidence": [ "On Figure FIGREF1 one can see that the outcomes for every single rerun differ significantly. Namely, accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points. This variance can be partially explained with the stochasticity incurred due to sampling from the latent variables. However, we show that results for state of the art models sometimes end up within error margins from one another, so one has to report the margins to compare the results rigorously. More importantly, one can see that there is an inherent trade-off between these two performance metrics. This trade-off is not only visible across models but is also present for the same retrained architecture. Therefore, improving one of the two metrics is not enough to confidently state that one system solves the style-transfer problem better than the other. One has to report error margins after several consecutive retrains and instead of comparing one of the two metrics has to talk about Pareto-like optimization that would show confident improvement of both." ], "highlighted_evidence": [ "On Figure FIGREF1 one can see that the outcomes for every single rerun differ significantly. Namely, accuracy can change up to 5 percentage points, whereas BLEU can vary up to 8 points." ] } ], "annotation_id": [ "3abf4677370399c63723d2ec098c155176f8fbe7" ], "worker_id": [ "258ee4069f740c400c0049a2580945a1cc7f044c" ] } ] }
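The shifted-autoencoder extension quoted in the answer above re-encodes the softened output and keeps it close, in cosine distance, to the encoding of the input for both the original and the inverted style code. A minimal sketch of those two extra loss terms is given below; E and G_soft are placeholders for the encoder and the "soft" generator, and the whole function is the editor's illustration rather than the published implementation.

```python
import numpy as np

def cosine_distance(a, b, eps=1e-8):
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def sae_cosine_terms(E, G_soft, x, c, c_bar, lam_cos, lam_cos_inv):
    """Extra SAE loss terms: encodings of the softened outputs generated
    with the original code c and with the inverted code c_bar should both
    stay close to the encoding z = E(x) of the input sentence."""
    z = E(x)
    return (lam_cos * cosine_distance(E(G_soft(z, c)), z)
            + lam_cos_inv * cosine_distance(E(G_soft(z, c_bar)), z))
```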
{ "caption": [ "Figure 1: Test results of multiple runs for four different architectures retrained several times from scratch. Indepth description of the architectures can be found in Section 3.", "Figure 2: Overview of the self-reported results for sentiment transfer on Yelp! reviews. Results of (Romanov et al., 2018) are not displayed due to the absence of selfreported BLEU scores. Later in the paper we show that on different reruns BLEU and accuracy can vary from these self-reported single results.", "Figure 3: The generative model, where style is a structured code targeting sentence attributes to control. Blue dashed arrows denote the proposed independence constraint of latent representation and controlled attribute, see (Hu et al., 2017a) for the details.", "Figure 4: The generative model with dedicated discriminator introduced to ensure that semantic part of the latent representation does not have information on the style of the text.", "Figure 6: A combination of an additional discriminator used in Figure 4 with a shifted autoencoder shown in Figure 5", "Figure 5: The generative model with a dedicated loss added to control that semantic representation of the output, when processed by the encoder, is close to the semantic representation of the input.", "Figure 7: Overview of the self-reported results for sentiment transfer on Yelp! reviews alongside with the results for the baseline model (Hu et al., 2017a), architecture with additional discriminator, shifted autoencoder (SAE) with additional cosine losses, and a combination of these two architectures averaged after five re-trains alongside with architectures proposed by (Tian et al., 2018) after five consecutive re-trains. Results of (Romanov et al., 2018) are not displayed due to the absence of self-reported BLEU scores.", "Figure 9: Overview of the BLEU between output and human-written reformulations of Yelp! reviews. Architecture with additional discriminator, shifted autoencoder (SAE) with additional cosine losses, and a combination of these two architectures measured after five re-runs outperform the baseline by (Hu et al., 2017a) as well as other state of the art models. Results of (Romanov et al., 2018) are not displayed due to the absence of self-reported BLEU scores", "Figure 8: Manipulating the generated output in a way that boosts accuracy one can change BLEU between output and input. Moreover, such manipulation increases BLEU between output and human written reformulations. The picture shows behavior of SAE, but other architectures demonstrate similar behavior. The results are an average of four consecutive retrains of the same architecture." ], "file": [ "2-Figure1-1.png", "3-Figure2-1.png", "4-Figure3-1.png", "4-Figure4-1.png", "5-Figure6-1.png", "5-Figure5-1.png", "6-Figure7-1.png", "7-Figure9-1.png", "7-Figure8-1.png" ] }
1707.00110
Efficient Attention using a Fixed-Size Memory Representation
The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.
{ "section_name": [ "Introduction", "Sequence-to-Sequence Model with Attention", "Memory-Based Attention Model", "Model Interpretations", "Position Encodings (PE)", "Toy Copying Experiment", "Machine Translation", "Visualizing Attention", "Related Work", "Conclusion" ], "paragraphs": [ [ "Sequence-to-sequence models BIBREF0 , BIBREF1 have achieved state of the art results across a wide variety of tasks, including Neural Machine Translation (NMT) BIBREF2 , BIBREF3 , text summarization BIBREF4 , BIBREF5 , speech recognition BIBREF6 , BIBREF7 , image captioning BIBREF8 , and conversational modeling BIBREF9 , BIBREF10 .", "The most popular approaches are based on an encoder-decoder architecture consisting of two recurrent neural networks (RNNs) and an attention mechanism that aligns target to source tokens BIBREF2 , BIBREF11 . The typical attention mechanism used in these architectures computes a new attention context at each decoding step based on the current state of the decoder. Intuitively, this corresponds to looking at the source sequence after the output of every single target token.", "Inspired by how humans process sentences, we believe it may be unnecessary to look back at the entire original source sequence at each step. We thus propose an alternative attention mechanism (section \"Memory-Based Attention Model\" ) that leads to smaller computational time complexity. Our method predicts $K$ attention context vectors while reading the source, and learns to use a weighted average of these vectors at each step of decoding. Thus, we avoid looking back at the source sequence once it has been encoded. We show (section \"Experiments\" ) that this speeds up inference while performing on-par with the standard mechanism on both toy and real-world WMT translation datasets. We also show that our mechanism leads to larger speedups as sequences get longer. Finally, by visualizing the attention scores (section \"Visualizing Attention\" ), we verify that the proposed technique learns meaningful alignments, and that different attention context vectors specialize on different parts of the source." ], [ "Our models are based on an encoder-decoder architecture with attention mechanism BIBREF2 , BIBREF11 . An encoder function takes as input a sequence of source tokens $\\mathbf {x} = (x_1, ..., x_m)$ and produces a sequence of states $\\mathbf {s} = (s_1, ..., s_m)$ .The decoder is an RNN that predicts the probability of a target sequence $\\mathbf {y} = (y_1, ..., y_T \\mid \\mathbf {s})$ . The probability of each target token $y_i \\in \\lbrace 1, ... ,|V|\\rbrace $ is predicted based on the recurrent state in the decoder RNN, $h_i$ , the previous words, $y_{<i}$ , and a context vector $c_i$ . The context vector $c_i$ , also referred to as the attention vector, is calculated as a weighted average of the source states. ", "$$c_i & = \\sum _{j}{\\alpha _{ij} s_j} \\\\\n{\\alpha }_{i} & = \\text{softmax}(f_{att}(h_i, \\mathbf {s}))$$ (Eq. 3) ", "Here, $f_{att}(h_i, \\mathbf {s})$ is an attention function that calculates an unnormalized alignment score between the encoder state $s_j$ and the decoder state $h_i$ . Variants of $f_{att}$ used in BIBREF2 and BIBREF11 are: $\nf_{att}(h_i, s_j)=\n{\\left\\lbrace \\begin{array}{ll}\nv_a^T \\text{tanh}(W_a[h_i, s_j]),& \\emph {Bahdanau} \\\\\nh_i^TW_as_j & \\emph {Luong}\n\\end{array}\\right.}\n$ ", "where $W_a$ and $v_a$ are model parameters learned to predict alignment. 
Let $|S|$ and $|T|$ denote the lengths of the source and target sequences respectively, and $D$ denote the state size of the encoder and decoder RNN. Such content-based attention mechanisms result in inference times of $O(D^2|S||T|)$ , as each context vector depends on the current decoder state $h_i$ and all encoder states, and requires an $O(D^2)$ matrix multiplication.", "The decoder outputs a distribution over a vocabulary of fixed size $|V|$ : ", "$$P(y_i \\vert y_{<i}, \\mathbf {x}) = \\text{softmax}(W[s_i; c_i] + b)$$ (Eq. 5) ", " The model is trained end-to-end by minimizing the negative log likelihood of the target words using stochastic gradient descent." ], [ "Our proposed model is shown in Figure 1 . During encoding, we compute an attention matrix $C \\in \\mathbb {R}^{K \\times D}$ , where $K$ is the number of attention vectors and a hyperparameter of our method, and $D$ is the dimensionality of the top-most encoder state. This matrix is computed by predicting a score vector $\\alpha _t \\in \\mathbb {R}^K$ at each encoding time step $t$ . $C$ is then a linear combination of the encoder states, weighted by $\\alpha _t$ : ", "$$C_k & = \\sum _{t=0}^{|S|}{\\alpha _{tk} s_t} \\\\\n\\alpha _t & = \\text{softmax}(W_\\alpha s_t) ,$$ (Eq. 7) ", " where $W_{\\alpha }$ is a parameter matrix in $\\mathbb {R}^{K\\times D}$ .", "The computational time complexity for this operation is $O(KD|S|)$ . One can think of $C$ as a compact fixed-length memory that the decoder will perform attention over. In contrast, standard approaches use a variable-length set of encoder states for attention. At each decoding step, we similarly predict $K$ scores $\\beta \\in \\mathbb {R}^K$ . The final attention context $c$ is a linear combination of the rows in $C$ weighted by the scores. Intuitively, each decoder step predicts how important each of the $K$ attention vectors is. ", "$$c & = \\sum _{i=0}^{K}{\\beta _i C_i} \\\\\n\\beta & = \\text{softmax}(W_\\beta h)$$ (Eq. 8) ", " Here, $h$ is the current state of the decoder, and $W_\\beta $ is a learned parameter matrix. Note that we do not access the encoder states at each decoder step. We simply take a linear combination of the attention matrix $C$ pre-computed during encoding - a much cheaper operation that is independent of the length of the source sequence. The time complexity of this computation is $O(KD|T|)$ as multiplication with the $K$ attention matrices needs to happen at each decoding step.", "Summing $O(KD|S|)$ from encoding and $O(KD|T|)$ from decoding, we have a total linear computational complexity of $O(KD(|S| + |T|))$ . As $D$ is typically very large, 512 or 1024 units in most applications, we expect our model to be faster than the standard attention mechanism running in $O(D^2|S||T|)$ . For long sequences (as in summarization, where $|S|$ is large), we also expect our model to be faster than the cheaper dot-based attention mechanism, which needs $O(D|S||T|)$ computation time and requires the encoder and decoder state sizes to match.", "We also experimented with using a sigmoid function instead of the softmax to compute the encoder and decoder attention scores, resulting in 4 possible combinations. We call this choice the scoring function. A softmax scoring function calculates normalized scores, while the sigmoid scoring function results in unnormalized scores that can be understood as gates." ], [ "Our memory-based attention model can be understood intuitively in two ways. 
We can interpret it as \"predicting\" the set of attention contexts produced by a standard attention mechanism during encoding. To see this, assume we set $K \\approx |T|$ . In this case, we predict all $|T|$ attention contexts during the encoding stage and learn to choose the right one during decoding. This is cheaper than computing contexts one-by-one based on the decoder and encoder content. In fact, we could enforce this objective by first training a regular attention model and adding a regularization term to force the memory matrix $C$ to be close to the $T\\times D$ vectors computed by the standard attention. We leave it to future work to explore such an objective.", "Alternatively, we can interpret our mechanism as first predicting a compact $K \\times D$ memory matrix, a representation of the source sequence, and then performing location-based attention on the memory by picking which row of the matrix to attend to. Standard location-based attention mechanism, by contrast, predicts a location in the source sequence to focus on BIBREF11 , BIBREF8 ." ], [ "In the above formulation, the predictions of attention contexts are symmetric. That is, $C_i$ is not forced to be different from $C_{j\\ne i}$ . While we would hope for the model to learn to generate distinct attention contexts, we now present an extension that pushes the model into this direction. We add position encodings to the score matrix that forces the first few context vector $C_1, C_2, ...$ to focus on the beginning of the sequence and the last few vectors $...,C_{K-1}, C_K$ to focus on the end (thereby encouraging in-between vectors to focus on the middle).", "Explicitly, we multiply the score vector $\\alpha $ with position encodings $l_s\\in \\mathbb {R}^{K}$ : ", "$$C^{PE} & = \\sum _{s=0}^{|S|}{\\alpha ^{PE} h_s} \\\\\n\\alpha ^{PE}_s & = \\text{softmax}(W_\\alpha h_s \\circ l_s)$$ (Eq. 11) ", "To obtain $l_s$ we first calculate a constant matrix $L$ where we define each element as ", "$$L_{ks} & = (1-k/K)(1-s/\\mathcal {S})+\\frac{k}{K}\\frac{s}{\\mathcal {S}},$$ (Eq. 12) ", " adapting a formula from BIBREF13 . Here, $k\\in \\lbrace 1,2,...,K\\rbrace $ is the context vector index and $\\mathcal {S}$ is the maximum sequence length across all source sequences. The manifold is shown graphically in Figure 2 . We can see that earlier encoder states are upweighted in the first context vectors, and later states are upweighted in later vectors. The symmetry of the manifold and its stationary point having value 0.5 both follow from Eq. 12 . The elements of the matrix that fall beyond the sequence lengths are then masked out and the remaining elements are renormalized across the timestep dimension. This results in the jagged array of position encodings $\\lbrace l_{ks}\\rbrace $ ." ], [ "Due to the reduction of computational time complexity we expect our method to yield performance gains especially for longer sequences and tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 . We generated 4 training datasets of 100,000 examples and a validation dataset of 1,000 examples. The vocabulary size was 20. 
For each dataset, the sequences had lengths randomly chosen between 0 and $L$ , for $L\\in \\lbrace 10, 50, 100, 200\\rbrace $ unique to each dataset.", "All models are implemented using TensorFlow based on the seq2seq implementation of BIBREF15 and trained on a single machine with an Nvidia K40m GPU. We use a 2-layer, 256-unit bidirectional LSTM BIBREF16 encoder, a 2-layer 256-unit LSTM decoder, and 256-dimensional embeddings. For the attention baseline, we use the standard parametrized attention BIBREF2 . Dropout of 0.2 (0.8 keep probability) is applied to the input of each cell, and we optimize using Adam BIBREF17 at a learning rate of 0.0001 and batch size 128. We train for at most 200,000 steps (see Figure 3 for sample learning curves). BLEU scores are calculated on tokenized data using the multi-bleu.perl script in Moses. We decode using beam search with a beam size of 10 BIBREF18 .", "Table 1 shows the BLEU scores of our model on different sequence lengths while varying $K$ . This is a study of the trade-off between computational time and representational power. A large $K$ allows us to compute complex source representations, while a $K$ of 1 limits the source representation to a single vector. We can see that performance consistently increases with $K$ up to a point that depends on the data length, with longer sequences requiring more complex representations. The results with and without position encodings are almost identical on the toy data. Our technique learns to fit the data as well as the standard attention mechanism despite having less representational power. Both beat the non-attention baseline by a significant margin.", "That we are able to represent the source sequence with a fixed size matrix with fewer than $|S|$ rows suggests that traditional attention mechanisms may be representing the source with redundancies and wasting computational resources. This makes intuitive sense for the toy task, which should require a relatively simple representation.", "The last column shows that our technique significantly speeds up the inference process. The gap in inference speed increases as sequences become longer. We measured inference time on the full validation set of 1,000 examples, not including data loading or model construction times.", "Figure 3 shows the learning curves for sequence length 200. We see that $K=1$ is unable to fit the data distribution, while $K\\in \\lbrace 32, 64\\rbrace $ fits the data almost as quickly as the attention-based model. Figure 3 shows the effect of varying the encoder and decoder scoring functions between softmax and sigmoid. All combinations manage to fit the data, but some converge faster than others. In section \"Visualizing Attention\" we show that distinct alignments are learned by different function combinations." ], [ "Next, we explore whether the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finnish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task. Note that our scores may not be directly comparable to other work that performs its own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm BIBREF19 . 
We use newstest2015 as a validation set, and report BLEU on newstest2016.", "We use a similar setup to the Toy Copy task, but use 512 RNN and embedding units, train using 8 distributed workers with 1 GPU each, and train for at most 1M steps. We save checkpoints every 30 minutes during training, and choose the best based on the validation BLEU score.", "Table 2 compares our approach with and without position encodings, and with varying values for hyperparameter $K$ , to baseline models with regular attention mechanism. Learning curves are shown in Figure 4 . We see that our memory attention model with sufficiently high $K$ performs on-par with, or slightly better, than the attention-based baseline model despite its simpler nature. Across the board, models with $K=64$ performed better than corresponding models with $K=32$ , suggesting that using a larger number of attention vectors can capture a richer understanding of source sequences. Position encodings also seem to consistently improve model performance.", "Table 3 shows that our model results in faster decoding time even on a complex dataset with a large vocabulary of 16k. We measured decoding time over the full validation set, not including time used for model setup and data loading, averaged across 10 runs. The average sequence length for examples in this data was 35, and we expect more significant speedups for tasks with longer sequences, as suggested by our experiments on toy data. Note that in our NMT examples/experiments, $K\\approx T$ , but we obtain computational savings from the fact that $K \\ll D$ . We may be able to set $K \\ll T$ , as in toy copying, and still get very good performance in other tasks. For instance, in summarization the source is complex but the representation of the source required to perform the task is \"simple\" (i.e. all that is needed to generate the abstract).", "Figure 5 shows the effect of using sigmoid and softmax function in the encoders and decoders. We found that softmax/softmax consistently performs badly, while all other combinations perform about equally well. We report results for the best combination only (as chosen on the validation set), but we found this choice to only make a minor difference." ], [ "A useful property of the standard attention mechanism is that it produces meaningful alignment between source and target sequences. Often, the attention mechanism learns to progressively focus on the next source token as it decodes the target. These visualizations can be an important tool in debugging and evaluating seq2seq models and are often used for unknown token replacement.", "This raises the question of whether or not our proposed memory attention mechanism also learns to generate meaningful alignments. Due to limiting the number of attention contexts to a number that is generally less than the sequence length, it is not immediately obvious what each context would learn to focus on. Our hope was that the model would learn to focus on multiple alignments at the same time, within the same attention vector. For example, if the source sequence is of length 40 and we have $K=10$ attention contexts, we would hope that $C_1$ roughly focuses on tokens 1 to 4, $C_2$ on tokens 5 to 8, and so on. Figures 6 and 7 show that this is indeed the case. To generate this visualization we multiply the attention scores $\\alpha $ and $\\beta $ from the encoder and decoder. 
Figure 8 shows a sample translation task visualization.", "Figure 6 suggests that our model learns distinct ways to use its memory depending on the encoder and decoder functions. Interestingly, using softmax normalization results in attention maps typical of those derived from using standard attention, i.e. a relatively linear mapping between source and target tokens. Meanwhile, using sigmoid gating results in what seems to be a distributed representation of the source sequences across encoder time steps, with multiple contiguous attention contexts being accessed at each decoding step." ], [ "Our contributions build on previous work in making seq2seq models more computationally efficient. BIBREF11 introduce various attention mechanisms that are computationally simpler and perform as well or better than the original one presented in BIBREF2 . However, these typically still require $O(D^2)$ computation complexity, or lack the flexibility to look at the full source sequence. Efficient location-based attention BIBREF8 has also been explored in the image recognition domain.", " BIBREF3 presents several enhancements to the standard seq2seq architecture that allow more efficient computation on GPUs, such as only attending on the bottom layer. BIBREF20 propose a linear time architecture based on stacked convolutional neural networks. BIBREF21 also propose the use of convolutional encoders to speed up NMT. BIBREF22 propose a linear attention mechanism based on covariance matrices applied to information retrieval. BIBREF23 enable online linear time attention calculation by enforcing that the alignment between input and output sequence elements be monotonic. Previously, monotonic attention was proposed for morphological inflection generation by BIBREF24 ." ], [ "In this work, we propose a novel memory-based attention mechanism that results in a linear computational time of $O(KD(|S| + |T|))$ during decoding in seq2seq models. Through a series of experiments, we demonstrate that our technique leads to consistent inference speedups as sequences get longer, and can fit complex data distributions such as those found in Neural Machine Translation. We show that our attention mechanism learns meaningful alignments despite being constrained to a fixed representation after encoding. We encourage future work that explores the optimal values of $K$ for various language tasks and examines whether or not it is possible to predict $K$ based on the task at hand. We also encourage evaluating our models on other tasks that must deal with long sequences but have compact representations, such as summarization and question-answering, and further exploration of their effect on memory and training speed." ] ] }
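The fixed-size memory attention described in the paper above reduces to a few matrix products: Eq. 7 turns the encoder states into a K x D memory matrix C, and Eq. 8 lets each decoding step mix the rows of C without touching the encoder states again. The NumPy sketch below is an editor's illustration of those equations, not the authors' TensorFlow code; the position-encoding helper follows Eq. 12 with the masking and renormalization step omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode_memory(S, W_alpha):
    """Eq. 7: S holds the (T_src, D) encoder states, W_alpha is (K, D);
    returns the K x D memory matrix C."""
    alpha = softmax(S @ W_alpha.T, axis=-1)   # (T_src, K) scores per timestep
    return alpha.T @ S                        # (K, D)

def decode_context(h, C, W_beta):
    """Eq. 8: h is the (D,) decoder state; only the K rows of C are mixed."""
    beta = softmax(W_beta @ h)                # (K,)
    return beta @ C                           # (D,) attention context

def position_encodings(K, S_max):
    """Eq. 12: L[k, s] = (1 - k/K)(1 - s/S) + (k/K)(s/S); masking and
    renormalization over timesteps are omitted in this sketch."""
    k = np.arange(1, K + 1)[:, None] / K
    s = np.arange(1, S_max + 1)[None, :] / S_max
    return (1 - k) * (1 - s) + k * s

# Tiny smoke test with random states: encoding costs O(K*D*|S|) once,
# and each decoding step costs O(K*D), independent of the source length.
rng = np.random.default_rng(0)
D, K, T_src = 8, 4, 11
C = encode_memory(rng.normal(size=(T_src, D)), rng.normal(size=(K, D)))
c = decode_context(rng.normal(size=D), C, rng.normal(size=(K, D)))
```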
{ "question": [ "Which baseline methods are used?", "How much is the BLEU score?", "Which datasets are used in experiments?" ], "question_id": [ "2d3bf170c1647c5a95abae50ee3ef3b404230ce4", "6e8c587b6562fafb43a7823637b84cd01487059a", "ab9453fa2b927c97b60b06aeda944ac5c1bfef1e" ], "nlp_background": [ "five", "five", "five" ], "topic_background": [ "unfamiliar", "unfamiliar", "unfamiliar" ], "paper_read": [ "no", "no", "no" ], "search_query": [ "efficient", "efficient", "efficient" ], "question_writer": [ "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66", "50d8b4a941c26b89482c94ab324b5a274f9ced66" ], "answers": [ { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "standard parametrized attention and a non-attention baseline", "evidence": [ "Table 1 shows the BLEU scores of our model on different sequence lengths while varying $K$ . This is a study of the trade-off between computational time and representational power. A large $K$ allows us to compute complex source representations, while a $K$ of 1 limits the source representation to a single vector. We can see that performance consistently increases with $K$ up to a point that depends on the data length, with longer sequences requiring more complex representations. The results with and without position encodings are almost identical on the toy data. Our technique learns to fit the data as well as the standard attention mechanism despite having less representational power. Both beat the non-attention baseline by a significant margin.", "All models are implemented using TensorFlow based on the seq2seq implementation of BIBREF15 and trained on a single machine with a Nvidia K40m GPU. We use a 2-layer 256-unit, a bidirectional LSTM BIBREF16 encoder, a 2-layer 256-unit LSTM decoder, and 256-dimensional embeddings. For the attention baseline, we use the standard parametrized attention BIBREF2 . Dropout of 0.2 (0.8 keep probability) is applied to the input of each cell and we optimize using Adam BIBREF17 at a learning rate of 0.0001 and batch size 128. We train for at most 200,000 steps (see Figure 3 for sample learning curves). BLEU scores are calculated on tokenized data using the multi-bleu.perl script in Moses. We decode using beam search with a beam" ], "highlighted_evidence": [ "Both beat the non-attention baseline by a significant margin.", "For the attention baseline, we use the standard parametrized attention BIBREF2 ." ] } ], "annotation_id": [ "0e7135bdd269d4e83630b27b6ae64fbe62e9e5d4" ], "worker_id": [ "ca2a4695129d0180768a955fb5910d639f79aa34" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Ranges from 44.22 to 100.00 depending on K and the sequence length.", "evidence": [ "FLOAT SELECTED: Table 1: BLEU scores and computation times with varyingK and sequence length compared to baseline models with and without attention." ], "highlighted_evidence": [ "FLOAT SELECTED: Table 1: BLEU scores and computation times with varyingK and sequence length compared to baseline models with and without attention." 
] } ], "annotation_id": [ "3dc877f4b4aaad7a07dbfb97b365bf847acd1161" ], "worker_id": [ "c1fbdd7a261021041f75fbe00a55b4c386ebbbb4" ] }, { "answer": [ { "unanswerable": false, "extractive_spans": [], "yes_no": null, "free_form_answer": "Sequence Copy Task and WMT'17", "evidence": [ "Due to the reduction of computational time complexity we expect our method to yield performance gains especially for longer sequences and tasks where the source can be compactly represented in a fixed-size memory matrix. To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 . We generated 4 training datasets of 100,000 examples and a validation dataset of 1,000 examples. The vocabulary size was 20. For each dataset, the sequences had lengths randomly chosen between 0 to $L$ , for $L\\in \\lbrace 10, 50, 100, 200\\rbrace $ unique to each dataset.", "Next, we explore if the memory-based attention mechanism is able to fit complex real-world datasets. For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples). We used the newly available pre-processed datasets for the WMT'17 task. Note that our scores may not be directly comparable to other work that performs their own data pre-processing. We learn shared vocabularies of 16,000 subword units using the BPE algorithm BIBREF19 . We use newstest2015 as a validation set, and report BLEU on newstest2016." ], "highlighted_evidence": [ "To investigate the trade-off between speed and performance, we compare our technique to standard models with and without attention on a Sequence Copy Task of varying length like in BIBREF14 .", "For this purpose we use 4 large machine translation datasets of WMT'17 on the following language pairs: English-Czech (en-cs, 52M examples), English-German (en-de, 5.9M examples), English-Finish (en-fi, 2.6M examples), and English-Turkish (en-tr, 207,373 examples)." ] } ], "annotation_id": [ "7b010301cc5c61449c64ae40c8e41551fe35d67c" ], "worker_id": [ "ca2a4695129d0180768a955fb5910d639f79aa34" ] } ] }
{ "caption": [ "Figure 1: Memory Attention model architecture. K attention vectors are predicted during encoding, and a linear combination is chosen during decoding. In our example,K=3.", "Figure 2: Surface for the position encodings.", "Table 1: BLEU scores and computation times with varyingK and sequence length compared to baseline models with and without attention.", "Figure 3: Training Curves for the Toy Copy task", "Figure 4: Comparing training curves for en-fi and en-tr with sigmoid encoder scoring and softmax decoder scoring and position encoding. Note that en-tr curves converged very quickly.", "Table 2: BLEU scores on WMT’17 translation datasets from the memory attention models and regular attention baselines. We picked the best out of the four scoring function combinations on the validation set. Note that en-tr does not have an official test set. Best test scores on each dataset are highlighted.", "Table 3: Decoding time, averaged across 10 runs, for the en-de validation set (2169 examples) with average sequence length of 35. Results are similar for both PE and non-PE models.", "Figure 5: Comparing training curves for en-fi for different encoder/decoder scoring functions for our models atK=64.", "Figure 6: Attention scores at each step of decoding for on a sample from the sequence length 100 toy copy dataset. Individual attention vectors are highlighted in blue. (y-axis: source tokens; x-axis: target tokens)", "Figure 7: Attention scores at each step of decoding for K = 4 on a sample with sequence length 11. The subfigure on the left color codes each individual attention vector. (y-axis: source; x-axis: target)", "Figure 8: Attention scores at each step of decoding for en-de WMT translation task using model with sigmoid scoring functions and K=32. The left subfigure displays each individual attention vector separately while the right subfigure displays the full combined attention. (y-axis: source; x-axis: target)" ], "file": [ "3-Figure1-1.png", "4-Figure2-1.png", "4-Table1-1.png", "5-Figure3-1.png", "6-Figure4-1.png", "6-Table2-1.png", "6-Table3-1.png", "7-Figure5-1.png", "8-Figure6-1.png", "8-Figure7-1.png", "8-Figure8-1.png" ] }