Column        Type    Min – Max (value or string length)
Unnamed: 0    int64   0 – 886
question      string  19 – 151
answer        string  1 – 1.08k
abstract      string  279 – 2.02k
introduction  string  52 – 9.04k
845
What dataset they use for evaluation?
The same 2K set from Gigaword used in BIBREF7
Currency trading (Forex) is the largest world market in terms of volume. We analyze trading and tweeting about the EUR-USD currency pair over a period of three years. First, a large number of tweets were manually labeled, and a Twitter stance classification model is constructed. The model then classifies all the tweets by the trading stance signal: buy, hold, or sell (EUR vs. USD). The Twitter stance is compared to the actual currency rates by applying the event study methodology, well-known in financial economics. It turns out that there are large differences in Twitter stance distribution and potential trading returns between the four groups of Twitter users: trading robots, spammers, trading companies, and individual traders. Additionally, we observe attempts of reputation manipulation by post festum removal of tweets with poor predictions, and deleting/reposting of identical tweets to increase the visibility without tainting one's Twitter timeline.
847
Which regions of the United States do they consider?
all regions except those that are colored black
Information distribution by electronic messages is a privileged means of transmission for many businesses and individuals, often under the form of plain-text tables. As their number grows, it becomes necessary to use an algorithm to extract text and numbers instead of a human. Usual methods are focused on regular expressions or on a strict structure in the data, but are not efficient when we have many variations, fuzzy structure or implicit labels. In this paper we introduce SC2T, a totally self-supervised model for constructing vector representations of tokens in semi-structured messages by using characters and context levels that address these issues. It can then be used for an unsupervised labeling of tokens, or be the basis for a semi-supervised information extraction system.
Today, most business-related information is transmitted in electronic form, such as emails. Converting these messages into an easily analyzable representation could therefore open numerous business opportunities, as many of them are not fully exploited because of the difficulty of building bespoke parsing methods. In particular, a great number of these transmissions are semi-structured text, which does not necessarily follow classical English grammar. As seen in Fig. 1, they can take the form of tables containing diverse elements, words and numbers, hereafter referred to as tokens. These tables are often implicitly defined, meaning there are no special tags marking what is or is not part of the table, or even separating cells. In these cases, the structure comes from space or tab alignment and from the relative order of the tokens. The data are often unlabeled, which means that the content must be read with domain-based knowledge. Thus, automatic extraction of structured information is a major challenge because token candidates come in a variety of forms within a fuzzy context. A high level of supervision is hard to obtain, as manual labeling requires time that is hardly affordable when receiving thousands of such emails a day, and even more so as databases can become irrelevant over time. That is why training a generalizable model to extract these data should not rely on labeled inputs, but rather on the content itself - a paradigm called self-supervised learning. Many approaches already exist in Natural Language Processing, such as Part-of-Speech (POS) tagging or Named Entity Recognition (NER), but they do not take advantage of the semi-structured data framework. Conversely, some information extraction algorithms have been applied to tables, but they require a great number of manually defined rules and exceptions. Our model aims to reconcile both approaches for an efficient and totally self-supervised take on information extraction in the particular context of semi-structured data. In this paper, we present a neural architecture for token embedding in plain-text tables, which provides a useful lower-dimensional representation for tasks such as unsupervised or semi-supervised clustering. Intuitively, tokens with a similar meaning should be close in the feature space to ease any further information extraction. Our model aims to combine the best of the context and the character composition of each token, which is why the neural architecture is designed to learn both context- and character-level representations simultaneously. Finally, we can take advantage of the distances between tokens in the feature space to create proper tables from fuzzy input data.
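As a rough illustration of the character-plus-context idea described above, the following PyTorch sketch encodes each token from its characters and fuses that with the encodings of its neighbouring tokens. It is not the authors' SC2T architecture; all layer types, sizes, and the fusion scheme are illustrative assumptions.

```python
# Minimal sketch (not the authors' SC2T implementation): a token encoder that
# combines a character-level view with a context-window view. Sizes are illustrative.
import torch
import torch.nn as nn

class CharContextEncoder(nn.Module):
    def __init__(self, n_chars=128, char_dim=16, token_dim=32, window=2):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.char_cnn = nn.Conv1d(char_dim, token_dim, kernel_size=3, padding=1)
        self.context_proj = nn.Linear(token_dim * 2 * window, token_dim)
        self.window = window

    def encode_chars(self, char_ids):                 # (batch, n_tokens, n_chars)
        b, t, c = char_ids.shape
        x = self.char_emb(char_ids.view(b * t, c))    # (b*t, n_chars, char_dim)
        x = self.char_cnn(x.transpose(1, 2))          # (b*t, token_dim, n_chars)
        x = x.max(dim=2).values                       # max-pool over characters
        return x.view(b, t, -1)                       # (batch, n_tokens, token_dim)

    def forward(self, char_ids):
        tok = self.encode_chars(char_ids)
        # Context view: concatenate the character encodings of the `window`
        # neighbours on each side (zero-padded at the table boundaries).
        pad = torch.zeros_like(tok[:, :1]).repeat(1, self.window, 1)
        padded = torch.cat([pad, tok, pad], dim=1)
        neigh = [padded[:, i:i + tok.size(1)] for i in range(2 * self.window + 1)
                 if i != self.window]
        ctx = self.context_proj(torch.cat(neigh, dim=-1))
        return tok + ctx                               # fused token representation

enc = CharContextEncoder()
dummy = torch.randint(1, 128, (1, 6, 12))              # 1 message, 6 tokens, 12 chars each
print(enc(dummy).shape)                                 # torch.Size([1, 6, 32])
```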
849
How is performance measured?
they use ROC curves and cross-validation
Recent advances in modern Natural Language Processing (NLP) research have been dominated by the combination of Transfer Learning methods with large-scale language models, in particular based on the Transformer architecture. With them came a paradigm shift in NLP with the starting point for training a model on a downstream task moving from a blank specific model to a general-purpose pretrained architecture. Still, creating these general-purpose models remains an expensive and time-consuming process restricting the use of these methods to a small sub-set of the wider NLP community. In this paper, we present HuggingFace's Transformers library, a library for state-of-the-art NLP, making these developments available to the community by gathering state-of-the-art general-purpose pretrained models under a unified API together with an ecosystem of libraries, examples, tutorials and scripts targeting many downstream NLP tasks. HuggingFace's Transformers library features carefully crafted model implementations and high-performance pretrained weights for two main deep learning frameworks, PyTorch and TensorFlow, while supporting all the necessary tools to analyze, evaluate and use these models in downstream tasks such as text/token classification, question answering and language generation among others. The library has gained significant organic traction and adoption among both the researcher and practitioner communities. We are committed at HuggingFace to pursue the efforts to develop this toolkit with the ambition of creating the standard library for building NLP systems. HuggingFace's Transformers library is available at \url{https://github.com/huggingface/transformers}.
In the past 18 months, advances on many Natural Language Processing (NLP) tasks have been dominated by deep learning models and, more specifically, the use of Transfer Learning methods BIBREF0 in which a deep neural network language model is pretrained on a web-scale unlabelled text dataset with a general-purpose training objective before being fine-tuned on various downstream tasks. Following noticeable improvements using Long Short-Term Memory (LSTM) architectures BIBREF1, BIBREF2, a series of works combining Transfer Learning methods with large-scale Transformer architectures BIBREF3 has repeatedly advanced the state-of-the-art on NLP tasks ranging from text classification BIBREF4, language understanding BIBREF5, BIBREF6, BIBREF7, machine translation BIBREF8, and zero-shot language generation BIBREF9 up to co-reference resolution BIBREF10 and commonsense inference BIBREF11. While this approach has shown impressive improvements on benchmarks and evaluation metrics, the exponential increase in the size of the pretraining datasets as well as the model sizes BIBREF5, BIBREF12 has made it both difficult and costly for researchers and practitioners with limited computational resources to benefit from these models. For instance, RoBERTa BIBREF5 was trained on 160 GB of text using 1,024 32GB V100 GPUs. On Amazon Web Services cloud computing (AWS), such a pretraining would cost approximately 100K USD. Contrary to this trend, the booming research in Machine Learning in general and Natural Language Processing in particular is arguably explained significantly by a strong focus on knowledge sharing and large-scale community efforts resulting in the development of standard libraries, an increased availability of published research code and strong incentives to share state-of-the-art pretrained models. The combination of these factors has led researchers to reproduce previous results more easily, investigate current approaches and test hypotheses without having to redevelop them first, and focus their efforts on formulating and testing new hypotheses. To bring Transfer Learning methods and large-scale pretrained Transformers back into the realm of these best practices, the authors (and the community of contributors) have developed Transformers, a library for state-of-the-art Natural Language Processing with Transfer Learning models. Transformers addresses several key challenges:
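A minimal usage sketch of the library described above; the checkpoint name and the tasks shown are interchangeable examples, not prescriptions from the paper.

```python
# Minimal usage sketch of the Transformers library (model names are illustrative;
# any pretrained checkpoint from the model hub can be substituted).
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

# High-level API: a ready-made pipeline built on a pretrained model.
classifier = pipeline("sentiment-analysis")
print(classifier("Transfer Learning makes strong NLP baselines cheap to obtain."))

# Lower-level API: load the tokenizer and model explicitly, e.g. for fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
inputs = tokenizer("An example sentence.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)   # (1, 2) class logits
```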
854
What is novel in author's approach?
They use self-play learning , optimize the model for specific metrics, train separate models per user, use model and response classification predictors, and filter the dataset to obtain higher quality training data.
Fine-tuning language models, such as BERT, on domain specific corpora has proven to be valuable in domains like scientific papers and biomedical text. In this paper, we show that fine-tuning BERT on legal documents similarly provides valuable improvements on NLP tasks in the legal domain. Demonstrating this outcome is significant for analyzing commercial agreements, because obtaining large legal corpora is challenging due to their confidential nature. As such, we show that having access to large legal corpora is a competitive advantage for commercial applications, and academic research on analyzing contracts.
Businesses rely on contracts to capture critical obligations with other parties, such as: scope of work, amounts owed, and cancellation policies. Various efforts have gone into automatically extracting and classifying these terms. These efforts have usually been modeled as classification, entity, and relation extraction tasks. In this paper we focus on classification, but in our application we have found that our findings apply equally, and sometimes more profoundly, to other tasks. Recently, numerous studies have shown the value of fine-tuning language models such as ELMo BIBREF2 and BERT BIBREF3 to achieve state-of-the-art results BIBREF4 on domain specific tasks BIBREF5, BIBREF6. In this paper we investigate and quantify the impact of utilizing a large domain-specific corpus of legal agreements to improve the accuracy of classification models by fine-tuning BERT. Specifically, we assess: (i) the performance of a simple model that only uses the pre-trained BERT language model, (ii) the impact of further fine-tuning BERT, and (iii) how this impact changes as we train on larger corpora. Ultimately, our investigations show marginal, but valuable, improvements that increase as we grow the size of the legal corpus used to fine-tune BERT, allowing us to confidently claim that not only is this approach valuable for increasing accuracy, but that commercial enterprises seeking to create these models will have an edge if they can amass a corpus of legal documents.
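A hedged sketch of the kind of fine-tuning setup the paper evaluates: a pretrained BERT encoder with a classification head trained on (clause text, label) pairs. The toy clauses, labels, and hyperparameters below are placeholders, not the paper's data or settings.

```python
# Illustrative fine-tuning loop: BERT + classification head on toy contract clauses.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

texts = ["Either party may terminate this agreement with 30 days notice.",
         "The total fees payable shall not exceed $10,000."]
labels = torch.tensor([0, 1])   # e.g. 0 = termination clause, 1 = payment clause (made up)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    loss = model(**batch, labels=labels).loss   # cross-entropy over clause labels
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```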
855
How large is the Dialog State Tracking Dataset?
1,618 training dialogs, 500 validation dialogs, and 1,117 test dialogs
This is the application document for the 2019 Amazon Alexa competition. We give an overall vision of our conversational experience, as well as a sample conversation that we would like our dialog system to achieve by the end of the competition. We believe personalization, knowledge, and self-play are important components towards better chatbots. These are further highlighted by our detailed system architecture proposal and novelty section. Finally, we describe how we would ensure an engaging experience, how this research would impact the field, and related work.
Prompt: What is your team’s vision for your Socialbot? How do you want your customers to feel at the end of an interaction with your socialbot? How would your team measure success in competition? Our vision is made up of the following main points: 1. A natural, engaging, and knowledge-powered conversational experience. Made possible by a socialbot that can handle all kinds of topics and topic switching more naturally than current Alexa bots. Our goal is not necessarily for the user to feel like they are talking to a human. 2. More natural topic handling and topic switching. Incorporating knowledge into neural models BIBREF0 and using the Amazon topical chat dataset can help improve current socialbots in this aspect. 3. Building a deeper, more personalized connection with the user. We believe that offering a personalized experience is equally as important as being able to talk about a wide range of topics BIBREF1. 4. Consistency. Consistency is another important aspect of conversations which we want to take into account through our user models. 5. Diversity and interestingness. The socialbot should give diverse and interesting responses, and the user should never feel like it is merely repeating what it has said earlier. At the end of an interaction customers should feel like they just had a fun conversation, maybe learned something new, and are thrilled to talk to the bot again. Throughout the dialog, customers should feel like the socialbot is interested in them and their topics, and can offer valuable insight and opinions. It is also important for it to suggest relevant topics in an engaging way. Users should never feel like the bot is not interested or can’t continue a conversation. This is a reason behind classifying and calculating our metrics for each user input, to get an idea of user engagement in the current conversation. Our main measures for success are: - User feedback. - Comparison to other dialog systems in A/B tests. - Automatic metrics. We would measure success partly by looking at the user feedback. We expect our socialbot’s ratings to constantly increase, and verbal feedback to get more positive throughout the competition. We plan to classify verbal feedback with a simple sentiment classifier to quantitatively see the rate of improvement. Working back from the customer and constantly improving the conversational experience based on feedback is important to us. Success would also be measured by comparing our system to previous socialbots or other dialog systems in A/B tests with crowdsourced evaluators. Our goal is to have long and high-quality conversations, but the longevity shouldn’t come from awkwardly long, specific, and forced replies, as is the case with some of the current socialbots. While generally, a longer conversation is better, it is not the only metric that we wish to consider. Besides user ratings we also have a plethora of automatic metrics that we want to improve on, like metrics measuring topic depth and breadth BIBREF2, entropy metrics measuring diversity, or embedding metrics measuring coherence BIBREF3. Different metrics measure different aspects of responses, thus it is important to not solely look at metrics individually.
856
What dataset is used for train/test of this method?
Training datasets: TTS System dataset and embedding selection dataset. Evaluation datasets: Common Prosody Errors dataset and LFR dataset.
Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging success recently obtained in chit-chat dialog may not carry over to goal-oriented settings. This paper proposes a testbed to break down the strengths and shortcomings of end-to-end dialog systems in goal-oriented applications. Set in the context of restaurant reservation, our tasks require manipulating sentences and symbols, so as to properly conduct conversations, issue API calls and use the outputs of such calls. We show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a). We show similar result patterns on data extracted from an online concierge service.
null
857
How much is the gap between using the proposed objective and using only cross-entropy objective?
The mixed objective improves EM by 2.5% and F1 by 2.2%
Recent advances in Text-to-Speech (TTS) have improved quality and naturalness to near-human capabilities when considering isolated sentences. But something which is still lacking in order to achieve human-like communication is the dynamic variations and adaptability of human speech. This work attempts to solve the problem of achieving a more dynamic and natural intonation in TTS systems, particularly for stylistic speech such as the newscaster speaking style. We propose a novel embedding selection approach which exploits linguistic information, leveraging the speech variability present in the training dataset. We analyze the contribution of both semantic and syntactic features. Our results show that the approach improves the prosody and naturalness for complex utterances as well as in Long Form Reading (LFR).
Corresponding author email: tshubhi@amazon.com. Paper submitted to IEEE ICASSP 2020. Recent advances in TTS have improved the achievable synthetic speech naturalness to near human-like capabilities BIBREF0, BIBREF1, BIBREF2, BIBREF3. This means that for simple sentences, or for situations in which we can correctly predict the most appropriate prosodic representation, TTS systems are providing us with speech practically indistinguishable from that of humans. One aspect that most systems are still lacking is the natural variability of human speech, which is being observed as one of the reasons why the cognitive load of synthetic speech is higher than that of humans BIBREF4. This is something that variational models such as those based on Variational Auto-Encoding (VAE) BIBREF3, BIBREF5 attempt to solve by exploiting the sampling capabilities of the acoustic embedding space at inference time. Despite the advantages that VAE-based inference brings, it also suffers from the limitation that to synthesize a sample, one has to select an appropriate acoustic embedding for it, which can be challenging. A possible solution to this is to remove the selection process and consistently use a centroid to represent speech. This provides reliable acoustic representations but it suffers again from the monotonicity problem of conventional TTS. Another approach is to simply do a random sampling of the acoustic space. This would certainly solve the monotonicity problem if the acoustic embedding were varied enough. It can, however, introduce erratic prosodic representations of longer texts, which can prove to be worse than being monotonous. Finally, one can consider text-based selection or prediction, as done in this research. In this work, we present a novel approach for informed embedding selection using linguistic features. The tight relationship between syntactic constituent structure and prosody is well known BIBREF6, BIBREF7. In the traditional Natural Language Processing (NLP) pipeline, constituency parsing produces full syntactic trees. More recent approaches based on Contextual Word Embedding (CWE) suggest that CWE are largely able to implicitly represent the classic NLP pipeline BIBREF8, while still retaining the ability to model lexical semantics BIBREF9. Thus, in this work we explore how TTS systems can enhance the quality of speech synthesis by using such linguistic features to guide the prosodic contour of generated speech. Similar relevant recent work exploring the advantages of exploiting syntactic information for TTS can be seen in BIBREF10, BIBREF11. While those studies, without any explicit acoustic pairing to the linguistic information, inject a number of curated features concatenated to the phonetic sequence as a way of informing the TTS system, the present study makes use of the linguistic information to drive the acoustic embedding selection rather than using it as additional model features. An exploration of how to use linguistics as a way of predicting adequate acoustic embeddings can be seen in BIBREF12, where the authors explore the path of predicting an adequate embedding by informing the system with a set of linguistic and semantic information.
The main difference of the present work is that in our case, rather than predicting a point in a high-dimensional space by making use of sparse input information (which is a challenging task and potentially vulnerable to training-domain dependencies), we use the linguistic information to predict the most similar embedding in our training set, reducing the complexity of the task significantly. The main contributions of this work are: i) we propose a novel approach of embedding selection in the acoustic space by using linguistic features; ii) we demonstrate that including syntactic information-driven acoustic embedding selection improves the overall speech quality, including its prosody; iii) we compare the improvements achieved by exploiting syntactic information in contrast with those brought by CWE; iv) we demonstrate that the approach improves the TTS quality in LFR experience as well.
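A minimal sketch of the selection idea described above: rather than predicting a point in the acoustic embedding space, pick the training utterance whose linguistic features are most similar to the new text and reuse its stored embedding. The feature and embedding matrices here are random stand-ins for the real linguistic features and paired VAE embeddings.

```python
# Nearest-neighbour embedding selection sketch (illustrative data only).
import numpy as np

def select_embedding(query_feats, train_feats, train_embeddings):
    """Return the stored acoustic embedding of the most similar training item."""
    q = query_feats / (np.linalg.norm(query_feats) + 1e-9)
    t = train_feats / (np.linalg.norm(train_feats, axis=1, keepdims=True) + 1e-9)
    best = int(np.argmax(t @ q))          # cosine similarity to every training item
    return train_embeddings[best]

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 768))        # linguistic features of training utterances
train_embeddings = rng.normal(size=(1000, 64))    # paired acoustic (VAE) embeddings
query = rng.normal(size=768)                      # features of the sentence to synthesize
print(select_embedding(query, train_feats, train_embeddings).shape)   # (64,)
```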
859
How many domains of ontologies do they gather data from?
5 domains: software, stuff, African wildlife, healthcare, datatypes
A core step in statistical data-to-text generation concerns learning correspondences between structured data representations (e.g., facts in a database) and associated texts. In this paper we aim to bootstrap generators from large scale datasets where the data (e.g., DBPedia facts) and related texts (e.g., Wikipedia abstracts) are loosely aligned. We tackle this challenging task by introducing a special-purpose content selection mechanism. We use multi-instance learning to automatically discover correspondences between data and text pairs and show how these can be used to enhance the content signal while training an encoder-decoder architecture. Experimental results demonstrate that models trained with content-specific objectives improve upon a vanilla encoder-decoder which solely relies on soft attention.
A core step in statistical data-to-text generation concerns learning correspondences between structured data representations (e.g., facts in a database) and paired texts BIBREF0, BIBREF1, BIBREF2. These correspondences describe how data representations are expressed in natural language (content realisation) but also indicate which subset of the data is verbalised in the text (content selection). Although content selection is traditionally performed by domain experts, recent advances in generation using neural networks BIBREF3, BIBREF4 have led to the use of large scale datasets containing loosely related data and text pairs. A prime example is online data sources like DBPedia BIBREF5 and Wikipedia and their associated texts, which are often independently edited. Another example is sports databases and related textual resources. Wiseman et al. (2017) recently define a generation task relating statistics of basketball games with commentaries and a blog written by fans. In this paper, we focus on short text generation from such loosely aligned data-text resources. We work with the biographical subset of the DBPedia and Wikipedia resources where the data corresponds to DBPedia facts and texts are Wikipedia abstracts about people. Figure 1 shows an example for the film-maker Robert Flaherty, the Wikipedia infobox, and the corresponding abstract. We wish to bootstrap a data-to-text generator that learns to verbalise properties about an entity from a loosely related example text. Given the set of properties in Figure 1(a) and the related text in Figure 1(b), we want to learn verbalisations for those properties that are mentioned in the text and produce a short description like the one in Figure 1(c). In common with previous work BIBREF6, BIBREF7, BIBREF8, our model draws on insights from neural machine translation BIBREF3, BIBREF9, using an encoder-decoder architecture as its backbone. BIBREF7 introduce the task of generating biographies from Wikipedia data, however they focus on single sentence generation. We generalize the task to multi-sentence text, and highlight the limitations of the standard attention mechanism which is often used as a proxy for content selection. When exposed to sub-sequences that do not correspond to any facts in the input, the soft attention mechanism will still try to justify the sequence and somehow distribute the attention weights over the input representation BIBREF10. The decoder will still memorise high frequency sub-sequences in spite of these not being supported by any facts in the input. We propose to alleviate these shortcomings via a specific content selection mechanism based on multi-instance learning (MIL; BIBREF11) which automatically discovers correspondences, namely alignments, between data and text pairs. These alignments are then used to modify the generation function during training. We experiment with two frameworks that allow us to incorporate alignment information, namely multi-task learning (MTL; BIBREF12) and reinforcement learning (RL; BIBREF13). In both cases we define novel objective functions using the learnt alignments. Experimental results using automatic and human-based evaluation show that models trained with content-specific objectives improve upon vanilla encoder-decoder architectures which rely solely on soft attention. The remainder of this paper is organised as follows.
We discuss related work in Section "Related Work" and describe the MIL-based content selection approach in Section "Bidirectional Content Selection" . We explain how the generator is trained in Section "Generator Training" and present evaluation experiments in Section "Experimental Setup" . Section "Conclusions" concludes the paper.
861
what is the practical application for this paper?
Improve existing NLP methods. Improve linguistic analysis. Measure impact of word normalization tools.
Answering science questions posed in natural language is an important AI challenge. Answering such questions often requires non-trivial inference and knowledge that goes beyond factoid retrieval. Yet, most systems for this task are based on relatively shallow Information Retrieval (IR) and statistical correlation techniques operating on large unstructured corpora. We propose a structured inference system for this task, formulated as an Integer Linear Program (ILP), that answers natural language questions using a semi-structured knowledge base derived from text, including questions requiring multi-step inference and a combination of multiple facts. On a dataset of real, unseen science questions, our system significantly outperforms (+14%) the best previous attempt at structured reasoning for this task, which used Markov Logic Networks (MLNs). It also improves upon a previous ILP formulation by 17.7%. When combined with unstructured inference methods, the ILP system significantly boosts overall performance (+10%). Finally, we show our approach is substantially more robust to a simple answer perturbation compared to statistical correlation methods.
Answering questions posed in natural language is a fundamental AI task, with a large number of impressive QA systems built over the years. Today's Internet search engines, for instance, can successfully retrieve factoid style answers to many natural language queries by efficiently searching the Web. Information Retrieval (IR) systems work under the assumption that answers to many questions of interest are often explicitly stated somewhere BIBREF0 , and all one needs, in principle, is access to a sufficiently large corpus. Similarly, statistical correlation based methods, such as those using Pointwise Mutual Information or PMI BIBREF1 , work under the assumption that many questions can be answered by looking for words that tend to co-occur with the question words in a large corpus. While both of these approaches help identify correct answers, they are not suitable for questions requiring reasoning, such as chaining together multiple facts in order to arrive at a conclusion. Arguably, such reasoning is a cornerstone of human intelligence, and is a key ability evaluated by standardized science exams given to students. For example, consider a question from the NY Regents 4th Grade Science Test: We would like a QA system that, even if the answer is not explicitly stated in a document, can combine basic scientific and geographic facts to answer the question, e.g., New York is in the north hemisphere; the longest day occurs during the summer solstice; and the summer solstice in the north hemisphere occurs in June (hence the answer is June). Figure 1 illustrates how our system approaches this, with the highlighted support graph representing its line of reasoning. Further, we would like the system to be robust under simple perturbations, such as changing New York to New Zealand (in the southern hemisphere) or changing an incorrect answer option to an irrelevant word such as “last” that happens to have high co-occurrence with the question text. To this end, we propose a structured reasoning system, called TableILP, that operates over a semi-structured knowledge base derived from text and answers questions by chaining multiple pieces of information and combining parallel evidence. The knowledge base consists of tables, each of which is a collection of instances of an $n$ -ary relation defined over natural language phrases. E.g., as illustrated in Figure 1 , a simple table with schema (country, hemisphere) might contain the instance (United States, Northern) while a ternary table with schema (hemisphere, orbital event, month) might contain (North, Summer Solstice, June). TableILP treats lexical constituents of the question $Q$ , as well as cells of potentially relevant tables $T$ , as nodes in a large graph $\mathcal {G}_{Q,T}$ , and attempts to find a subgraph $G$ of $\mathcal {G}_{Q,T}$ that “best” supports an answer option. The notion of best support is captured via a number of structural and semantic constraints and preferences, which are conveniently expressed in the Integer Linear Programming (ILP) formalism. We then use an off-the-shelf ILP optimization engine called SCIP BIBREF3 to determine the best supported answer for $Q$ . Following a recently proposed AI challenge BIBREF4 , we evaluate TableILP on unseen elementary-school science questions from standardized tests. Specifically, we consider a challenge set BIBREF2 consisting of all non-diagram multiple choice questions from 6 years of NY Regents 4th grade science exams. 
In contrast to a state-of-the-art structured inference method BIBREF5 for this task, which used Markov Logic Networks (MLNs) BIBREF6 , TableILP achieves a significantly (+14% absolute) higher test score. This suggests that a combination of a rich and fine-grained constraint language, namely ILP, even with a publicly available solver is more effective in practice than various MLN formulations of the task. Further, while the scalability of the MLN formulations was limited to very few (typically one or two) selected science rules at a time, our approach easily scales to hundreds of relevant scientific facts. It also complements the kind of questions amenable to IR and PMI techniques, as is evidenced by the fact that a combination (trained using simple Logistic Regression BIBREF2 ) of TableILP with IR and PMI results in a significant (+10% absolute) boost in the score compared to IR alone. Our ablation study suggests that combining facts from multiple tables or multiple rows within a table plays an important role in TableILP's performance. We also show that TableILP benefits from the table structure, by comparing it with an IR system using the same knowledge (the table rows) but expressed as simple sentences; TableILP scores significantly (+10%) higher. Finally, we demonstrate that our approach is robust to a simple perturbation of incorrect answer options: while the simple perturbation results in a relative drop of 20% and 33% in the performance of IR and PMI methods, respectively, it affects TableILP's performance by only 12%.
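The following toy ILP (built with PuLP) is only meant to convey the flavour of the support-graph selection described above; the rows, scores, and constraints are made up and far simpler than TableILP's actual formulation.

```python
# Toy support-graph ILP: pick one answer option and a small set of table rows that
# best support it. Scores are hypothetical lexical-overlap values.
import pulp

rows = ["(United States, Northern)", "(North, Summer Solstice, June)"]
options = ["January", "June"]
score = {("June", 0): 0.2, ("June", 1): 0.9, ("January", 0): 0.2, ("January", 1): 0.1}

prob = pulp.LpProblem("support_graph", pulp.LpMaximize)
use_row = {r: pulp.LpVariable(f"row_{r}", cat="Binary") for r in range(len(rows))}
pick = {o: pulp.LpVariable(f"opt_{o}", cat="Binary") for o in options}
link = {(o, r): pulp.LpVariable(f"link_{o}_{r}", cat="Binary")
        for o in options for r in range(len(rows))}

prob += pulp.lpSum(score[o, r] * link[o, r] for o in options for r in range(len(rows)))
prob += pulp.lpSum(pick[o] for o in options) == 1          # choose exactly one answer
for o in options:
    for r in range(len(rows)):
        prob += link[o, r] <= pick[o]                      # links only to the chosen answer
        prob += link[o, r] <= use_row[r]                   # ...and only to used rows
prob += pulp.lpSum(use_row[r] for r in range(len(rows))) <= 2   # bounded support size

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("answer:", [o for o in options if pick[o].value() == 1])   # -> ['June']
```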
863
What's the method used here?
Two neural networks: an extractor based on an encoder (BERT) and a decoder (LSTM Pointer Network BIBREF22) and an abstractor identical to the one proposed in BIBREF8.
Interpretability of a predictive model is a powerful feature that gains the trust of users in the correctness of the predictions. In word sense disambiguation (WSD), knowledge-based systems tend to be much more interpretable than knowledge-free counterparts as they rely on the wealth of manually-encoded elements representing word senses, such as hypernyms, usage examples, and images. We present a WSD system that bridges the gap between these two so far disconnected groups of methods. Namely, our system, providing access to several state-of-the-art WSD models, aims to be interpretable as a knowledge-based system while it remains completely unsupervised and knowledge-free. The presented tool features a Web interface for all-word disambiguation of texts that makes the sense predictions human readable by providing interpretable word sense inventories, sense representations, and disambiguation results. We provide a public API, enabling seamless integration.
The notion of word sense is central to computational lexical semantics. Word senses can be either encoded manually in lexical resources or induced automatically from text. The former knowledge-based sense representations, such as those found in the BabelNet lexical semantic network BIBREF0, are easily interpretable by humans due to the presence of definitions, usage examples, taxonomic relations, related words, and images. The cost of such interpretability is that every element mentioned above is encoded manually in one of the underlying resources, such as Wikipedia. Unsupervised knowledge-free approaches, e.g. BIBREF1, BIBREF2, require no manual labor, but the resulting sense representations lack the above-mentioned features enabling interpretability. For instance, systems based on sense embeddings are based on dense uninterpretable vectors. Therefore, the meaning of a sense can be interpreted only on the basis of a list of related senses. We present a system that brings interpretability of the knowledge-based sense representations into the world of unsupervised knowledge-free WSD models. The contribution of this paper is the first system for word sense induction and disambiguation, which is unsupervised, knowledge-free, and interpretable at the same time. The system is based on the WSD approach of Panchenko et al. (2017) and is designed to reach the interpretability level of knowledge-based systems, such as Babelfy BIBREF3, within an unsupervised knowledge-free framework. Implementation of the system is open source. A live demo featuring several disambiguation models is available online.
864
By how much does their method outperform state-of-the-art OOD detection?
AE-HCN outperforms by 17%, AE-HCN-CNN outperforms by 20% on average
As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of extracting salient sentences from a document first and then paraphrasing the selected ones to generate a summary. However, the existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between a training objective and evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its ability on natural language understanding. In extensive experiments, we show that a combination of our proposed model and training procedure obtains new state-of-the-art performance on both CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on DUC-2002 test set.
The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information of the original text. In general, there are two ways to do text summarization: Extractive and Abstractive BIBREF0. Extractive approaches generate summaries by selecting salient sentences or phrases from a source text, while abstractive approaches involve a process of paraphrasing or generating sentences to write a summary. Recent work BIBREF1, BIBREF2 demonstrates that it is highly beneficial for extractive summarization models to incorporate pre-trained language models (LMs) such as BERT BIBREF3 into their architectures. However, the performance improvement from the pre-trained LMs is known to be relatively small in case of abstractive summarization BIBREF4, BIBREF5. This discrepancy may be due to the difference between extractive and abstractive approaches in ways of dealing with the task—the former classifies whether each sentence to be included in a summary, while the latter generates a whole summary from scratch. In other words, as most of the pre-trained LMs are designed to be of help to the tasks which can be categorized as classification including extractive summarization, they are not guaranteed to be advantageous to abstractive summarization models that should be capable of generating language BIBREF6, BIBREF7. On the other hand, recent studies for abstractive summarization BIBREF8, BIBREF9, BIBREF10 have attempted to exploit extractive models. Among these, a notable one is BIBREF8, in which a sophisticated model called Reinforce-Selected Sentence Rewriting is proposed. The model consists of both an extractor and abstractor, where the extractor picks out salient sentences first from a source article, and then the abstractor rewrites and compresses the extracted sentences into a complete summary. It is further fine-tuned by training the extractor with the rewards derived from sentence-level ROUGE scores of the summary generated from the abstractor. In this paper, we improve the model of BIBREF8, addressing two primary issues. Firstly, we argue there is a bottleneck in the existing extractor on the basis of the observation that its performance as an independent summarization model (i.e., without the abstractor) is no better than solid baselines such as selecting the first 3 sentences. To resolve the problem, we present a novel neural extractor exploiting the pre-trained LMs (BERT in this work) which are expected to perform better according to the recent studies BIBREF1, BIBREF2. Since the extractor is a sort of sentence classifier, we expect that it can make good use of the ability of pre-trained LMs which is proven to be effective in classification. Secondly, the other point is that there is a mismatch between the training objective and evaluation metric; the previous work utilizes the sentence-level ROUGE scores as a reinforcement learning objective, while the final performance of a summarization model is evaluated by the summary-level ROUGE scores. Moreover, as BIBREF11 pointed out, sentences with the highest individual ROUGE scores do not necessarily lead to an optimal summary, since they may contain overlapping contents, causing verbose and redundant summaries. Therefore, we propose to directly use the summary-level ROUGE scores as an objective instead of the sentence-level scores. 
A potential problem arising from this approach is the sparsity of training signals, because the summary-level ROUGE scores are calculated only once for each training episode. To alleviate this problem, we use reward shaping BIBREF12 to give an intermediate signal for each action, preserving the optimal policy. We empirically demonstrate the superiority of our approach by achieving new state-of-the-art abstractive summarization results on the CNN/Daily Mail and New York Times datasets BIBREF13, BIBREF14. It is worth noting that our approach shows large improvements especially on the ROUGE-L score, which is considered a means of assessing fluency BIBREF11. In addition, our model performs much better than previous work when testing on the DUC-2002 dataset, showing better generalization and robustness of our model. Our contributions in this work are three-fold: a novel successful application of pre-trained transformers for abstractive summarization; suggesting a training method to globally optimize sentence selection; achieving state-of-the-art results on the benchmark datasets, CNN/Daily Mail and New York Times.
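A small sketch of the summary-level reward with reward shaping described above, using the rouge-score package: each extraction step is credited with the gain in summary-level ROUGE-L it contributes, so the per-step rewards sum to the final score. The sentences below are dummy data, not the paper's.

```python
# Shaped per-step rewards from a summary-level ROUGE-L score (illustrative).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def shaped_rewards(selected_sentences, reference_summary):
    rewards, prev, partial = [], 0.0, []
    for sent in selected_sentences:
        partial.append(sent)
        score = scorer.score(reference_summary, " ".join(partial))["rougeL"].fmeasure
        rewards.append(score - prev)   # intermediate signal for this extraction step
        prev = score
    return rewards                     # sums to the final summary-level ROUGE-L

ref = "the cat sat on the mat and then slept"
print(shaped_rewards(["the cat sat on the mat", "then it slept"], ref))
```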
865
What are dilated convolutions?
They are similar to standard convolutions, but the filter skips some input values, effectively operating on a broader scale (a larger receptive field) with the same number of parameters.
Neural dialog models often lack robustness to anomalous user input and produce inappropriate responses which leads to frustrating user experience. Although there are a set of prior approaches to out-of-domain (OOD) utterance detection, they share a few restrictions: they rely on OOD data or multiple sub-domains, and their OOD detection is context-independent which leads to suboptimal performance in a dialog. The goal of this paper is to propose a novel OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog. For the sake of fostering further research, we also release new dialog datasets which are 3 publicly available dialog corpora augmented with OOD turns in a controllable way. Our method outperforms state-of-the-art dialog models equipped with a conventional OOD detection mechanism by a large margin in the presence of OOD utterances.
Recently, there has been a surge of excitement in developing chatbots for various purposes in research and enterprise. Data-driven approaches offered by common bot building platforms (e.g. Google Dialogflow, Amazon Alexa Skills Kit, Microsoft Bot Framework) make it possible for a wide range of users to easily create dialog systems with a limited amount of data in their domain of interest. Although most task-oriented dialog systems are built for a closed set of target domains, any failure to detect out-of-domain (OOD) utterances and respond with an appropriate fallback action can lead to a frustrating user experience. There have been several prior approaches to OOD detection which require both in-domain (IND) and OOD data BIBREF0, BIBREF1. However, it is a formidable task to collect sufficient data to cover the in-theory unbounded variety of OOD utterances. In contrast, BIBREF2 introduced an in-domain verification method that requires only IND utterances. Later, with the rise of deep neural networks, BIBREF3 proposed an autoencoder-based OOD detection method which surpasses prior approaches without access to OOD data. However, those approaches still have some restrictions: there must be multiple sub-domains to learn the utterance representation, and one must set a decision threshold for OOD detection. This can prohibit these methods from being used for most bots that focus on a single task. The goal of this paper is to propose a novel OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog. Most prior approaches do not consider dialog context and make predictions for each utterance independently. We will show that this independent decision leads to suboptimal performance even when actual OOD utterances are given to optimize the model and that the use of dialog context helps reduce OOD detection errors. To consider dialog context, we need to connect the OOD detection task with the overall dialog task. Thus, for this work, we build upon Hybrid Code Networks (HCN) BIBREF4 since HCNs achieve state-of-the-art performance in a data-efficient way for task-oriented dialogs, and propose AE-HCNs which extend HCNs with an autoencoder (Figure FIGREF8). Furthermore, we release new dialog datasets which are three publicly available dialog corpora augmented with OOD turns in a controlled way (exemplified in Table TABREF2) to foster further research.
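A minimal sketch of the autoencoder-based OOD signal underlying AE-HCN-style models: train an autoencoder to reconstruct in-domain utterance encodings, and treat a high reconstruction error on a new utterance as evidence that it is out-of-domain. Dimensions and data are illustrative, and the integration with the dialog model is omitted.

```python
# Reconstruction-error OOD scoring sketch (toy data, toy autoencoder).
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(128, 32), nn.ReLU(),   # encoder
    nn.Linear(32, 128),              # decoder
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

in_domain = torch.randn(256, 128)            # encodings of in-domain utterances
for _ in range(200):                          # train on IND data only
    recon = autoencoder(in_domain)
    loss = nn.functional.mse_loss(recon, in_domain)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def ood_score(utterance_encoding):
    with torch.no_grad():
        recon = autoencoder(utterance_encoding)
        return nn.functional.mse_loss(recon, utterance_encoding).item()

print("IND-like:", ood_score(in_domain[0]))
print("OOD-like:", ood_score(torch.randn(128) * 5))   # larger error expected
```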
868
what are the three methods presented in the paper?
Optimized TF-IDF, iterated TF-IDF, BERT re-ranking.
There is a lot of research interest in encoding variable length sentences into fixed length vectors, in a way that preserves the sentence meanings. Two common methods include representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties that are encoded in these sentence representations and about the language information they capture. We propose a framework that facilitates better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.
Parameters of the encoder-decoder were tuned on a dedicated validation set. We experimented with different learning rates (0.1, 0.01, 0.001), dropout rates (0.1, 0.2, 0.3, 0.5) BIBREF11 and optimization techniques (AdaGrad BIBREF6, AdaDelta BIBREF30, Adam BIBREF15 and RMSprop BIBREF29). We also experimented with different batch sizes (8, 16, 32), and found improvement in runtime but no significant improvement in performance. Based on the tuned parameters, we trained the encoder-decoder models on a single GPU (NVIDIA Tesla K40), with mini-batches of 32 sentences, a learning rate of 0.01, a dropout rate of 0.1, and the AdaGrad optimizer; training takes approximately 10 days and is stopped after 5 epochs with no loss improvement on a validation set.
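The final configuration above can be summarised in a few lines; the model in this sketch is a placeholder, and only the optimizer, dropout, batch size and early-stopping rule reflect the text.

```python
# Training-configuration sketch: AdaGrad, lr=0.01, dropout=0.1, batch size 32,
# early stop after 5 epochs without validation-loss improvement. Model is a dummy.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(300, 512), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(512, 300))
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
batch_size, patience = 32, 5

best_val, epochs_without_improvement = float("inf"), 0
for epoch in range(100):
    # ... run one epoch of mini-batch training here ...
    val_loss = max(0.2, 1.0 / (epoch + 1))       # placeholder validation loss
    if val_loss < best_val:
        best_val, epochs_without_improvement = val_loss, 0
    else:
        epochs_without_improvement += 1
    if epochs_without_improvement >= patience:   # stop after 5 epochs with no improvement
        break
```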
869
what datasets did the authors use?
Kaggle, Subversive Kaggle, Wikipedia, Subversive Wikipedia, Reddit, Subversive Reddit
The TextGraphs-13 Shared Task on Explanation Regeneration asked participants to develop methods to reconstruct gold explanations for elementary science questions. Red Dragon AI's entries used the language of the questions and explanation text directly, rather than constructing a separate graph-like representation. Our leaderboard submission placed us 3rd in the competition, but we present here three methods of increasing sophistication, each of which scored successively higher on the test set after the competition closed.
The Explanation Regeneration shared task asked participants to develop methods to reconstruct gold explanations for elementary science questions BIBREF1, using a new corpus of gold explanations BIBREF2 that provides supervision and instrumentation for this multi-hop inference task. Each explanation is represented as an “explanation graph”, a set of atomic facts (between 1 and 16 per explanation, drawn from a knowledge base of 5,000 facts) that, together, form a detailed explanation of the reasoning required to answer a question and the reasoning behind that answer. Linking these facts to achieve strong performance at rebuilding the gold explanation graphs requires methods to perform multi-hop inference - which has been shown to be far harder than inference over smaller numbers of hops BIBREF3, particularly for the case here, where there is considerable uncertainty (at a lexical level) about how individual explanation facts logically link somewhat `fuzzy' graph nodes.
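An illustrative TF-IDF ranking baseline for the explanation-regeneration task described above (not necessarily the authors' exact method): rank knowledge-base facts by cosine similarity to the question text. The tiny fact list is invented; the real task uses a knowledge base of roughly 5,000 facts.

```python
# TF-IDF fact ranking sketch for explanation regeneration (toy fact list).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

facts = [
    "a leaf is a part of a green plant",
    "photosynthesis means producers convert carbon dioxide and water into food",
    "an animal is a kind of living thing",
]
question = "Which part of a plant makes food by photosynthesis? leaf"

vectorizer = TfidfVectorizer()
fact_vecs = vectorizer.fit_transform(facts)
q_vec = vectorizer.transform([question])
scores = cosine_similarity(q_vec, fact_vecs)[0]
ranking = scores.argsort()[::-1]
print([facts[i] for i in ranking])   # facts ordered by predicted relevance
```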
874
How much performance improvements they achieve on SQuAD?
Compared to baselines SAN (Table 1) shows improvement of 1.096% on EM and 0.689% F1. Compared to other published SQuAD results (Table 2) SAN is ranked second.
In this paper, we describe our system which participates in the shared task of Hate Speech Detection on Social Networks of VLSP 2019 evaluation campaign. We are provided with the pre-labeled dataset and an unlabeled dataset for social media comments or posts. Our mission is to pre-process and build machine learning models to classify comments/posts. In this report, we use Bidirectional Long Short-Term Memory to build the model that can predict labels for social media text according to Clean, Offensive, Hate. With this system, we achieve comparative results with 71.43% on the public standard test set of VLSP 2019.
In recent years, social networking has grown and become prevalent among all kinds of people, making it easy for them to interact and share with each other. However, it also has negative sides, and hate speech has become a hot topic in the domain of social media. With the freedom of speech on social networks and anonymity on the internet, some people feel free to post hateful and insulting comments. Hate speech can have an adverse effect on human behavior as well as directly affect society. Manually deleting each of those comments is time-consuming and tedious, which spurs research into automated systems that detect hate speech and eliminate it, thereby reducing its spread on social media. For Vietnamese, one can manually design specific feature extraction techniques and combine them with sequence labeling algorithms such as Conditional Random Fields (CRF)[1], Hidden Markov Models (HMM)[2] or Entropy[3]. However, the features have to be chosen manually to obtain a model with high accuracy. Deep Neural Network architectures can handle the weaknesses of the above methods. In this report we apply Bidirectional Long Short-Term Memory (Bi-LSTM) to build the model, combined with a word embedding matrix to increase its accuracy. The rest of the paper is organized as follows. In section 2, we present related work. In section 3, we describe our Bi-LSTM system. In sections 4 and 5, we present the experimental process and results. Finally, section 6 gives conclusions about the work.
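A minimal Keras sketch of a Bi-LSTM classifier of the kind described above; vocabulary size, sequence length, and hyperparameters are placeholders, and the pretrained word-embedding matrix mentioned in the text is omitted.

```python
# Bi-LSTM text classifier sketch for the three labels Clean / Offensive / Hate.
import tensorflow as tf

vocab_size, max_len, embed_dim = 20000, 100, 300
model = tf.keras.Sequential([
    tf.keras.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),   # Clean / Offensive / Hate
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, validation_split=0.1, epochs=5, batch_size=64)
```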
879
What is the baseline?
The baseline is a multi-task architecture inspired by another paper.
We analyze the language learned by an agent trained with reinforcement learning as a component of the ActiveQA system [Buck et al., 2017]. In ActiveQA, question answering is framed as a reinforcement learning task in which an agent sits between the user and a black box question-answering system. The agent learns to reformulate the user's questions to elicit the optimal answers. It probes the system with many versions of a question that are generated via a sequence-to-sequence question reformulation model, then aggregates the returned evidence to find the best answer. This process is an instance of \emph{machine-machine} communication. The question reformulation model must adapt its language to increase the quality of the answers returned, matching the language of the question answering system. We find that the agent does not learn transformations that align with semantic intuitions but discovers through learning classical information retrieval techniques such as tf-idf re-weighting and stemming.
BIBREF0 propose a reinforcement learning framework for question answering, called active question answering (ActiveQA), that aims to improve answering by systematically perturbing input questions (cf. BIBREF1). Figure 1 depicts the generic agent-environment framework. The agent (AQA) interacts with the environment (E) in order to answer a question ($q_0$). The environment includes a question answering system (Q&A), and emits observations and rewards. A state $s_t$ at time $t$ is the sequence of observations and previous actions generated starting from $q_0$: $s_t = x_0, u_0, x_1, \ldots, u_{t-1}, x_t$, where $x_i$ includes the question asked ($q_i$), the corresponding answer returned by the QA system ($a_i$), and possibly additional information such as features or auxiliary tasks. The agent includes an action scoring component (U), which produces an action $u_t$ by deciding whether to submit a new question to the environment or to return a final answer. Formally, $u_t \in \mathcal{Q} \cup \mathcal{A}$, where $\mathcal{Q}$ is the set of all possible questions and $\mathcal{A}$ is the set of all possible answers. The agent relies on a question reformulation system (QR), which provides candidate follow-up questions, and on an answer ranking system (AR), which scores the answers contained in $s_t$. Each answer returned is assigned a reward. The objective is to maximize the expected reward over a set of questions. BIBREF0 present a simplified version of this system with three core components: a question reformulator, an off-the-shelf black box QA system, and a candidate answer selection model. The question reformulator is trained with policy gradient BIBREF2 to optimize the F1 score of the answers returned by the QA system to the question reformulations in place of the original question. The reformulator is implemented as a sequence-to-sequence model of the kind used for machine translation BIBREF3, BIBREF4. When generating question reformulations, the action-space is equal to the size of the vocabulary, typically $16k$ sentence pieces. Due to this large number of actions we warm start the reformulation policy with a monolingual sequence-to-sequence model that performs generic paraphrasing. This model is trained using the zero-shot translation technique BIBREF5 on a large multilingual parallel corpus BIBREF6, followed by regular supervised learning on a smaller monolingual corpus of questions BIBREF7. The reformulation and selection models form a trainable agent that seeks the best answers from the QA system. The reformulator proposes $N$ versions $q_i$ of the input question $q_0$ and passes them to the environment, which provides $N$ corresponding answers, $a_i$. The selection model scores each triple $(q_0, q_i, a_i)$ and returns the top-scoring candidate. Crucially, the agent may only query the environment with natural language questions. Thus, ActiveQA involves a machine-machine communication process inspired by the human-machine communication that takes place when users interact with digital services during information seeking tasks. For example, while searching for information on a search engine users tend to adopt a keyword-like `queryese' style of questioning. The AQA agent proves effective at reformulating questions on SearchQA BIBREF8, a large dataset of complex questions from the Jeopardy! game. For this task BiDAF is chosen for the environment BIBREF9, a deep network built for QA which has produced state-of-the-art results.
Compared to a QA system that forms the environment using only the original questions, AQA outperforms this baseline by a wide margin, 11.4% absolute F1, thereby reducing the gap between machine (BiDAF) and human performance by 66%. Here we perform a qualitative analysis of this communication process to better understand what kind of language the agent has learned. We find that while optimizing its reformulations to adapt to the language of the QA system, AQA diverges from well structured language in favour of less fluent, but more effective, classic information retrieval (IR) query operations. These include term re-weighting (tf-idf), expansion and morphological simplification/stemming. We hypothesize that the explanation of this behaviour is that current machine comprehension tasks primarily require ranking of short textual snippets, thus incentivizing relevance more than deep language understanding.
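A schematic sketch of the AQA loop described above. Only the control flow reflects the paper; the reformulator, QA environment, and selection model are replaced by dummy stubs.

```python
# ActiveQA control-flow sketch with stub components (every function body is a dummy).
def reformulate(question, n=3):                 # seq2seq reformulator (stub)
    return [f"{question} (rewrite {i})" for i in range(n)]

def qa_environment(question):                   # black-box QA system, e.g. BiDAF (stub)
    return f"answer to <{question}>"

def selection_score(q0, qi, ai):                # answer-ranking model (stub)
    return len(ai)

def aqa_answer(q0):
    candidates = []
    for qi in [q0] + reformulate(q0):
        ai = qa_environment(qi)                 # query the environment in natural language
        candidates.append((selection_score(q0, qi, ai), qi, ai))
    _, best_q, best_a = max(candidates)         # return the top-scoring candidate
    return best_q, best_a

print(aqa_answer("What is the largest city in New Zealand?"))
```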
881
What does recurrent deep stacking network do?
Stacks and joins outputs of previous frames with inputs of the current frame
Interest in larger-context neural machine translation, including document-level and multi-modal translation, has been growing. Multiple works have proposed new network architectures or evaluation schemes, but potentially helpful context is still sometimes ignored by larger-context translation models. In this paper, we propose a novel learning algorithm that explicitly encourages a neural translation model to take into account additional context using a multilevel pair-wise ranking loss. We evaluate the proposed learning algorithm with a transformer-based larger-context translation system on document-level translation. By comparing performance using actual and random contexts, we show that a model trained with the proposed algorithm is more sensitive to the additional context.
Despite its rapid adoption by academia and industry and its recent success BIBREF0 , neural machine translation has been found largely incapable of exploiting additional context other than the current source sentence. This incapability stems from the fact that larger-context machine translation systems tend to ignore additional context, such as previous sentences and associated images. Much recent effort has gone into building novel network architectures that can better exploit additional context, however without much success BIBREF1 , BIBREF2 , BIBREF3 . In this paper, we approach the problem of larger-context neural machine translation from the perspective of “learning” instead. We propose to explicitly encourage the model to exploit additional context by assigning a higher log-probability to a translation paired with a correct context than to one paired with an incorrect context. We design this regularization term to be applied at the token, sentence and batch levels to cope with the fact that the benefit from additional context may differ from one level to another. Our experiments on document-level translation using a modified transformer BIBREF4 reveal that the model trained using the proposed learning algorithm is indeed sensitive to the context, contrary to some previous works BIBREF1 . We also see a small improvement in terms of overall quality (measured in BLEU). These two observations together suggest that the proposed approach is a promising direction toward building an effective larger-context neural translation model.
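As a rough illustration of the proposed learning signal, the sketch below implements a sentence-level pair-wise ranking (hinge) loss that prefers the log-probability of a translation under the correct context over its log-probability under a randomly sampled context; the function name, margin value, and tensors are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sentence-level ranking regularizer: the log-probability of a
# translation given the correct context should exceed, by a margin, its
# log-probability given a randomly sampled (incorrect) context.
import torch

def context_ranking_loss(logp_correct: torch.Tensor,
                         logp_random: torch.Tensor,
                         margin: float = 1.0) -> torch.Tensor:
    """Hinge-style pair-wise ranking loss averaged over a batch of sentences."""
    return torch.clamp(margin - (logp_correct - logp_random), min=0.0).mean()

# Example usage with dummy per-sentence log-likelihoods for a batch of 4:
loss = context_ranking_loss(torch.tensor([-10.2, -8.1, -12.5, -9.0]),
                            torch.tensor([-10.0, -9.3, -12.9, -9.4]))
```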
882
What is the reward model for the reinforcement learning appraoch?
reward 1 for successfully completing the task, discounted by the number of turns, and reward 0 on failure
This paper presented our work on applying Recurrent Deep Stacking Networks (RDSNs) to Robust Automatic Speech Recognition (ASR) tasks. In the paper, we also proposed a more efficient yet comparable substitute to RDSN, the Bi-Pass Stacking Network (BPSN). The main idea of these two models is to add phoneme-level information into acoustic models, transforming an acoustic model into the combination of an acoustic model and a phoneme-level N-gram model. Experiments showed that RDSN and BPSN can substantially improve performance over conventional DNNs.
Ever since the introduction of Deep Neural Networks (DNNs) to Automatic Speech Recognition (ASR) tasks BIBREF0 , researchers have been trying to supply additional inputs beyond the raw input features. We extracted more representative features using the first- and second-order derivatives of the raw input features, and we utilized features from multiple neighboring frames to make use of context information. Efforts have also been continuously made in designing and modifying more powerful models. We designed Recurrent Neural Networks (RNNs) BIBREF1 for context-sensitive applications, Convolutional Neural Networks (CNNs) BIBREF2 for image pattern classification, and many other variants of conventional DNNs. In addition, we re-introduced Long Short-Term Memory (LSTM) BIBREF3 , making our DNNs more capable of incorporating large amounts of data and making accurate predictions. In the area of Robust ASR, although it is always helpful to incorporate more data, we still lack a model as well designed as the CNN in Computer Vision (CV). Many methods have been proposed on both the front end BIBREF4 and the back end. The models in this paper belong to the back-end methods. Inspired by recent progress in Natural Language Processing BIBREF5 , we proposed the Recurrent Deep Stacking Network (RDSN) and successfully applied it to Speech Enhancement tasks. RDSN utilizes the phoneme information from previous frames as additional input alongside the raw features. Viewed from another perspective, this framework transforms the Acoustic Model into a hybrid model consisting of an Acoustic Model and a simple phoneme-level N-gram Language Model. In the next section, we explain the framework of RDSN and the tricks used to compress the outputs; we then present the experimental results and conclude.
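A minimal sketch of the input construction implied by this description, assuming the phoneme posteriors of the previous k frames are simply concatenated with the current acoustic features; the dimensions and window size are illustrative, not the paper's settings.

```python
# Conceptual sketch of the RDSN input: the acoustic features of the current
# frame are stacked with the phoneme posteriors predicted for the previous k
# frames, so the acoustic model also behaves like a phoneme-level n-gram model.
import numpy as np

def stack_inputs(acoustic: np.ndarray, posteriors: np.ndarray, k: int = 2) -> np.ndarray:
    """acoustic: (T, d_feat); posteriors: (T, n_phones) from the previous pass."""
    T = acoustic.shape[0]
    parts = [acoustic]
    for j in range(1, k + 1):
        # posteriors of frame t-j, zero-padded at the start of the utterance
        shifted = np.vstack([np.zeros((j, posteriors.shape[1])), posteriors[:T - j]])
        parts.append(shifted)
    return np.concatenate(parts, axis=1)           # (T, d_feat + k * n_phones)

# Example: 100 frames, 40-dim filterbank features, 42 phoneme classes.
x = stack_inputs(np.random.randn(100, 40), np.random.rand(100, 42), k=2)
assert x.shape == (100, 40 + 2 * 42)
```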
883
Does this paper propose a new task that others can try to improve performance on?
No, there has been previous work on recognizing social norm violation.
End-to-end learning of recurrent neural networks (RNNs) is an attractive solution for dialog systems; however, current techniques are data-intensive and require thousands of dialogs to learn simple behaviors. We introduce Hybrid Code Networks (HCNs), which combine an RNN with domain-specific knowledge encoded as software and system action templates. Compared to existing end-to-end approaches, HCNs considerably reduce the amount of training data required, while retaining the key benefit of inferring a latent representation of dialog state. In addition, HCNs can be optimized with supervised learning, reinforcement learning, or a mixture of both. HCNs attain state-of-the-art performance on the bAbI dialog dataset, and outperform two commercially deployed customer-facing dialog systems.
Task-oriented dialog systems help a user to accomplish some goal using natural language, such as making a restaurant reservation, getting technical support, or placing a phone call. Historically, these dialog systems have been built as a pipeline, with modules for language understanding, state tracking, action selection, and language generation. However, dependencies between modules introduce considerable complexity – for example, it is often unclear how to define the dialog state and what history to maintain, yet action selection relies exclusively on the state for input. Moreover, training each module requires specialized labels. Recently, end-to-end approaches have trained recurrent neural networks (RNNs) directly on text transcripts of dialogs. A key benefit is that the RNN infers a latent representation of state, obviating the need for state labels. However, end-to-end methods lack a general mechanism for injecting domain knowledge and constraints. For example, simple operations like sorting a list of database results or updating a dictionary of entities can be expressed in a few lines of software, yet may take thousands of dialogs to learn. Moreover, in some practical settings, programmed constraints are essential – for example, a banking dialog system would require that a user is logged in before they can retrieve account information. This paper presents a model for end-to-end learning, called Hybrid Code Networks (HCNs), which addresses these problems. In addition to learning an RNN, HCNs also allow a developer to express domain knowledge via software and action templates. Experiments show that, compared to existing recurrent end-to-end techniques, HCNs achieve the same performance with considerably less training data, while retaining the key benefit of end-to-end trainability. Moreover, the neural network can be trained with supervised learning or reinforcement learning, by changing the gradient update applied. This paper is organized as follows. Section "Model description" describes the model, and Section "Related work" compares the model to related work. Section "Supervised learning evaluation I" applies HCNs to the bAbI dialog dataset BIBREF0 . Section "Supervised learning evaluation II" then applies the method to real customer support domains at our company. Section "Reinforcement learning illustration" illustrates how HCNs can be optimized with reinforcement learning, and Section "Conclusion" concludes.
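The sketch below illustrates the general idea under assumed feature sizes and action names: an RNN scores a fixed set of action templates, while developer-written code masks out actions that violate domain constraints before the softmax. It is not the authors' implementation.

```python
# Minimal sketch of the Hybrid Code Network idea: an RNN scores action
# templates, but developer-supplied code first masks out actions that violate
# domain constraints (e.g. "retrieve balance" before the user is logged in).
import torch
import torch.nn as nn

ACTIONS = ["ask_name", "request_login", "retrieve_balance", "say_goodbye"]

class TinyHCN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(ACTIONS))

    def forward(self, feats: torch.Tensor, action_mask: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(feats)                       # feats: (1, n_turns, n_features)
        logits = self.out(h[:, -1])                  # score each action template
        logits = logits.masked_fill(~action_mask, float("-inf"))
        return torch.softmax(logits, dim=-1)

# Developer-supplied constraint: block balance retrieval until logged in.
mask = torch.tensor([[True, True, False, True]])
probs = TinyHCN(n_features=10)(torch.randn(1, 5, 10), mask)
```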
884
What task do they evaluate on?
Fill-in-the-blank natural language questions
Social norms are shared rules that govern and facilitate social interaction. Violating such social norms via teasing and insults may serve to upend power imbalances or, on the contrary, reinforce solidarity and rapport in conversation, rapport which is highly situated and context-dependent. In this work, we investigate the task of automatically identifying the phenomenon of social norm violation in discourse. Towards this goal, we leverage the power of recurrent neural networks and the multimodal information present in the interaction, and propose a predictive model to recognize social norm violation. Using long-term temporal and contextual information, our model achieves an F1 score of 0.705. Implications of our work for developing a socially-aware agent are discussed.
Social norms are informal understandings that govern human behavior. They serve as the basis for our beliefs and expectations about others, and are instantiated in human-human conversation through verbal and nonverbal behaviors BIBREF0 , BIBREF1 . There is a considerable body of work on modeling socially normative behavior in intelligent agent-based systems BIBREF2 , BIBREF3 , aiming to facilitate lifelike conversations with human users. Violating such social norms, and impoliteness in conversation, have on the other hand also been demonstrated to positively affect certain aspects of social interaction. For instance, BIBREF4 suggests that impoliteness may challenge rapport between strangers but is also an indicator of an established relationship among friends. The literature on social psychology BIBREF5 shows that the task of managing an interpersonal bond like rapport requires management of face, which in turn relies on behavioral expectations that are allied with social norms early in a relationship and become more interpersonally determined as the relationship proceeds. BIBREF6 advanced this argument by proposing that, with increasing knowledge of one another, more general norms may be purposely violated in order to accommodate each other's behavioral expectations. Moreover, they proposed that such social norm violations in fact reinforce the sense of in-group connectedness. Finally, in BIBREF7 , the authors discovered that temporally co-occurring smiles and social norm violations signal high interpersonal rapport. Thus, we believe that recognizing the phenomenon of social norm violation in dialog can contribute important insights into understanding the interpersonal dynamics that unfold between the interlocutors. Interesting prior work on quantifying social norm violation has taken a heavily data-driven focus BIBREF8 , BIBREF9 . For instance, BIBREF8 trained a series of bigram language models to quantify the violation of social norms in users' posts on an online community by leveraging cross-entropy, i.e. the deviation between the word sequences predicted by the language model and their actual usage by the user. However, their models were trained on written language rather than a natural face-to-face dialog corpus. Another kind of social norm violation was examined by BIBREF10 , who developed a classifier to identify specific types of sarcasm in tweets. They utilized a bootstrapping algorithm to automatically extract lists of positive sentiment phrases and negative situation phrases from given sarcastic tweets, which were in turn leveraged to recognize sarcasm with an SVM classifier. However, no contextual information was considered in this work. BIBREF11 studied the nature of social norm violation in dialog by correlating it with associated observable verbal, vocal and visual cues. By leveraging their findings and statistical machine learning techniques, they built a computational model for automatic recognition. While they preserved short-term temporal contextual information in the model, this study avoided dealing with the sparsity of the social norm violation phenomenon by under-sampling the negative-class instances to obtain a balanced dataset.
Motivated by theoretical rationale and prior empirical findings concerning the relationship between social norm violation and interpersonal dynamics, in the current work we take a step towards addressing the above limitations. Our contributions are two-fold: (1) we quantitatively evaluate the contribution of long-term temporal contextual information to detecting violations of social norms, and (2) we incorporate this understanding into our computational model for automatically recognizing social norm violation by leveraging the power of recurrent neural networks in modeling long-term temporal dependencies.
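As a rough illustration of such a model, the following sketch runs an LSTM over per-turn multimodal feature vectors and emits a per-turn violation probability; the feature dimensionality and architecture details are assumptions rather than the configuration used in this work.

```python
# Illustrative recurrent model for social norm violation detection: each
# conversational turn is a multimodal (verbal, vocal, visual) feature vector,
# and an LSTM carries long-term context before a per-turn binary prediction.
import torch
import torch.nn as nn

class ViolationDetector(nn.Module):
    def __init__(self, n_features: int = 48, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.clf = nn.Linear(hidden, 1)

    def forward(self, turns: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(turns)              # turns: (batch, n_turns, n_features)
        return torch.sigmoid(self.clf(h))    # per-turn violation probability

probs = ViolationDetector()(torch.randn(2, 30, 48))   # 2 dialogs, 30 turns each
```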
886
How many feature maps are generated for a given triple?
3 feature maps for a given tuple
Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling. We conduct the first large-scale systematic study of candidate pretraining tasks, comparing 19 different tasks both as alternatives and complements to language modeling. Our primary results support the use of language modeling, especially when combined with pretraining on additional labeled-data tasks. However, our results are mixed across pretraining tasks and show some concerning trends: In ELMo's pretrain-then-freeze paradigm, random baselines are worryingly strong and results vary strikingly across target tasks. In addition, fine-tuning BERT on an intermediate task often negatively impacts downstream transfer. In a more positive trend, we see modest gains from multitask training, suggesting the development of more sophisticated multitask and transfer learning techniques as an avenue for further research.
State-of-the-art models for natural language processing (NLP) tasks like translation, question answering, and parsing include components intended to extract representations for the meaning and contents of each input sentence. These sentence encoder components are typically trained directly for the target task at hand. This approach can be effective on data-rich tasks and yields human performance on some narrowly-defined benchmarks BIBREF1 , BIBREF2 , but it is tenable only for the few NLP tasks with millions of examples of training data. This has prompted interest in pretraining for sentence encoding: There is good reason to believe it should be possible to exploit outside data and training signals to effectively pretrain these encoders, both because they are intended to primarily capture sentence meaning rather than any task-specific skill, and because we have seen dramatic successes with pretraining in the related domains of word embeddings BIBREF3 and image encoders BIBREF4 . More concretely, four recent papers show that pretrained sentence encoders can yield very strong performance on NLP tasks. First, BIBREF5 show that a BiLSTM encoder from a neural machine translation (MT) system can be effectively reused elsewhere. BIBREF6 , BIBREF0 , and BIBREF7 show that various kinds of encoders pretrained in an unsupervised fashion through generative language modeling (LM) are effective as well. Each paper uses its own evaluation methods, though, making it unclear which pretraining task is most effective or whether multiple pretraining tasks can be productively combined; in the related setting of sentence-to-vector encoding, multitask learning with multiple labeled datasets has yielded a robust state of the art BIBREF8 . This paper attempts to systematically address these questions. We train reusable sentence encoders on 17 different pretraining tasks, several simple baselines, and several combinations of these tasks, all using a single model architecture and procedure for pretraining and transfer, inspired by ELMo. We then evaluate each of these encoders on the nine target language understanding tasks in the GLUE benchmark BIBREF9 , yielding a total of 40 sentence encoders and 360 total trained models. We then measure correlation in performance across target tasks and plot learning curves evaluating the effect of training data volume on each pretraining and target task. Looking to the results of this experiment, we find that language modeling is the most effective single pretraining task we study, and that multitask learning during pretraining can offer further gains and a new state of the art among fixed sentence encoders. We also, however, find reasons to worry that ELMo-style pretraining, in which we pretrain a model and use it on target tasks with no further fine-tuning, is brittle and seriously limiting: (i) Trivial baseline representations do nearly as well as the best pretrained encoders, and the margins between substantially different pretraining tasks can be extremely small. (ii) Different target tasks differ dramatically in what kinds of pretraining they benefit most from, and multitask pretraining is not sufficient to circumvent this problem and offer general-purpose pretrained encoders.
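The pretrain-then-freeze protocol discussed here can be illustrated with a small sketch: a placeholder pretrained encoder is frozen and only a lightweight target-task classifier is trained on top. The encoder, dimensions, and target task below are stand-ins, not the models evaluated in the paper.

```python
# Schematic of ELMo-style pretrain-then-freeze transfer: freeze the encoder,
# train only a light classifier for a GLUE-style target task.
import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=300, hidden_size=512, batch_first=True)  # stand-in encoder
for p in encoder.parameters():
    p.requires_grad = False                    # freeze the pretrained encoder

classifier = nn.Linear(512, 3)                 # e.g. a 3-way NLI target task
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

tokens = torch.randn(16, 20, 300)              # dummy batch: 16 sentences, 20 tokens
labels = torch.randint(0, 3, (16,))
with torch.no_grad():
    states, _ = encoder(tokens)                # frozen encoder forward pass
logits = classifier(states[:, -1])             # last hidden state as sentence vector
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```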