Dataset schema (column, type, observed range):
- Unnamed: 0 (int64, values 0 to 886)
- question (string, lengths 19 to 151)
- answer (string, lengths 1 to 1.08k)
- abstract (string, lengths 279 to 2.02k)
- introduction (string, lengths 52 to 9.04k)
590
What are the two PharmaCoNER subtasks?
Entity identification with offset mapping and concept indexing
Analogies such as man is to king as woman is to X are often used to illustrate the amazing power of word embeddings. Concurrently, they have also exposed how strongly human biases are encoded in vector spaces built on natural language. While finding that queen is the answer to man is to king as woman is to X leaves us in awe, papers have also reported finding analogies deeply infused with human biases, like man is to computer programmer as woman is to homemaker, which instead leave us with worry and rage. In this work we show that, often unknowingly, embedding spaces have not been treated fairly. Through a series of simple experiments, we highlight practical and theoretical problems in previous works, and demonstrate that some of the most widely used biased analogies are in fact not supported by the data. We claim that rather than striving to find sensational biases, we should aim at observing the data "as is", which is biased enough. This should serve as a fair starting point to properly address the evident, serious, and compelling problem of human bias in word embeddings.
Word embeddings are distributed representations of texts which capture similarities between words. Besides improving a wide variety of NLP tasks, the power of word embeddings is often also tested intrinsically. Together with the idea of training word embeddings, BIBREF0 introduced the idea of testing the soundness of embedding spaces via the analogy task. Proportional analogies are equations of the form $A:B :: C:D$, or simply A is to B as C is to D. Given the terms $A$, $B$, $C$, the model must return the word that correctly stands for $D$ in the given analogy. A most classic example is man is to king as woman is to X, where the model is expected to return queen, by subtracting "manness" from the concept of king to obtain some general royalty, and then re-adding some "womanness" to obtain the concept of queen ($king - man + woman \approx queen$). Besides this kind of magical power, however, embeddings have been shown to carry worrying biases present in our society and thus encoded in language. Recent studies BIBREF1, BIBREF2 found that embeddings yield biased analogies such as the classic man is to doctor as woman is to nurse, or man is to computer programmer as woman is to homemaker. Attempts at reducing bias, either via postprocessing BIBREF1 or directly in training BIBREF3, have nevertheless left two outstanding issues: bias is still encoded implicitly BIBREF4, and it is debatable whether we should aim at removal or rather at transparency and awareness BIBREF5, BIBREF4. With an eye to transparency, we took a closer look at the analogy structure. In the original proportional analogy implementation, all terms of the equation $A:B :: C:D$ are distinct BIBREF0, BIBREF6. In other words, the model is forced to return a different concept than the original ones. Given an analogy of the form $A:B :: C:X$, the model is not allowed to yield any term $X$ such that $X=A$, $X=B$, or $X=C$, since the code explicitly prevents this. While this constraint is helpful when all terms of the analogy are expected to be different, it becomes a problem, and even a dangerous artifact, when the terms could or even should be the same. We investigate this issue using the original analogy test set BIBREF0 and examples from the literature. We test all examples on different embedding spaces built for English, using two settings for the analogy code: when all terms must be different (as in the original, widely used, implementation), and without this constraint, meaning that any word, including the input terms, can be returned. As far as we know, this is the first work that evaluates and reports analogies in an unrestricted fashion, since the analogy code is always used as is. Our experiments and results suggest that the mainstream examples, as well as the use of the analogy task itself as a tool to detect bias, should be revised and reconsidered. Warning: This work does not mean at all to downplay the presence and danger of human biases in word embeddings. On the contrary: embeddings do encode human biases, and we believe that this issue deserves the full attention of the field. However, we also believe that overemphasising and specifically seeking biases to achieve sensational results is not beneficial. It is also not necessary: what we observe naturally is worrying and sensational enough. Rather, we should aim at transparency and experimental clarity so as to ensure the fairest and most efficient treatment of the problem.
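To make the constraint discussed above concrete, here is a minimal numpy sketch of the 3CosAdd analogy query with and without the restriction that excludes the input terms; the toy embeddings and five-word vocabulary are illustrative stand-ins, not a trained space.

```python
# A minimal sketch of the 3CosAdd analogy query, with and without the
# constraint that excludes the input words a, b, c from the candidates.
import numpy as np

def analogy(emb, vocab, a, b, c, exclude_inputs=True):
    """Return the word d maximizing cos(d, b - a + c)."""
    idx = {w: i for i, w in enumerate(vocab)}
    target = emb[idx[b]] - emb[idx[a]] + emb[idx[c]]
    target /= np.linalg.norm(target)
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    scores = normed @ target
    if exclude_inputs:                      # the constraint under scrutiny
        for w in (a, b, c):
            scores[idx[w]] = -np.inf
    return vocab[int(np.argmax(scores))]

vocab = ["man", "woman", "king", "queen", "doctor"]
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(vocab), 50))     # toy stand-in embeddings

print(analogy(emb, vocab, "man", "king", "woman", exclude_inputs=True))
print(analogy(emb, vocab, "man", "king", "woman", exclude_inputs=False))
```

With the constraint enabled, a query like man is to doctor as woman is to X can never return doctor, even when doctor is the nearest word to the target vector; this is exactly the artifact the paper examines.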
591
How do they perform data augmentation?
They randomly sample sentences from Wikipedia that contain an object RC and add them to the training data
Named entity recognition (NER) is the very first step in the linguistic processing of any new domain. It is currently a common process in BioNLP on English clinical text. However, it is still in its infancy in other major languages, as is the case for Spanish. Presented under the umbrella of the PharmaCoNER shared task, this paper describes a very simple method for the annotation and normalization of pharmacological, chemical and, ultimately, biomedical named entities in clinical cases. The system developed for the shared task is based on limited knowledge, collected, structured and munged in a way that clearly outperforms scores obtained by similar dictionary-based systems for English in the past. Along with this recovery of knowledge-based methods for NER in subdomains, the paper also highlights the key contribution of resource-based systems in the validation and consolidation of both the annotation guidelines and the human annotation practices. In this sense, some of the authors' findings on the overall quality of human-annotated datasets question the above-mentioned `official' results obtained by this system, which ranked second (0.91 F1-score) and first (0.916 F1-score), respectively, in the two PharmaCoNER subtasks.
Named Entity Recognition (NER) is considered a necessary first step in the linguistic processing of any new domain, as it facilitates the development of applications showing co-occurrences of domain entities, cause-effect relations among them, and, eventually, it opens the (still to be reached) possibility of understanding full text content. On the other hand, biomedical literature and, more specifically, clinical texts show a number of features as regards NER that pose a challenge to NLP researchers BIBREF0: (1) the clinical discourse is characterized by being conceptually very dense; (2) the number of different classes for named entities (NEs) is greater than the traditional classes used with, for instance, newswire text; (3) they show a high formal variability for NEs (actually, it is rare to find entities in their “canonical form”); and (4) this text type contains a great number of ortho-typographic errors, due mainly to time constraints when drafted. Many ways to approach NER for biomedical literature have been proposed, but they roughly fall into three main categories: rule-based, dictionary-based (sometimes called knowledge-based) and machine-learning based solutions. Traditionally, the first two approaches were the choice before the availability of Human Annotated Datasets (HADs), although rule-based approaches require (usually hand-crafted) rules to identify terms in the text, while dictionary-based approaches tend to miss medical terms not mentioned in the system dictionary BIBREF1. Nonetheless, with the creation and distribution of HADs as well as the development and success of supervised machine learning methods, a plethora of data-driven approaches have emerged, from Hidden Markov Models (HMMs) BIBREF2, Support Vector Machines (SVMs) BIBREF3 and Conditional Random Fields (CRFs) BIBREF4 to, more recently, those founded on neural networks BIBREF5. This fact has had an impact on knowledge-based methods, demoting them to a second plane. Besides, this situation has been favoured by claims on the uselessness of gazetteers for NER in, for example, Genomic Medicine (GM), as suggested by Cohen and Demner-Fushman BIBREF0 (p. 26): “One of the findings of the first BioCreative shared task was the demonstration of the long-suspected fact that gazetteers are typically of little use in GM.” Although one might think that this view could strictly refer to the subdomain of GM and to the past (BioCreative I was a shared task held back in 2004), we can still find similar claims today, referring not only to rule-based and dictionary-based methods, but also to stochastic ones BIBREF5. In this paper, in spite of previous statements, we present a system that uses rule-based and dictionary-based methods combined (in a way we prefer to call resource-based). Our final goals in the paper are two-fold: on the one hand, to describe our system, developed for the PharmaCoNER shared task, dealing with the annotation of some of the NEs in health records (namely, pharmacological, chemical and biomedical entities) using a revisited version of rule- and dictionary-based approaches; and, on the other hand, to give pause for thought about the quality of the datasets (and, thus, the fairness) with which systems of this type are evaluated, and to highlight the key role of resource-based systems in the validation and consolidation of both the annotation guidelines and the human annotation practices.
In section SECREF2, we describe our initial resources, explain how they were built, and try to address the issues posed by features (1) and (2) above. Section SECREF3 depicts the core of our system and the methods we have devised to deal with text features (3) and (4). Results obtained in PharmaCoNER by our system are presented in section SECREF4. Section SECREF5 details some of our errors but, most importantly, focuses on the errors and inconsistencies found in the evaluation dataset, given that they may cast doubt on the scores obtained by any system in the competition. Finally, we present some concluding remarks in section SECREF6.
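As a rough illustration of the dictionary-based component that a resource-based system revolves around, here is a hypothetical sketch of gazetteer lookup with character offsets, the output shape PharmaCoNER's entity-identification subtask expects; the terms, label, and naive substring-matching strategy are invented for the example and far simpler than the paper's actual system.

```python
# Hypothetical gazetteer lookup: naive substring matching that returns
# (start, end, label, surface form) tuples with character offsets.
gazetteer = {"paracetamol": "NORMALIZABLES", "insulina": "NORMALIZABLES"}

def dictionary_ner(text, gazetteer):
    found, lowered = [], text.lower()
    for term, label in gazetteer.items():
        start = lowered.find(term)
        while start != -1:
            found.append((start, start + len(term), label,
                          text[start:start + len(term)]))
            start = lowered.find(term, start + 1)
    return sorted(found)

print(dictionary_ner("Se administra paracetamol al paciente.", gazetteer))
```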
592
What are the characteristics of the rural dialect?
It uses particular forms of a concept rather than all of them uniformly
We explore the utilities of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as "barks" in "*The dogs barks". Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement. In this paper, using English data, we first demonstrate that appropriately using negative examples about particular constructions (e.g., subject-verb agreement) will boost the model's robustness on them, with a negligible loss of perplexity. The key to our success is an additional margin loss between the log-likelihoods of a correct word and an incorrect word. We then provide a detailed analysis of the trained models. One of our findings is the difficulty of object-relative clauses for RNNs. We find that even with our direct learning signals the models still suffer from resolving agreement across an object-relative clause. Augmentation of training sentences involving the constructions somewhat helps, but the accuracy still does not reach the level of subject-relative clauses. Although not directly cognitively appealing, our method can be a tool to analyze the true architectural limitation of neural models on challenging linguistic constructions.
Despite not being exposed to explicit syntactic supervision, neural language models (LMs), such as recurrent neural networks, are able to generate fluent and natural sentences, suggesting that they induce syntactic knowledge about the language to some extent. However, it is still under debate whether such induced knowledge about grammar is robust enough to deal with syntactically challenging constructions such as long-distance subject-verb agreement. So far, the results for RNN language models (RNN-LMs) trained only with raw text are overall negative; prior work has reported low performance on the challenging test cases BIBREF0 even with the massive size of the data and model BIBREF1, or has argued for the necessity of an architectural change to track the syntactic structure explicitly BIBREF2, BIBREF3. Here the task is to evaluate whether a model assigns a higher likelihood to a grammatically correct sentence (UNKREF3) than to an incorrect sentence (UNKREF5) that is minimally different from the original one BIBREF4: (UNKREF3) The author that the guards like laughs. (UNKREF5) *The author that the guards like laugh. In this paper, to obtain a new insight into the syntactic abilities of neural LMs, in particular RNN-LMs, we perform a series of experiments under a different condition from the prior work. Specifically, we extensively analyze the performance of models that are exposed to explicit negative examples. In this work, negative examples are sentences or tokens that are grammatically incorrect, such as (UNKREF5) above. Since these negative examples provide a direct learning signal on the task at test time, it may not be very surprising if the task performance goes up. We acknowledge this, and argue that our motivation for this setup is to deepen understanding, in particular of the limitation or the capacity of the current architectures, which we expect can be reached with such strong supervision. Another motivation is engineering: we could exploit negative examples in different ways, and establishing a better way will be of practical importance toward building an LM or generator that can be robust on particular linguistic constructions. The first research question we pursue is about this latter point: what is a better method to utilize negative examples that helps LMs to acquire robustness on the target syntactic constructions? Regarding this point, we find that adding an additional token-level loss trying to guarantee a margin between the log-probabilities of the correct and incorrect words (e.g., $\log p(\textrm{laughs} | h)$ and $\log p(\textrm{laugh} | h)$ for (UNKREF3)) is superior to the alternatives. On the test set of BIBREF0, we show that LSTM language models (LSTM-LMs) trained with this loss reach a near perfect level on most syntactic constructions for which we create negative examples, with only a slight increase of perplexity, about 1.0 point. Past work conceptually similar to ours is BIBREF5, which, while not directly exploiting negative examples, trains an LM with additional explicit supervision signals for the evaluation task. They hypothesize that LSTMs do have enough capacity to acquire robust syntactic abilities but the learning signals given by the raw text are weak, and show that multi-task learning with a binary classification task to predict the upcoming verb form (singular or plural) helps make models aware of the target syntax (subject-verb agreement).
Our experiments basically confirm and strengthen this argument, with even stronger learning signals from negative examples, and we argue this allows us to evaluate the true capacity of the current architectures. In our experiments (Section exp), we show that our margin loss achieves higher syntactic performance. Another relevant work on the capacity of LSTMs is BIBREF6, which shows that by distilling from syntactic LMs BIBREF7, LSTM-LMs can be robust on syntax. We show that our LMs with the margin loss outperform theirs in most aspects, further strengthening the case for the capacity of LSTMs, and we also discuss the limitations. The latter part of this paper is a detailed analysis of the trained models and the introduced losses. Our second question is about the true limitation of LSTM-LMs: are there still any syntactic constructions that the models cannot handle robustly even with our direct learning signals? This question can be seen as a fine-grained version of the one raised by BIBREF5, approached with a stronger tool and an improved evaluation metric. Among the tested constructions, we find that syntactic agreement across an object relative clause (RC) is challenging. To inspect whether this is due to an architectural limitation, we train another LM on a dataset in which we unnaturally augment sentences involving object RCs. Since it is known that object RCs are relatively rare compared to subject RCs BIBREF8, frequency may be the main reason for the lower performance. Interestingly, even when increasing the number of sentences with an object RC by eight times (more than twice the number of sentences with a subject RC), the accuracy does not reach the same level as agreement across a subject RC. This result suggests an inherent difficulty in tracking a syntactic state across an object RC for sequential neural architectures. We finally provide an ablation study to understand the linguistic knowledge encoded in the models learned with the help of our method. We experiment under reduced supervision at two different levels: (1) at a lexical level, by not giving negative examples on verbs that appear in the test set; (2) at a construction level, by not giving negative examples about a particular construction, e.g., verbs after a subject RC. We observe no huge score drops in either case. This suggests that our learning signals at a lexical level (negative words) strengthen the abstract syntactic knowledge about the target constructions, and also that the models can generalize the knowledge acquired from negative examples to similar constructions for which negative examples are not explicitly given. The result also implies that negative examples do not have to be complete and can be noisy, which will be appealing from an engineering perspective.
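A minimal sketch of the token-level margin loss described above, assuming access to the LM's logits at the position of a target verb; the margin value and the toy vocabulary ids are illustrative, not the paper's exact configuration.

```python
# Margin loss between the log-probabilities of the correct and incorrect
# word at one position: max(0, margin - (log p(correct|h) - log p(incorrect|h))).
import torch
import torch.nn.functional as F

def margin_loss(logits, correct_id, incorrect_id, margin=1.0):
    log_probs = F.log_softmax(logits, dim=-1)
    gap = log_probs[correct_id] - log_probs[incorrect_id]
    return torch.clamp(margin - gap, min=0.0)

# Toy usage: a 10-word vocabulary where id 3 = "laughs" and id 7 = "laugh".
logits = torch.randn(10)
extra_loss = margin_loss(logits, correct_id=3, incorrect_id=7)
print(extra_loss.item())  # added to the usual cross-entropy LM loss
```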
594
What is the performance of the models on the tasks?
Overall accuracy per model is: 5-gram (60.5), LSTM (68.9), TXL (68.7), GPT-2 (80.1)
This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation. We can train fastText on more than one billion words in less than ten minutes using a standard multicore CPU, and classify half a million sentences among 312K classes in less than a minute.
Text classification is an important task in Natural Language Processing with many applications, such as web search, information retrieval, ranking and document classification BIBREF0, BIBREF1. Recently, models based on neural networks have become increasingly popular BIBREF2, BIBREF3, BIBREF4. While these models achieve very good performance in practice, they tend to be relatively slow both at train and test time, limiting their use on very large datasets. Meanwhile, linear classifiers are often considered as strong baselines for text classification problems BIBREF5, BIBREF6, BIBREF7. Despite their simplicity, they often obtain state-of-the-art performance if the right features are used BIBREF8. They also have the potential to scale to very large corpora BIBREF9. In this work, we explore ways to scale these baselines to very large corpora with a large output space, in the context of text classification. Inspired by the recent work in efficient word representation learning BIBREF10, BIBREF11, we show that linear models with a rank constraint and a fast loss approximation can train on a billion words within ten minutes, while achieving performance on par with the state-of-the-art. We evaluate the quality of our approach fastText on two different tasks, namely tag prediction and sentiment analysis.
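The model family sketched here can be written in a few lines: average the word embeddings (the rank-constrained linear part) and apply a linear classifier. Real fastText additionally uses n-gram features and a hierarchical softmax for large output spaces; this toy numpy version omits both.

```python
# A toy bag-of-embeddings linear classifier of the fastText family.
import numpy as np

vocab_size, dim, n_classes = 1000, 16, 4
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(vocab_size, dim))   # word embeddings
W = rng.normal(scale=0.1, size=(dim, n_classes))    # classifier weights

def predict(token_ids):
    h = E[token_ids].mean(axis=0)    # average of word vectors (rank constraint)
    scores = h @ W                   # linear classifier
    e = np.exp(scores - scores.max())
    return e / e.sum()               # softmax over classes

print(predict([5, 42, 917]))
```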
597
What other non-neural baselines do the authors compare to?
bag of words, tf-idf, bag-of-means
The dependency of the generalization error of neural networks on model and dataset size is of critical importance both in practice and for understanding the theory of neural networks. Nevertheless, the functional form of this dependency remains elusive. In this work, we present a functional form which approximates well the generalization error in practice. Capitalizing on the successful concept of model scaling (e.g., width, depth), we are able to simultaneously construct such a form and specify the exact models which can attain it across model/data scales. Our construction follows insights obtained from observations conducted over a range of model/data scales, in various model types and datasets, in vision and language tasks. We show that the form both fits the observations well across scales, and provides accurate predictions from small- to large-scale models and data.
With the success and heightened adoption of neural networks for real world tasks, some questions remain poorly answered. For a given task and model architecture, how much data would one require to reach a prescribed performance level? How big a model would be needed? Addressing such questions is made especially difficult by the mounting evidence that large, deep neural networks trained on large-scale data outperform their smaller counterparts, rendering the training of high performance models prohibitively costly. Indeed, in the absence of practical answers to the above questions, surrogate approaches have proven useful. One such common approach is model scaling, where one designs and compares small-scale models, and applies the obtained architectural principles at a larger scale BIBREF0, BIBREF1, BIBREF2. Despite these heuristics being widely used to various degrees of success, the relation between the performance of a model in the small- and large-scale settings is not well understood. Hence, exploring the limitations or improving the efficiency of such methods remains subject to trial and error. In this work we circle back to the fundamental question: what is the (functional) relation between generalization error and model and dataset sizes? Critically, we capitalize on the concept of model scaling in its strictest form: we consider the case where there is some given scaling policy that completely defines how to scale up a model from small to large scales. We include in this context all model parameters, such that traversing from one scale (in which all parameters are known) to another requires no additional resources for specifying the model (e.g., architecture search/design). We empirically explore the behavior of the generalization error over a wide range of datasets and models in vision and language tasks. While the error landscape seems fairly complex at first glance, we observe the emergence of several key characteristics shared across benchmarks and domains. Chief among these characteristics is the emergence of regions where power-law behavior approximates the error well both with respect to data size, when holding model size fixed, and vice versa. Motivated by these observations, we establish criteria which a function approximating the error landscape should meet. We propose an intuitive candidate for such a function and evaluate its quality, both in explaining the observed error landscapes and in extrapolating from small scale (seen) to large scale (unseen) errors. Critically, our functional approximation of the error depends on both model and data sizes. We find that this function leads to a high quality fit and extrapolation. For instance, the mean and standard deviation of the relative errors are under 2% when fitting across all scales investigated and under 5% when extrapolating from a slimmed-down model (1/16 of the parameters) on a fraction of the training data (1/8 of the examples) on the ImageNet BIBREF3 and WikiText-103 BIBREF4 datasets, with similar results for other datasets. To the best of our knowledge, this is the first work that provides simultaneously:
- A joint functional form of the generalization error landscape, as dependent on both data and model size, with few, interpretable degrees of freedom (section SECREF5).
- Direct and complete specification (via the scaling policy) of the model configuration attaining said generalization error across model and dataset sizes.
- Highly accurate approximation of error measurements across model and data scales via the functional form, evaluated on different models, datasets, and tasks (section SECREF6).
- Highly accurate error prediction from small to large model and data (section SECREF7).
We conclude with a discussion of some implications of our findings as a practical and principled tool for understanding network design at small scale and for efficient computation and trade-off design in general. We hope this work also provides a useful empirical leg to stand on and an invitation to search for a theory of generalization error which accounts for our findings.
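As a hedged illustration of the kind of functional form at play, the sketch below fits an additive power law in data size n and model size m plus an irreducible error term. This concrete function is a simplified stand-in for the paper's construction, and the measurements are synthetic.

```python
# Fit a joint error form err(n, m) = a*n^-alpha + b*m^-beta + c_inf
# to (synthetic) generalization-error measurements.
import numpy as np
from scipy.optimize import curve_fit

def err_form(X, a, alpha, b, beta, c_inf):
    n, m = X
    return a * n**(-alpha) + b * m**(-beta) + c_inf

n = np.array([1e4, 1e5, 1e6, 1e4, 1e5, 1e6], dtype=float)   # dataset sizes
m = np.array([1e6, 1e6, 1e6, 1e7, 1e7, 1e7], dtype=float)   # model sizes
obs = err_form((n, m), 5.0, 0.3, 20.0, 0.4, 0.05)           # pretend measurements

params, _ = curve_fit(err_form, (n, m), obs, p0=[1, 0.5, 1, 0.5, 0.1])
print(params)   # should closely match the generating parameters on this toy
```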
599
On what dataset is Aristo system trained?
Aristo Corpus, Regents 4th, Regents 8th, Regents 12th, ARC-Easy, ARC-Challenge
In this work, we present a simple yet better variant of Self-Critical Sequence Training. We make a simple change in the choice of the baseline function in the REINFORCE algorithm. The new baseline can bring better performance with no extra cost, compared to the greedy decoding baseline.
Self-Critical Sequence Training (SCST), upon its release, has been a popular way to train sequence generation models. While originally proposed for the image captioning task, SCST not only has become the new standard for training captioning models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, but also has been applied to many other tasks, like video captioning BIBREF10, BIBREF11, BIBREF12, reading comprehension BIBREF13, summarization BIBREF14, BIBREF15, BIBREF16, BIBREF17, image paragraph generation BIBREF18, and speech recognition BIBREF19. SCST is used to optimize generated sequences over a non-differentiable objective, usually the evaluation metrics, for example, CIDEr for captioning or ROUGE for summarization. To optimize such an objective, SCST adopts REINFORCE with baseline BIBREF20, where a "Self-Critical" baseline is used; specifically, the score of the greedy decoding output is used as the baseline. This has been shown to be better than a learned baseline function, which is more commonly used in the Reinforcement Learning literature. In this work, we present a different baseline choice which was, to the best of our knowledge, first proposed in BIBREF21. With more elaboration in Sec. SECREF3, this baseline can be described as a variant of "Self-Critical". This method is simple, but also faster and more effective compared to the greedy decoding baseline used in SCST.
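The two baseline choices can be contrasted in a few lines. Assuming k sequences are sampled per input, SCST subtracts the greedy decode's reward from each sample's reward, while a leave-one-out variant of the kind discussed here subtracts the mean reward of the other k-1 samples; all reward numbers below are placeholders.

```python
# Advantage computation under the two baselines; each advantage then
# weights -log p(sampled sequence) in the policy gradient.
import numpy as np

rewards = np.array([0.9, 0.6, 0.7, 0.8])   # e.g. CIDEr of k sampled sequences
greedy_reward = 0.75                        # e.g. CIDEr of the greedy decode

advantage_scst = rewards - greedy_reward

k = len(rewards)
baseline_loo = (rewards.sum() - rewards) / (k - 1)   # mean of the other samples
advantage_variant = rewards - baseline_loo

print(advantage_scst)
print(advantage_variant)
```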
605
What language technologies have been introduced in the past?
Font & Keyboard, Speech-to-Text, Text-to-Speech, Text Prediction, Spell Checker, Grammar Checker, Text Search, Machine Translation, Voice to Text Search, Voice to Speech Search
Understanding the semantic relationships between terms is a fundamental task in natural language processing applications. While structured resources that can express those relationships in a formal way, such as ontologies, are still scarce, a large number of linguistic resources gathering dictionary definitions is becoming available, but understanding the semantic structure of natural language definitions is fundamental to make them useful in semantic interpretation tasks. Based on an analysis of a subset of WordNet's glosses, we propose a set of semantic roles that compose the semantic structure of a dictionary definition, and show how they are related to the definition's syntactic configuration, identifying patterns that can be used in the development of information extraction frameworks and semantic models.
Many natural language understanding tasks such as Text Entailment and Question Answering systems are dependent on the interpretation of the semantic relationships between terms. The challenge in the construction of robust semantic interpretation models is to provide a model which is both comprehensive (captures a large set of semantic relations) and fine-grained. While semantic relations (high-level binary predicates which express relationships between words) can serve as a semantic interpretation model, in many cases the relationship between words cannot be fully articulated as a single semantic relation, depending on a contextualization that involves one or more target words, their corresponding semantic relationships and associated logical operators (e.g. modality, functional operators). Natural language definitions of terms, such as dictionary definitions, are resources that are still underutilized in the context of semantic interpretation tasks. The high availability of natural language definitions in different domains of discourse, in contrast to the scarcity of comprehensive structured resources such as ontologies, makes them a candidate linguistic resource to provide a data source for fine-grained semantic models. In this context, understanding the syntactic and semantic “shape” of natural language definitions, i.e., how definitions are usually expressed, is fundamental for the extraction of structured representations and for the construction of semantic models from these data sources. This paper aims at filling this gap by providing a systematic analysis of the syntactic and semantic structure of natural language definitions and proposing a set of semantic roles for them. By semantic role here we mean entity-centered roles, that is, roles representing the part played by an expression in a definition, showing how it relates to the entity being defined. WordNet BIBREF0, one of the most employed linguistic resources in semantic applications, was used as a corpus for this task. The analysis points out the syntactic and semantic regularity of definitions, making explicit an enumerable set of syntactic and semantic patterns which can be used to derive information extraction frameworks and semantic models. The contributions of this paper are: (i) a systematic preliminary study of syntactic and semantic relationships expressed in a corpus of definitions, (ii) the derivation of semantic categories for the classification of semantic patterns within definitions, and (iii) the description of the main syntactic and semantic shapes present in definitions, along with the quantification of the distribution of these patterns. The paper is organized as follows: Section "Structural Aspects of Definitions" presents the basic structural aspects of definitions according to the classic theory of definitions. Section "Semantic Roles for Lexical Definitions" introduces the proposed set of semantic roles for definitions. Section "Identifying Semantic Roles in Definitions" outlines the relationship between semantic and syntactic patterns. Section "Related Work" lists related work, followed by the conclusions and future work in Section "Conclusion".
608
How do they define local variance?
The reciprocal of the variance of the attention distribution
In this paper, we investigate whether text from a Community Question Answering (QA) platform can be used to predict and describe real-world attributes. We experiment with predicting a wide range of 62 demographic attributes for neighbourhoods of London. We use the text from the QA platform of Yahoo! Answers and compare our results to the ones obtained from Twitter microblogs. Outcomes show that the correlation between the predicted demographic attributes using text from Yahoo! Answers discussions and the observed demographic attributes can reach an average Pearson correlation coefficient of $\rho = 0.54$, slightly higher than the predictions obtained using Twitter data. Our qualitative analysis indicates that there is semantic relatedness between the highest correlated terms extracted from both datasets and their relative demographic attributes. Furthermore, the correlations highlight the different natures of the information contained in Yahoo! Answers and Twitter. While the former seems to offer a more encyclopedic content, the latter provides information related to current sociocultural aspects or phenomena.
Recent years have seen a huge boom in the number of different social media platforms available to users. People are increasingly using these platforms to voice their opinions or let others know about their whereabouts and activities. Each of these platforms has its own characteristics and is used for different purposes. The availability of a huge amount of data from many social media platforms has inspired researchers to study the relation between the data generated through the use of these platforms and real-world attributes. Many recent studies in this field are particularly inspired by the availability of text-based social media platforms such as blogs and Twitter. Text from Twitter microblogs, in particular, has been widely used as a data source to make predictions in many domains. For example, box-office revenues have been predicted using text from Twitter BIBREF0. Twitter data has also been used to find correlations between the mood stated in tweets and the value of the Dow Jones Industrial Average (DJIA) BIBREF1. Predicting the demographics of individual users from their language on social media platforms, especially Twitter, has been the focus of much research: text from blogs and online forum posts has been utilised to predict users' age through the analysis of linguistic features, with results showing that the age of users can be predicted such that the predicted and observed values reach a Pearson correlation coefficient of almost $0.7$. Sociolinguistic associations using geo-tagged Twitter data have been discovered BIBREF2, and the results indicate that demographic information of users such as first language, race, and ethnicity can be predicted using text from Twitter with a correlation of up to $0.3$. Other research shows that users' income can also be predicted from tweets with good prediction accuracy BIBREF3. Text from Twitter microblogs has also been used to discover the relation between the language of users and the deprivation index of neighbourhoods. The collective sentiment extracted from the tweets of users has been shown BIBREF4 to have a significant correlation ($0.35$) with the deprivation index of the communities the users belong to. Data generated on QA platforms has not been used in the past for predicting real-world attributes. Most research that utilises QA data aims to increase the performance of such platforms in analysing question quality BIBREF5, predicting the best answers BIBREF6, BIBREF7 or the best responder BIBREF8. In this paper, we use the text from discussions on the QA platform of Yahoo! Answers about neighbourhoods of London to show that QA text can be used to predict the demographic attributes of the population of those neighbourhoods. We compare the performance of Yahoo! Answers data to the performance of data from Twitter, a platform that has been widely used for predicting many real-world attributes. Unlike much current work that focuses on predicting one or a few selected attributes (e.g. deprivation, race or income) using social media data, we study a wide range of 62 demographic attributes. Furthermore, we test whether terms extracted from both Yahoo! Answers and Twitter are semantically related to these attributes and provide examples of sociocultural profiles of neighbourhoods through the interpretation of the coefficients of the predictive models. The contributions of this paper can be summarised as follows:
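A minimal sketch of the prediction-and-evaluation loop this line of work relies on: regress a demographic attribute on text features and score the result with Pearson's correlation. The features, model, and four-document corpus below are illustrative choices, not the paper's pipeline.

```python
# Predict a per-neighbourhood attribute from aggregated text, score with rho.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

docs = ["cafes parks families schools", "clubs bars students nightlife",
        "offices banks commuters trains", "markets stalls festivals food"]
attr = np.array([0.7, 0.3, 0.5, 0.6])   # e.g. share of some census category

X = TfidfVectorizer().fit_transform(docs)
model = Ridge(alpha=1.0).fit(X[:3], attr[:3])   # train on three neighbourhoods
pred = model.predict(X)
print(pearsonr(pred, attr))                     # (rho, p-value)
```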
610
how do they measure discussion quality?
Measuring three aspects: argumentation, specificity and knowledge domain.
Contextual word embeddings (e.g. GPT, BERT, ELMo, etc.) have demonstrated state-of-the-art performance on various NLP tasks. Recent work with the multilingual version of BERT has shown that the model performs very well in zero-shot and zero-resource cross-lingual settings, where only labeled English data is used to finetune the model. We improve upon multilingual BERT's zero-resource cross-lingual performance via adversarial learning. We report the magnitude of the improvement on the multilingual MLDoc text classification and CoNLL 2002/2003 named entity recognition tasks. Furthermore, we show that language-adversarial training encourages BERT to align the embeddings of English documents and their translations, which may be the cause of the observed performance gains.
Contextual word embeddings BIBREF0, BIBREF1, BIBREF2 have been successfully applied to various NLP tasks, including named entity recognition, document classification, and textual entailment. The multilingual version of BERT (which is trained on Wikipedia articles from 100 languages and equipped with a 110,000 shared wordpiece vocabulary) has also demonstrated the ability to perform `zero-resource' cross-lingual classification on the XNLI dataset BIBREF3. Specifically, when multilingual BERT is finetuned for XNLI with English data alone, the model also gains the ability to handle the same task in other languages. We believe that this zero-resource transfer learning can be extended to other multilingual datasets. In this work, we explore BERT's zero-resource performance on the multilingual MLDoc classification and CoNLL 2002/2003 NER tasks. We find that the baseline zero-resource performance of BERT exceeds the results reported in other work, even though cross-lingual resources (e.g. parallel text, dictionaries, etc.) are not used during BERT pretraining or finetuning. We apply adversarial learning to further improve upon this baseline, achieving state-of-the-art zero-resource results. There are many recent approaches to zero-resource cross-lingual classification and NER, including adversarial learning BIBREF4, BIBREF5, BIBREF6, BIBREF7, using a model pretrained on parallel text BIBREF8, BIBREF9, BIBREF10 and self-training BIBREF11. Due to the newness of the subject matter, the definition of `zero-resource' varies somewhat from author to author. For the experiments that follow, `zero-resource' means that, during model training, we do not use labels from non-English data, nor do we use human or machine-generated parallel text. Only labeled English text and unlabeled non-English text are used during training, and hyperparameters are selected using English evaluation sets. Our contributions are the following:
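One common way to implement language-adversarial training, sketched below under the assumption of a gradient-reversal layer, is to train a discriminator to tell English from non-English representations while the reversed gradients push the encoder to produce language-indistinguishable (aligned) features; the feature dimension and discriminator shape are illustrative.

```python
# Language-adversarial loss with a gradient-reversal layer.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad   # reverse gradients flowing back into the encoder

discriminator = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_loss(pooled_output, is_english):
    """pooled_output: (batch, 768) encoder features; is_english: (batch,) 0/1."""
    logits = discriminator(GradReverse.apply(pooled_output)).squeeze(-1)
    return bce(logits, is_english.float())

feats = torch.randn(8, 768, requires_grad=True)   # stand-in for BERT features
loss = adversarial_loss(feats, torch.randint(0, 2, (8,)))
loss.backward()
print(loss.item())
```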
611
what were the baselines?
Punyakanok et al. (2008); Zhao et al. (2009) + ME; Toutanova et al. (2008); Bjorkelund et al. (2010); FitzGerald et al. (2015); Zhou and Xu (2015); Roth and Lapata (2016); He et al. (2017); Marcheggiani et al. (2017); Marcheggiani and Titov (2017); Tan et al. (2018); He et al. (2018); Strubell et al. (2018); Cai et al. (2018); He et al. (2018); Li et al. (2018)
Classroom discussions in English Language Arts have a positive effect on students' reading, writing and reasoning skills. Although prior work has largely focused on teacher talk and student-teacher interactions, we focus on three theoretically-motivated aspects of high-quality student talk: argumentation, specificity, and knowledge domain. We introduce an annotation scheme, then show that the scheme can be used to produce reliable annotations and that the annotations are predictive of discussion quality. We also highlight opportunities provided by our scheme for education and natural language processing research.
Current research, theory, and policy surrounding K-12 instruction in the United States highlight the role of student-centered disciplinary discussions (i.e. discussions related to a specific academic discipline or school subject such as physics or English Language Arts) in instructional quality and student learning opportunities BIBREF0, BIBREF1. Such student-centered discussions, often called “dialogic” or “inquiry-based”, are widely viewed as the most effective instructional approach for disciplinary understanding, problem-solving, and literacy BIBREF2, BIBREF3, BIBREF4. In English Language Arts (ELA) classrooms, student-centered discussions about literature have a positive impact on the development of students' reasoning, writing, and reading skills BIBREF5, BIBREF6. However, most studies have focused on the role of teachers and their talk BIBREF7, BIBREF2, BIBREF8 rather than on the aspects of student talk that contribute to discussion quality. Additionally, studies of student-centered discussions rarely use the same coding schemes, making it difficult to generalize across studies BIBREF2, BIBREF9. This limitation is partly due to the time-intensive work required to analyze discourse data through qualitative methods such as ethnography and discourse analysis. Thus, qualitative case studies have generated compelling theories about the specific features of student talk that lead to high-quality discussions, but few findings can be generalized and leveraged to influence instructional improvements across ELA classrooms. As a first step towards developing an automated system for detecting the features of student talk that lead to high-quality discussions, we propose a new annotation scheme for student talk during ELA “text-based” discussions, that is, discussions that center on a text or piece of literature (e.g., book, play, or speech). The annotation scheme was developed to capture three aspects of classroom talk that are theorized in the literature as important to discussion quality and learning opportunities: argumentation (the process of systematically reasoning in support of an idea), specificity (the quality of belonging or relating uniquely to a particular subject), and knowledge domain (the area of expertise represented in the content of the talk). We demonstrate the reliability and validity of our scheme via an annotation study of five transcripts of classroom discussion.
612
Which soft-selection approaches are evaluated?
LSTM and BERT
Semantic role labeling (SRL) aims to discover the predicate-argument structure of a sentence. End-to-end SRL without syntactic input has received great attention. However, most approaches focus on either the span-based or the dependency-based semantic representation form, and only show model optimizations specific to each. Meanwhile, handling these two SRL tasks uniformly has been less successful. This paper presents an end-to-end model for both dependency and span SRL with a unified argument representation to deal with two different types of argument annotations in a uniform fashion. Furthermore, we jointly predict all predicates and arguments, especially including the long-ignored predicate identification subtask. Our single model achieves new state-of-the-art results on both span (CoNLL 2005, 2012) and dependency (CoNLL 2008, 2009) SRL benchmarks.
The purpose of semantic role labeling (SRL) is to derive the meaning representation for a sentence, which is beneficial to a wide range of natural language processing (NLP) tasks BIBREF0, BIBREF1. SRL can be formed as four subtasks, including predicate detection, predicate disambiguation, argument identification and argument classification. For argument annotation, there are two formulations. One is based on text spans, namely span-based SRL. The other is dependency-based SRL, which annotates the syntactic head of an argument rather than the entire argument span. Figure FIGREF1 shows example annotations. Great progress has been made in syntactic parsing BIBREF2, BIBREF3, BIBREF4. Most traditional SRL methods rely heavily on syntactic features. To alleviate this inconvenience, recent works BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11 propose end-to-end models for SRL, putting syntax aside and still achieving favorable results. However, these systems focus on either span or dependency SRL, which motivates us to explore a uniform approach. Both span and dependency are effective formal representations for semantics, though it has long remained unknown which form, span or dependency, would be better for the convenience and effectiveness of semantic machine learning and later applications. Furthermore, researchers are interested in two forms of SRL models that may benefit from each other rather than their separated development. This topic was roughly discussed by BIBREF19, who concluded that the (best) dependency SRL system at the time clearly outperformed the (best) span-based system through gold syntactic structure transformation. However, BIBREF19, like all other traditional SRL models of the time, had to adopt rich syntactic features, and their comparison was done between two systems built in quite different styles. Instead, this work develops fully syntax-agnostic SRL systems in the same fashion for both span and dependency representations, so that we can revisit this issue on a more solid empirical basis. In addition, most efforts focus on argument identification and classification, since span and dependency SRL corpora have already marked predicate positions. Although no predicate identification is needed in that setting, such information is not available in many downstream applications. Therefore, predicate identification should be carefully handled in a complete practical SRL system. To address this problem, BIBREF9 proposed an end-to-end approach for jointly predicting predicates and arguments for span SRL. Likewise, BIBREF11 introduced an end-to-end model that naturally covers all predicate/argument identification and classification subtasks for dependency SRL. To jointly predict predicates and arguments, we present an end-to-end framework for both span and dependency SRL. Our model extends the span SRL model of BIBREF9, directly regarding all words in a sentence as possible predicates, considering all spans or words as potential arguments and learning distributions over possible predicates. However, we differ by (1) introducing a unified argument representation to handle two different types of SRL tasks, and (2) employing a biaffine scorer to make decisions for predicate-argument relationships. The proposed models are evaluated on the span SRL datasets of the CoNLL 2005 and 2012 shared tasks, as well as the dependency SRL datasets of the CoNLL 2008 and 2009 shared tasks.
For span SRL, our single model outperforms the previous best results by 0.3% and 0.5% F$_1$-score on the CoNLL 2005 and 2012 test sets respectively. For dependency SRL, we achieve new state-of-the-art results of 85.3% F$_1$ and 90.4% F$_1$ on the CoNLL 2008 and 2009 benchmarks respectively.
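A sketch of a biaffine scorer of the kind used for the predicate-argument decisions above, with hypothetical dimensions: given a predicate representation h_p and an argument representation h_a, the score for each role label combines a bilinear term h_p^T W h_a with a linear term over the concatenated pair.

```python
# Biaffine scoring of one (predicate, argument) pair over role labels.
import torch
import torch.nn as nn

class Biaffine(nn.Module):
    def __init__(self, dim, n_labels):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_labels, dim, dim) * 0.01)
        self.U = nn.Linear(2 * dim, n_labels)

    def forward(self, h_p, h_a):
        bilinear = torch.einsum("i,lij,j->l", h_p, self.W, h_a)
        linear = self.U(torch.cat([h_p, h_a], dim=-1))
        return bilinear + linear   # one score per semantic role label

scorer = Biaffine(dim=300, n_labels=10)
print(scorer(torch.randn(300), torch.randn(300)).shape)  # torch.Size([10])
```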
616
How big is the slot filling dataset?
Dataset has 1737 train, 497 dev and 559 test sentences.
Recently, neural models pretrained on a language modeling task, such as ELMo (Peters et al., 2017), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. In this paper, we describe a simple re-implementation of BERT for query-based passage re-ranking. Our system is the state of the art on the TREC-CAR dataset and the top entry in the leaderboard of the MS MARCO passage retrieval task, outperforming the previous state of the art by 27% (relative) in MRR@10. The code to reproduce our results is available at https://github.com/nyu-dl/dl4marco-bert
We have seen rapid progress in machine reading comprehension in recent years with the introduction of large-scale datasets, such as SQuAD BIBREF3, MS MARCO BIBREF4, SearchQA BIBREF5, TriviaQA BIBREF6, and QUASAR-T BIBREF7, and the broad adoption of neural models, such as BiDAF BIBREF8, DrQA BIBREF9, DocumentQA BIBREF10, and QAnet BIBREF11. The information retrieval (IR) community has also experienced a flourishing development of neural ranking models, such as DRMM BIBREF12, KNRM BIBREF13, Co-PACRR BIBREF14, and DUET BIBREF15. However, until recently, there were only a few large datasets for passage ranking, with the notable exception of TREC-CAR BIBREF16. This, at least in part, prevented neural ranking models from being successful when compared to more classical IR techniques BIBREF17. We argue that the same two ingredients that made possible much progress on the reading comprehension task are now available for the passage ranking task. Namely, the MS MARCO passage ranking dataset, which contains one million queries from real users and their respective relevant passages annotated by humans, and BERT, a powerful general purpose natural language processing model. In this paper, we describe in detail how we have re-purposed BERT as a passage re-ranker and achieved state-of-the-art results on the MS MARCO passage re-ranking task.
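The re-ranking recipe reduces to scoring each query-passage pair with a sequence-pair classifier and sorting by relevance probability. The sketch below uses an untuned public checkpoint purely as a placeholder; the actual system first fine-tunes BERT on MS MARCO relevance labels.

```python
# Score candidate passages for a query with a BERT pair classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"   # placeholder checkpoint, not fine-tuned
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

def rerank(query, passages):
    batch = tok([query] * len(passages), passages,
                truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    scores = logits.softmax(-1)[:, 1]          # P(relevant)
    order = scores.argsort(descending=True)
    return [(passages[i], float(scores[i])) for i in order]

print(rerank("who wrote hamlet",
             ["Shakespeare wrote Hamlet.", "Paris is in France."]))
```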
617
How large is the dataset they generate?
4.756 million sentences
Slot Filling is the task of extracting the semantic concepts from a given natural language utterance. Recently it has been shown that using contextual information, either in word representations (e.g., BERT embeddings) or in the computation graph of the model, can improve the performance of the model. However, recent work uses the contextual information in a restricted manner, e.g., by concatenating the word representation and its context feature vector, preventing the model from learning any direct association between the context and the label of the word. We introduce a new deep model utilizing the contextual information for each word in the given sentence in a multi-task setting. Our model enforces consistency between the feature vectors of the context and the word while increasing the expressiveness of the context about the label of the word. Our empirical analysis on a slot filling dataset proves the superiority of the model over the baselines.
Slot Filling (SF) is the task of identifying the semantic concepts expressed in a natural language utterance. For instance, consider a request to edit an image expressed in natural language: "Remove the blue ball on the table and change the color of the wall to brown". Here, the user asks for an "Action" (i.e., removing) on one "Object" (the blue ball on the table) in the image and for changing an "Attribute" (i.e., the color) of the image to a new "Value" (i.e., brown). Our goal in SF is to provide a sequence of labels for the given sentence to identify the semantic concepts expressed in it. Prior work has shown that contextual information can be useful for SF. It utilizes contextual information either in word-level representations (i.e., via contextualized embeddings, e.g., BERT BIBREF0) or in the model computation graph (e.g., concatenating the context feature to the word feature BIBREF1). However, such methods fail to capture the explicit dependence between the context of a word and its label. Moreover, such limited use of contextual information (i.e., concatenation of the feature vector and context vector) cannot model the interaction between the word representation and its context. In order to alleviate these issues, in this work we propose a novel model to explicitly increase the predictability of the word label from its context and to increase the interactivity between word representations and their context. More specifically, in our model we use the context of each word to predict its label, and by doing so our model learns a label-aware context for each word in the sentence. In order to improve the interactivity between the word representation and its context, we increase the mutual information between the word representations and their context. In addition to these contributions, we also propose an auxiliary task to predict which labels are expressed in a given sentence. Our model is trained in a multi-task framework. Our experiments on an SF dataset for identifying semantic concepts in natural language requests to edit an image show the superiority of our model compared to previous baselines. Our model achieves state-of-the-art results on the benchmark dataset, improving the F1 score by almost 2%, which corresponds to a 12.3% error rate reduction.
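A minimal sketch of the central idea, with hypothetical shapes: besides the usual per-word classifier, a second classifier must predict each word's slot label from its context representation alone, which ties labels to context explicitly. The consistency and mutual-information terms of the full model are omitted here.

```python
# Multi-task loss: predict each word's label from both the word
# representation and its context representation.
import torch
import torch.nn as nn

dim, n_labels = 128, 5
word_clf = nn.Linear(dim, n_labels)   # label from word representation
ctx_clf = nn.Linear(dim, n_labels)    # label from context representation
ce = nn.CrossEntropyLoss()

def joint_loss(word_repr, ctx_repr, labels):
    """word_repr, ctx_repr: (seq, dim); labels: (seq,)"""
    return ce(word_clf(word_repr), labels) + ce(ctx_clf(ctx_repr), labels)

seq = 7
loss = joint_loss(torch.randn(seq, dim), torch.randn(seq, dim),
                  torch.randint(0, n_labels, (seq,)))
loss.backward()
print(loss.item())
```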
625
What are the weaknesses of their proposed interpretability quantification method?
can be biased by the dataset used and may generate categories that are suboptimal compared to human-designed categories
This paper explores the possibility of using alternative data and artificial intelligence techniques to trade stocks. The efficacy of daily Twitter sentiment in predicting stock returns is examined using machine learning methods. Reinforcement learning (Q-learning) is applied to generate the optimal trading policy based on the sentiment signal. The predictive power of the sentiment signal is more significant if the stock price is driven by the expectation of the company's growth and when the company has a major event that draws the public's attention. The optimal trading strategy based on reinforcement learning outperforms the trading strategy based on the machine learning prediction.
In a world where traditional financial information is ubiquitous and financial models are largely homogeneous, finding hidden information that has not been priced in from alternative data is critical. Recent developments in Natural Language Processing provide such opportunities to look into text data in addition to numerical data. When the market sets the stock price, it is not uncommon that the expectation of the company's growth outweighs the company's fundamentals. Twitter, an online news and social network where users post and interact with messages to express views about certain topics, contains valuable information on the public mood and sentiment. A collection of research BIBREF0 BIBREF1 has shown that there is a positive correlation between the "public mood" and the "market mood". Other research BIBREF2 also shows that significant correlation exists between the Twitter sentiment and the abnormal return during peaks of Twitter volume around a major event. Once a signal that has predictive power on the stock market return is constructed, a trading strategy to express the view of the signal is needed. Traditionally, the quantitative finance industry relies on backtesting, a process where the trading strategies are tuned during simulations or optimizations. Reinforcement learning provides a way to find the optimal policy by maximizing the expected future utility. There are recent attempts from the Artificial Intelligence community to apply reinforcement learning to asset allocation BIBREF3, algorithmic trading BIBREF4 BIBREF5, and portfolio management BIBREF6. The contribution of this paper is two-fold: First, the predictive power of Twitter sentiment is evaluated. Our results show that sentiment is more suitable for constructing alpha signals than total return signals, and it shows predictive power especially when the Twitter volume is high. Second, we propose a trading strategy based on reinforcement learning (Q-learning) that takes the sentiment features as part of its states. The paper is organized as follows: In the second section, scraping tweets from the Twitter website and preprocessing the data are described in detail. In the third section, assigning sentiment scores to the text data is discussed. In the fourth section, feature engineering and prediction based on the sentiment scores are discussed. In the fifth section, how reinforcement learning is applied to generate the optimal trading strategy is described.
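A toy sketch of the Q-learning setup described above: states discretize the sentiment signal, actions are flat or long, and the reward is the position times the next day's return. All data and hyperparameters below are placeholders.

```python
# Tabular Q-learning over sentiment states with an epsilon-greedy policy.
import numpy as np

n_states, actions = 3, [0, 1]            # sentiment low/neutral/high; flat/long
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

sentiment_states = rng.integers(0, n_states, size=500)   # placeholder states
returns = rng.normal(0, 0.01, size=500)                  # placeholder returns

for t in range(len(returns) - 1):
    s, s_next = sentiment_states[t], sentiment_states[t + 1]
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
    r = actions[a] * returns[t + 1]      # P&L of holding through the next day
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print(Q)   # learned state-action values
```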
627
How was lexical diversity measured?
By computing, for each question, the number of unique responses and the number of responses divided by the number of unique responses to that question
Designing a reliable natural language (NL) interface for querying tables has been a longtime goal of researchers in both the data management and natural language processing (NLP) communities. Such an interface receives as input an NL question, translates it into a formal query, executes the query and returns the results. Errors in the translation process are not uncommon, and users typically struggle to understand whether their query has been mapped correctly. We address this problem by explaining the obtained formal queries to non-expert users. Two methods for query explanations are presented: the first translates queries into NL, while the second method provides a graphic representation of the query cell-based provenance (in its execution on a given table). Our solution augments a state-of-the-art NL interface over web tables, enhancing it in both its training and deployment phase. Experiments, including a user study conducted on Amazon Mechanical Turk, show our solution to improve both the correctness and reliability of an NL interface.
Natural language interfaces have been gaining significant popularity, enabling ordinary users to write and execute complex queries. One of the prominent paradigms for developing NL interfaces is semantic parsing, which is the mapping of NL phrases into a formal language. As Machine Learning techniques are standardly used in semantic parsing, a training set of question-answer pairs is provided alongside a target database BIBREF0 , BIBREF1 , BIBREF2 . The parser is a parameterized function that is trained by updating its parameters such that questions from the training set are translated into queries that yield the correct answers. A crucial challenge for using semantic parsers is their reliability. Flawless translation from NL to formal language is an open problem, and even state-of-the-art parsers are not always right. With no explanation of the executed query, users are left wondering if the result is actually correct. Consider the example in Figure FIGREF1 , displaying a table of Olympic games and the question "Greece held its last Olympics in what year?". A semantic parser processing the question generates multiple candidate queries and returns the evaluation result of its top-ranked query. The user is only presented with the evaluation result, 2004. Although the end result is correct, she has no clear indication whether the question was correctly parsed. In fact, the interface might have chosen any candidate query yielding 2004. Ensuring the system has executed a correct query (rather than simply returning a correct answer in a particular instance) is essential, as it enables reusing the query as the data evolves over time. For example, a user might wish for a query such as "The average price of the top 5 stocks on Wall Street" to be run on a daily basis. Only its correct translation into SQL will consistently return accurate results. Our approach is to design provenance-based BIBREF3 , BIBREF4 query explanations that are extensible, domain-independent and immediately understandable by non-expert users. We devise a cell-based provenance model for explaining formal queries over web tables and implement it with our query explanations (see Figure FIGREF1 ). We enhance an existing NL interface for querying tables BIBREF5 by introducing a novel component featuring our query explanations. Following the parsing of an input NL question, our component explains the candidate queries to users, allowing non-experts to choose the one that best fits their intention. The immediate application is to improve the quality of obtained queries at deployment time over simply choosing the parser's top query (without user feedback). Furthermore, we show how query explanations can be used to obtain user feedback which is used to retrain the Machine Learning system, thereby improving its performance.
628
What human evaluation method is proposed?
comparing the summary with the text instead of the reference and labeling the candidate bad if it is incorrect or irrelevant
Creative tasks such as ideation or question proposal are powerful applications of crowdsourcing, yet the quantity of workers available for addressing practical problems is often insufficient. To enable scalable crowdsourcing thus requires gaining all possible efficiency and information from available workers. One option for text-focused tasks is to allow assistive technology, such as an autocompletion user interface (AUI), to help workers input text responses. But support for the efficacy of AUIs is mixed. Here we designed and conducted a randomized experiment where workers were asked to provide short text responses to given questions. Our experimental goal was to determine if an AUI helps workers respond more quickly and with improved consistency by mitigating typos and misspellings. Surprisingly, we found that neither occurred: workers assigned to the AUI treatment were slower than those assigned to the non-AUI control and their responses were more diverse, not less, than those of the control. Both the lexical and semantic diversities of responses were higher, with the latter measured using word2vec. A crowdsourcer interested in worker speed may want to avoid using an AUI, but using an AUI to boost response diversity may be valuable to crowdsourcers interested in receiving as much novel information from workers as possible.
Crowdsourcing applications vary from basic, self-contained tasks such as image recognition or labeling BIBREF0 all the way to open-ended and creative endeavors such as collaborative writing, creative question proposal, or more general ideation BIBREF1 . Yet scaling the crowd to very large sets of creative tasks may require prohibitive numbers of workers. Scalability is one of the key challenges in crowdsourcing: how to best apply the valuable but limited resources provided by crowd workers and how to help workers be as efficient as possible. Efficiency gains can be achieved either collectively at the level of the entire crowd or by helping individual workers. At the crowd level, efficiency can be gained by assigning tasks to workers in the best order BIBREF2 , by filtering out poor tasks or workers, or by best incentivizing workers BIBREF3 . At the individual worker level, efficiency gains can come from helping workers craft more accurate responses and complete tasks in less time. One way to make workers individually more efficient is to computationally augment their task interface with useful information. For example, an autocompletion user interface (AUI) BIBREF4 , such as used on Google's main search page, may speed up workers as they answer questions or propose ideas. However, support for the benefits of AUIs is mixed, and existing research has not considered short, repetitive inputs such as those required by many large-scale crowdsourcing problems. More generally, it is not yet clear what the best approaches or general strategies are to achieve efficiency gains for creative crowdsourcing tasks. In this work, we conducted a randomized trial of the benefits of allowing workers to answer a text-based question with the help of an autocompletion user interface. Workers interacted with a web form that recorded how quickly they entered text into the response field and how quickly they submitted their responses after typing was completed. After the experiment concluded, we measured response diversity using textual analyses and response quality using a followup crowdsourcing task with an independent population of workers. Our results indicate that the AUI treatment did not affect quality, and did not help workers perform more quickly or achieve greater response consensus. Instead, workers with the AUI were significantly slower and their responses were more diverse than workers in the non-AUI control group.
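The two diversity measurements mentioned above can be sketched compactly. In the sketch below, `word_vectors` stands in for any loaded word2vec model (a mapping from word to vector), and the exact statistics only approximate those used in the study.

```python
import itertools
import numpy as np

def lexical_diversity(responses):
    # Number of unique responses, and responses per unique response.
    unique = set(responses)
    return len(unique), len(responses) / len(unique)

def response_vector(response, word_vectors):
    # Average the word2vec vectors of the in-vocabulary words.
    vecs = [word_vectors[w] for w in response.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else None

def semantic_diversity(responses, word_vectors):
    # Mean pairwise cosine distance between response vectors;
    # higher values indicate more semantically diverse responses.
    vecs = [v for v in (response_vector(r, word_vectors) for r in responses)
            if v is not None]
    dists = [1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
             for a, b in itertools.combinations(vecs, 2)]
    return float(np.mean(dists))
```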
631
What languages are represented in the dataset?
EN, JA, ES, AR, PT, KO, TH, FR, TR, RU, IT, DE, PL, NL, EL, SV, FA, VI, FI, CS, UK, HI, DA, HU, NO, RO, SR, LV, BG, UR, TA, MR, BN, IN, KN, ET, SL, GU, CY, ZH, CKB, IS, LT, ML, SI, IW, NE, KM, MY, TL, KA, BO
Short-text classification, like all data science, struggles to achieve high performance using limited data. As a solution, a short sentence may be expanded with new and relevant feature words to form an artificially enlarged dataset and to add new features to testing data. This paper applies a novel approach to text expansion by generating new words directly for each input sentence, thus requiring no additional datasets or previous training. In this unsupervised approach, new keywords are formed within the hidden states of a pre-trained language model and then used to create extended pseudo documents. The word generation process was assessed by examining how well the predicted words matched the topics of the input sentence. It was found that this method could produce 3-10 relevant new words for each target topic, while generating just 1 word related to each non-target topic. Generated words were then added to short news headlines to create extended pseudo headlines. Experimental results have shown that models trained using the pseudo headlines can improve classification accuracy when limiting the number of training examples.
The web has provided researchers with vast amounts of unlabeled text data, and enabled the development of increasingly sophisticated language models which can achieve state-of-the-art performance despite having no task-specific training BIBREF0, BIBREF1, BIBREF2. It is desirable to adapt these models for bespoke tasks such as short-text classification. Short-text is nuanced, difficult to model statistically, and sparse in features, hindering traditional analysis BIBREF3. These difficulties become further compounded when training is limited, as is the case for many practical applications. This paper provides a method to expand short-text with additional keywords, generated using a pre-trained language model. The method takes advantage of general language understanding to suggest contextually relevant new words, without necessitating additional domain data. The method can form both derivatives of the input vocabulary and entirely new words arising from contextualised word interactions, and is ideally suited for applications where data volume is limited. [Figure: Binary classification of short headlines into 'WorldPost' or 'Crime' categories shows improved performance with extended pseudo headlines when the training set is small. Random forest classifier, 1000 test examples, 10-fold cross-validation.]
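The paper forms keywords within the hidden states of the pre-trained language model itself; the sketch below conveys only the general flavour of unsupervised, LM-driven expansion, using an off-the-shelf masked-LM pipeline with a single appended mask. The model choice and the one-mask expansion scheme are illustrative assumptions, not the authors' procedure.

```python
from transformers import pipeline

# Pre-trained masked language model; no task-specific training required.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

def expand_headline(headline, n_new_words=5):
    # Append a mask token and let the LM propose contextually relevant words.
    predictions = unmasker(headline + " [MASK]", top_k=n_new_words)
    new_words = [p["token_str"].strip() for p in predictions]
    return headline + " " + " ".join(new_words)  # extended pseudo headline

print(expand_headline("police arrest suspect in downtown robbery"))
```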
633
How faster is training and decoding compared to former models?
Proposed vs. best baseline. Decoding: 8541 vs. 8532 tokens/sec; training: 8h vs. 8h
One of the basic tasks of computational language documentation (CLD) is to identify word boundaries in an unsegmented phonemic stream. While several unsupervised monolingual word segmentation algorithms exist in the literature, they are challenged in real-world CLD settings by the small amount of available data. A possible remedy is to take advantage of glosses or translations in a foreign, well-resourced language, which often exist for such data. In this paper, we explore and compare ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information, notably introducing a new loss function for jointly learning alignment and segmentation. We experiment with an actual under-resourced language, Mboshi, and show that these techniques can effectively control the output segmentation length.
All over the world, languages are disappearing at an unprecedented rate, fostering the need for specific tools aimed at aiding field linguists to collect, transcribe, analyze, and annotate endangered language data (e.g. BIBREF0, BIBREF1). A remarkable effort in this direction has improved the data collection procedures and tools BIBREF2, BIBREF3, enabling the collection of corpora for an increasing number of endangered languages (e.g. BIBREF4). One of the basic tasks of computational language documentation (CLD) is to identify word or morpheme boundaries in an unsegmented phonemic or orthographic stream. Several unsupervised monolingual word segmentation algorithms exist in the literature, based, for instance, on information-theoretic BIBREF5, BIBREF6 or nonparametric Bayesian techniques BIBREF7, BIBREF8. These techniques are, however, challenged in real-world settings by the small amount of available data. A possible remedy is to take advantage of glosses or translations in a foreign, well-resourced language (WL), which often exist for such data, hoping that the bilingual context will provide additional cues to guide the segmentation algorithm. Such techniques have already been explored, for instance, in BIBREF9, BIBREF10 in the context of improving statistical alignment and translation models; and in BIBREF11, BIBREF12, BIBREF13 using Attentional Neural Machine Translation (NMT) models. In these latter studies, word segmentation is obtained by post-processing attention matrices, taking attention information as a noisy proxy to word alignment BIBREF14. In this paper, we explore ways to exploit neural machine translation models to perform unsupervised boundary detection with bilingual information. Our main contribution is a new loss function for jointly learning alignment and segmentation in neural translation models, allowing us to better control the length of utterances. Our experiments with an actual under-resourced language (UL), Mboshi BIBREF17, show that this technique outperforms our bilingual segmentation baseline.
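Before introducing the joint loss, it helps to see the baseline used in the cited prior work: post-processing a soft-alignment (attention) matrix into boundaries. A minimal sketch, assuming the attention matrix is already available:

```python
import numpy as np

def boundaries_from_attention(attention, phonemes):
    # attention: (num_target_words, num_source_phonemes) soft-alignment matrix.
    # Assign each phoneme to its most-attended target word and start a new
    # segment whenever the aligned word index changes.
    aligned = attention.argmax(axis=0)
    segments, current = [], [phonemes[0]]
    for i in range(1, len(phonemes)):
        if aligned[i] != aligned[i - 1]:
            segments.append("".join(current))
            current = []
        current.append(phonemes[i])
    segments.append("".join(current))
    return segments

# Toy example: 3 target (WL) words attending over 6 source (UL) phonemes.
att = np.array([[0.90, 0.80, 0.10, 0.10, 0.00, 0.00],
                [0.05, 0.10, 0.80, 0.70, 0.10, 0.10],
                [0.05, 0.10, 0.10, 0.20, 0.90, 0.90]])
print(boundaries_from_attention(att, list("wapiro")))  # -> ['wa', 'pi', 'ro']
```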
637
What datasets are used to evaluate the model?
WN18 and FB15k
We propose a methodology that adapts graph embedding techniques (DeepWalk (Perozzi et al., 2014) and node2vec (Grover and Leskovec, 2016)) as well as cross-lingual vector space mapping approaches (Least Squares and Canonical Correlation Analysis) in order to merge the corpus and ontological sources of lexical knowledge. We also perform comparative analysis of the used algorithms in order to identify the best combination for the proposed system. We then apply this to the task of enhancing the coverage of an existing word embedding's vocabulary with rare and unseen words. We show that our technique can provide considerable extra coverage (over 99%), leading to consistent performance gain (around 10% absolute gain is achieved with w2v-gn-500K; cf. Section 3.3) on the Rare Word Similarity dataset.
The prominent model for representing semantics of words is the distributional vector space model BIBREF2 and the prevalent approach for constructing these models is the distributional one which assumes that semantics of a word can be predicted from its context, hence placing words with similar contexts in close proximity to each other in an imaginary high-dimensional vector space. Distributional techniques, either in their conventional form which compute co-occurrence matrices BIBREF2 , BIBREF3 and learn high-dimensional vectors for words, or the recent neural-based paradigm which directly learns latent low-dimensional vectors, usually referred to as embeddings BIBREF4 , rely on a multitude of occurrences for each individual word to enable accurate representations. As a result of this statistical nature, words that are infrequent or unseen during training, such as domain-specific words, will not have reliable embeddings. This is the case even if massive corpora are used for training, such as the 100B-word Google News dataset BIBREF5 . Recent work on embedding induction has mainly focused on morphologically complex rare words and has tried to address the problem by learning transformations that can transfer a word's semantic information to its morphological variations, hence inducing embeddings for complex forms by breaking them into their sub-word units BIBREF6 , BIBREF7 , BIBREF8 . However, these techniques are unable to effectively model single-morpheme words for which no sub-word information is available in the training data, essentially ignoring most of the rare domain-specific entities which are crucial in the performance of NLP systems when applied to those domains. On the other hand, distributional techniques generally ignore all the lexical knowledge that is encoded in dictionaries, ontologies, or other lexical resources. There exist hundreds of high coverage or domain-specific lexical resources which contain valuable information for infrequent words, particularly in domains such as health. Here, we present a methodology that merges the two worlds by benefiting from both expert-based lexical knowledge encoded in external resources as well as statistical information derived from large corpora, enabling vocabulary expansion not only for morphological variations but also for infrequent single-morpheme words. The contributions of this work are twofold: (1) we propose a technique that induces embeddings for rare and unseen words by exploiting the information encoded for them in an external lexical resource, and (2) we apply, possibly for the first time, vector space mapping techniques, which are widely used in multilingual settings, to map two lexical semantic spaces with different properties in the same language. We show that a transfer methodology can lead to consistent improvements on a standard rare word similarity dataset.
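The Least Squares variant of the cross-space mapping can be sketched in a few lines; dimensions and data below are placeholders (Canonical Correlation Analysis is the alternative mapping compared in the paper).

```python
import numpy as np

def fit_mapping(X, Y):
    # Least-squares transform W minimizing ||XW - Y||_F; the rows of X and Y
    # are embeddings of the same shared-vocabulary words in the two spaces.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

# X: resource-derived (graph-embedding) space, Y: corpus-derived space.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 100))
Y = rng.normal(size=(1000, 300))
W = fit_mapping(X, Y)

# Induce a corpus-space embedding for a word seen only in the resource.
rare_in_corpus_space = X[0] @ W
```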
638
What is the source of the dataset?
Online sites tagged as fake news sites by Verafiles and NUJP, and news websites in the Philippines, including Pilipino Star Ngayon, Abante, and Bandera
Recent studies on knowledge base completion, the task of recovering missing facts based on observed facts, demonstrate the importance of learning embeddings from multi-step relations. Due to the size of knowledge bases, previous works manually design relation paths of observed triplets in symbolic space (e.g. random walk) to learn multi-step relations during training. However, these approaches suffer from some limitations, as most paths are not informative and it is prohibitively expensive to consider all possible paths. To address the limitations, we propose learning to traverse in vector space directly without the need for symbolic space guidance. To remember the connections between related observed triplets and to be able to adaptively change relation paths in vector space, we propose Implicit ReasoNets (IRNs), which are composed of a global memory and a controller module, to learn multi-step relation paths in vector space and infer missing facts jointly without any human-designed procedure. Without using any auxiliary information, our proposed model achieves state-of-the-art results on popular knowledge base completion benchmarks.
null
640
What metadata is included?
besides claim, label and claim url, it also includes a claim ID, reason, category, speaker, checker, tags, claim entities, article title, publish date and claim date
In this paper, we propose a novel unsupervised learning method for the lexical acquisition of words related to places visited by robots, from human continuous speech signals. We address the problem of learning novel words by a robot that has no prior knowledge of these words except for a primitive acoustic model. Further, we propose a method that allows a robot to effectively use the learned words and their meanings for self-localization tasks. The proposed method is a nonparametric Bayesian spatial concept acquisition method (SpCoA) that integrates a generative model for self-localization and unsupervised word segmentation of uttered sentences via latent variables related to the spatial concept. We implemented SpCoA on SIGVerse, a simulation environment, and on TurtleBot2, a mobile robot in a real environment, and conducted experiments to evaluate its performance. The experimental results showed that SpCoA enabled the robot to acquire the names of places from speech sentences. They also revealed that the robot could effectively utilize the acquired spatial concepts and reduce the uncertainty in self-localization.
Autonomous robots, such as service robots, operating in human living environments have to be able to perform various tasks and communicate with humans in natural language. To this end, robots are required to acquire novel concepts and vocabulary on the basis of the information obtained from their sensors, e.g., laser sensors, microphones, and cameras, and to recognize a variety of objects, places, and situations in an ambient environment. Above all, we consider it important for the robot to learn the names that humans associate with places in the environment and the spatial areas corresponding to these names; i.e., the robot has to be able to understand words related to places. It is therefore important to deal with considerable uncertainty, such as the robot's movement errors, sensor noise, and speech recognition errors. Several studies on language acquisition by robots have assumed that robots have no prior lexical knowledge. These studies differ from speech recognition studies based on a large vocabulary and natural language processing studies based on lexical, syntactic, and semantic knowledge BIBREF0 , BIBREF1 . Studies on language acquisition by robots also constitute a constructive approach to the human developmental process and the emergence of symbols. The objectives of this study were to build a robot that learns words related to places and efficiently utilizes this learned vocabulary in self-localization. Lexical acquisition related to places is expected to enable a robot to improve its spatial cognition. A schematic representation depicting the target task of this study is shown in Fig. FIGREF3 . This study assumes that a robot does not have any vocabulary in advance but can recognize syllables or phonemes. The robot then performs self-localization while moving around in the environment, as shown in Fig. FIGREF3 (a). An utterer speaks a sentence including the name of the place to the robot, as shown in Fig. FIGREF3 (b). For the purposes of this study, we need to consider the problems of self-localization and lexical acquisition simultaneously. When a robot learns novel words from utterances, it is difficult to determine segmentation boundaries and the identity of different phoneme sequences from the speech recognition results, which can lead to errors. First, let us consider the case of the lexical acquisition of an isolated word. For example, if a robot obtains the speech recognition results “aporu”, “epou”, and “aqpuru” (incorrect phoneme recognition of apple), it is difficult for the robot to determine whether they denote the same referent without prior knowledge. Second, let us consider a case of the lexical acquisition of the utterance of a sentence. For example, a robot obtains a speech recognition result such as “thisizanaporu.” The robot then necessarily has to segment the sentence into individual words, e.g., “this”, “iz”, “an”, and “aporu”. In addition, it is necessary for the robot to recognize words referring to the same referent, e.g., the fruit apple, from among the many segmented results that contain errors. In the case of Fig. FIGREF3 (c), there is some possibility of learning names that include phoneme errors, e.g., “afroqtabutibe”, because the robot does not have any lexical knowledge. On the other hand, when a robot performs online probabilistic self-localization, we assume that the robot uses sensor data and control data, e.g., values obtained using a range sensor and odometry.
If the position of the robot on the global map is unclear, the difficulties associated with identifying the self-position using only local sensor information become problematic. In the case of global localization using local information, e.g., a range sensor, the problem that hypotheses of the self-position are present in multiple remote locations frequently occurs, as shown in Fig. FIGREF3 (d). In order to solve the abovementioned problems, in this study, we adopted the following approach. An utterance is recognized not as a single phoneme sequence but as a set of multiple candidate phoneme sequences. We attempt to suppress the variability in the speech recognition results by performing word discovery that takes these multiple recognition candidates into account. In addition, the names of places are learned by associating words with positions. The lexical acquisition is complemented by particular spatial information, i.e., information obtained by hearing utterances that include the same word in the same place many times. Furthermore, in this study, we attempt to address the problem of the uncertainty of self-localization by correcting self-position errors using a recognized utterance that includes the name of the current place together with the acquired spatial concepts, as shown in Fig. FIGREF3 (e). In this paper, we propose the nonparametric Bayesian spatial concept acquisition method (SpCoA) on the basis of unsupervised word segmentation and a nonparametric Bayesian generative model that integrates self-localization and clustering of both words and places. The main contributions of this paper are as follows: The remainder of this paper is organized as follows: In Section 2, previous studies on language acquisition and lexical acquisition relevant to our study are described. In Section 3, the proposed method SpCoA is presented. In Sections 4 and 5, we discuss the effectiveness of SpCoA in the simulation and in the real environment. Section 6 concludes this paper.
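To make the localization step concrete, the following is a minimal Monte Carlo localization sketch in which hearing a place name multiplies each particle's weight by the spatial likelihood of that name. The Gaussian-per-word concept model and all names here are illustrative simplifications of SpCoA's full generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

def reweight_by_place_name(particles, weights, word, concepts):
    # concepts: word -> (mean, cov) of a Gaussian over positions where the
    # word was heard; hearing "kitchen" boosts particles near that region.
    mu, cov = concepts[word]
    inv, det = np.linalg.inv(cov), np.linalg.det(cov)
    diff = particles - mu
    lik = np.exp(-0.5 * np.einsum("ni,ij,nj->n", diff, inv, diff))
    lik /= 2 * np.pi * np.sqrt(det)  # 2-D Gaussian normalization
    new_w = weights * lik
    return new_w / new_w.sum()

# Bimodal position hypotheses (as in Fig. FIGREF3 (d)) collapse toward the
# region associated with the uttered place name.
particles = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                       rng.normal([5, 5], 0.3, (50, 2))])
weights = np.full(100, 1 / 100)
concepts = {"kitchen": (np.array([5.0, 5.0]), np.eye(2))}
weights = reweight_by_place_name(particles, weights, "kitchen", concepts)
print(weights[:50].sum(), weights[50:].sum())  # mass shifts to the second mode
```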
641
How much important is the visual grounding in the learning of the multilingual representations?
performance is significantly degraded without pixel data
We contribute the largest publicly available dataset of naturally occurring factual claims for the purpose of automatic claim verification. It is collected from 26 fact checking websites in English, paired with textual sources and rich metadata, and labelled for veracity by human expert journalists. We present an in-depth analysis of the dataset, highlighting characteristics and challenges. Further, we present results for automatic veracity prediction, both with established baselines and with a novel method for joint ranking of evidence pages and predicting veracity that outperforms all baselines. Significant performance increases are achieved by encoding evidence, and by modelling metadata. Our best-performing model achieves a Macro F1 of 49.2%, showing that this is a challenging testbed for claim veracity prediction.
Misinformation and disinformation are two of the most pertinent and difficult challenges of the information age, exacerbated by the popularity of social media. In an effort to counter this, a significant amount of manual labour has been invested in fact checking claims, often collecting the results of these manual checks on fact checking portals or websites such as politifact.com or snopes.com. In a parallel development, researchers have recently started to view fact checking as a task that can be partially automated, using machine learning and NLP to automatically predict the veracity of claims. However, existing efforts either use small datasets consisting of naturally occurring claims (e.g. BIBREF0 , BIBREF1 ), or datasets consisting of artificially constructed claims such as FEVER BIBREF2 . While the latter offer valuable contributions to further automatic claim verification work, they cannot replace real-world datasets.
642
How is the generative model evaluated?
Comparing BLEU score of model with and without attention
There has been significant interest recently in learning multilingual word embeddings -- in which semantically similar words across languages have similar embeddings. State-of-the-art approaches have relied on expensive labeled data, which is unavailable for low-resource languages, or have involved post-hoc unification of monolingual embeddings. In the present paper, we investigate the efficacy of multilingual embeddings learned from weakly-supervised image-text data. In particular, we propose methods for learning multilingual embeddings using image-text data, by enforcing similarity between the representations of the image and that of the text. Our experiments reveal that even without using any expensive labeled data, a bag-of-words-based embedding model trained on image-text data achieves performance comparable to the state-of-the-art on crosslingual semantic similarity tasks.
Recent advances in learning distributed representations for words (i.e., word embeddings) have resulted in improvements across numerous natural language understanding tasks BIBREF0 , BIBREF1 . These methods use unlabeled text corpora to model the semantic content of words using their co-occurring context words. Key to this is the observation that semantically similar words have similar contexts BIBREF2 , thus leading to similar word embeddings. A limitation of these word embedding approaches is that they only produce monolingual embeddings. This is because word co-occurrences are very likely to be limited to being within language rather than across language in text corpora. Hence semantically similar words across languages are unlikely to have similar word embeddings. To remedy this, there has been recent work on learning multilingual word embeddings, in which semantically similar words within and across languages have similar word embeddings BIBREF3 . Multilingual embeddings are not just interesting as an interlingua between multiple languages; they are useful in many downstream applications. For example, one application of multilingual embeddings is to find semantically similar words and phrases across languages BIBREF4 . Another use of multilingual embeddings is in enabling zero-shot learning on unseen languages, just as monolingual word embeddings enable predictions on unseen words BIBREF5 . In other words, a classifier using pretrained multilingual word embeddings can generalize to other languages even if training data is only in English. Interestingly, multilingual embeddings have also been shown to improve monolingual task performance BIBREF6 , BIBREF7 . Consequently, multilingual embeddings can be very useful for low-resource languages – they allow us to overcome the scarcity of data in these languages. However, as detailed in Section "Related Work" , most work on learning multilingual word embeddings so far has heavily relied on the availability of expensive resources such as word-aligned / sentence-aligned parallel corpora or bilingual lexicons. Unfortunately, this data can be prohibitively expensive to collect for many languages. Furthermore even for languages with such data available, the coverage of the data is a limiting factor that restricts how much of the semantic space can be aligned across languages. Overcoming this data bottleneck is a key contribution of our work. We investigate the use of cheaply available, weakly-supervised image-text data for learning multilingual embeddings. Images are a rich, language-agnostic medium that can provide a bridge across languages. For example, the English word “cat” might be found on webpages containing images of cats. Similarly, the German word “katze” (meaning cat) is likely to be found on other webpages containing similar (or perhaps identical) images of cats. Thus, images can be used to learn that these words have similar semantic content. Importantly, image-text data is generally available on the internet even for low-resource languages. As image data has proliferated on the internet, tools for understanding images have advanced considerably. Convolutional neural networks (CNNs) have achieved roughly human-level or better performance on vision tasks, particularly classification BIBREF8 , BIBREF9 , BIBREF10 . During classification of an image, CNNs compute intermediate outputs that have been used as generic image features that perform well across a variety of vision tasks BIBREF11 . 
We use these image features to enforce that words associated with similar images have similar embeddings. Since words associated with similar images are likely to have similar semantic content, even across languages, our learned embeddings capture crosslingual similarity. There has been other recent work on reducing the amount of supervision required to learn multilingual embeddings (cf. Section "Related Work" ). These methods take monolingual embeddings learned using existing methods and align them post-hoc in a shared embedding space. A limitation with post-hoc alignment of monolingual embeddings, first noticed by BIBREF12 , is that doing training of monolingual embeddings and alignment separately may lead to worse results than joint training of embeddings in one step. Since the monolingual embedding objective is distinct from the multilingual embedding objective, monolingual embeddings are not required to capture all information helpful for post-hoc multilingual alignment. Post-hoc alignment loses out on some information, whereas joint training does not. BIBREF12 observe improved results using a joint training method compared to a similar post-hoc method. Thus, a joint training approach is desirable. To our knowledge, no previous method jointly learns multilingual word embeddings using weakly-supervised data available for low-resource languages. To summarize: In this paper we propose an approach for learning multilingual word embeddings using image-text data jointly across all languages. We demonstrate that even a bag-of-words based embedding approach achieves performance competitive with the state-of-the-art on crosslingual semantic similarity tasks. We present experiments for understanding the effect of using pixel data as compared to co-occurrences alone. We also provide a method for training and making predictions on multilingual word embeddings even when the language of the text is unknown.
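A hedged sketch of the core idea follows: push a caption's text representation toward the CNN feature vector of its associated image, with the other images in the batch serving as negatives. The bag-of-words encoder mirrors the paper's simplest model, but the specific in-batch contrastive loss and temperature are our own choices, not necessarily theirs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BowTextEncoder(nn.Module):
    # Bag-of-words encoder: average word embeddings, then project to the
    # dimensionality of the (frozen) CNN image features.
    def __init__(self, vocab_size, dim, image_dim):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")
        self.proj = nn.Linear(dim, image_dim)

    def forward(self, token_ids, offsets):
        return F.normalize(self.proj(self.emb(token_ids, offsets)), dim=-1)

def batch_loss(text_vecs, image_feats, temperature=0.07):
    # Align each caption (in any language) with its own image; other images
    # in the batch act as negatives (an InfoNCE-style contrastive loss).
    image_feats = F.normalize(image_feats, dim=-1)
    logits = text_vecs @ image_feats.t() / temperature
    targets = torch.arange(len(text_vecs), device=text_vecs.device)
    return F.cross_entropy(logits, targets)
```

Because captions in different languages attached to similar images are pulled toward the same image features, their text embeddings end up close to each other, which is the crosslingual effect the approach exploits.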
643
What is an example of a health-related tweet?
The health benefits of alcohol consumption are more limited than previously thought, researchers say
The ability to reason with natural language is a fundamental prerequisite for many NLP tasks such as information extraction, machine translation and question answering. To quantify this ability, systems are commonly tested whether they can recognize textual entailment, i.e., whether one sentence can be inferred from another one. However, in most NLP applications only single source sentences instead of sentence pairs are available. Hence, we propose a new task that measures how well a model can generate an entailed sentence from a source sentence. We take entailment-pairs of the Stanford Natural Language Inference corpus and train an LSTM with attention. On a manually annotated test set we found that 82% of generated sentences are correct, an improvement of 10.3% over an LSTM baseline. A qualitative analysis shows that this model is not only capable of shortening input sentences, but also inferring new statements via paraphrasing and phrase entailment. We then apply this model recursively to input-output pairs, thereby generating natural language inference chains that can be used to automatically construct an entailment graph from source sentences. Finally, by swapping source and target sentences we can also train a model that given an input sentence invents additional information to generate a new sentence.
The ability to determine entailment or contradiction between natural language text is essential for improving performance in a wide range of natural language processing tasks. Recognizing Textual Entailment (RTE) is a task primarily designed to determine whether two natural language sentences are independent, contradictory or in an entailment relationship where the second sentence (the hypothesis) can be inferred from the first (the premise). Although systems that perform well in RTE could potentially be used to improve question answering, information extraction, text summarization and machine translation BIBREF0 , only in few of such downstream NLP tasks are sentence-pairs actually available. Usually, only a single source sentence (e.g. a question that needs to be answered or a source sentence that we want to translate) is present and models need to come up with their own hypotheses and commonsense knowledge inferences. The release of the large Stanford Natural Language Inference (SNLI) corpus BIBREF1 allowed end-to-end differentiable neural networks to outperform feature-based classifiers on the RTE task BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . In this work, we go a step further and investigate how well recurrent neural networks can produce true hypotheses given a source sentence. Furthermore, we qualitatively demonstrate that by only training on input-output pairs and recursively generating entailed sentences we can generate natural language inference chains (see Figure 1 for an example). Note that every inference step is interpretable as it is mapping one natural language sentence to another one. Our contributions are fourfold: (i) we propose an entailment generation task based on the SNLI corpus, (ii) we investigate a sequence-to-sequence model and find that $82\%$ of generated sentences are correct, (iii) we demonstrate the ability to generate natural language inference chains trained solely from entailment pairs, and finally (iv) we can also generate sentences with more specific information by swapping source and target sentences during training.
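Once the entailment-generation model is trained, inference chains fall out of simple recursion. In the sketch below, `generate_entailment` is a hypothetical stand-in for the trained LSTM decoder; the toy dictionary only illustrates the interface.

```python
# Toy stand-in for the trained seq2seq decoder (hypothetical interface).
toy = {"A wedding party walks out of a building.":
           "A wedding party is leaving a building.",
       "A wedding party is leaving a building.":
           "People are leaving a building."}

def inference_chain(source, generate_entailment, depth=3):
    # Recursively feed each generated hypothesis back in as the next premise.
    chain = [source]
    for _ in range(depth):
        hypothesis = generate_entailment(chain[-1])
        if hypothesis is None or hypothesis == chain[-1]:
            break  # no further inference (or a fixed point) reached
        chain.append(hypothesis)
    return chain

print(inference_chain("A wedding party walks out of a building.", toy.get))
```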
645
What is the challenge for other language except English
not researched as much as English
The cryptocurrency is attracting more and more attention because of the blockchain technology. Ethereum is gaining significant popularity in the blockchain community, mainly due to the fact that it is designed in a way that enables developers to write smart contracts and decentralized applications (Dapps). There are many kinds of cryptocurrency information on social networks. The risks and fraud problems behind it have pushed many countries, including the United States, South Korea, and China, to issue warnings and set up corresponding regulations. However, the security of Ethereum smart contracts has not gained much attention. Through a Deep Learning approach, we propose a method of sentiment analysis for Ethereum's community comments. In this research, we first collected users' cryptocurrency comments from social networks and then fed them to our LSTM + CNN model for training. We then made predictions through sentiment analysis. With our research results, we have demonstrated that both the precision and the recall of sentiment analysis can achieve 0.80+. More importantly, we deploy our sentiment analysis on RatingToken and Coin Master (the mobile application of the Cheetah Mobile Blockchain Security Center). We can effectively provide detailed information to help resolve the risks of fakes and fraud.
Since Satoshi Nakamoto published the article "Bitcoin: A Peer-to-Peer Electronic Cash System" in 2008 BIBREF0 , and after the official launch of Bitcoin in 2009, technologies such as blockchain and cryptocurrency have attracted attention from academia and industry. At present, these technologies have been applied to many fields such as medical science, economics, and the Internet of Things BIBREF1 . Since the launch of Ethereum (Next Generation Encryption Platform) BIBREF2 with the smart contract function proposed by Vitalik Buterin in 2015, its dedicated cryptocurrency Ether, its smart contracts, its blockchain, and its decentralized Ethereum Virtual Machine (EVM) have attracted considerable attention. The main reason is that its design provides developers with the ability to develop Decentralized apps (Dapps), and thus enables wider applications. A new application paradigm opens the door to many possibilities and opportunities. Initial Coin Offerings (ICOs) are a financing method for the blockchain industry. As an example of financial innovation, ICO provides rapid access to capital for new ventures but suffers from drawbacks relating to non-regulation, considerable risk, and non-accountability. According to a report prepared by Satis Group Crypto Research, around 81% of the total number of ICOs launched since 2017 have turned out to be scams BIBREF3 . Also, an inquiry by the University of Pennsylvania reveals that many ICOs failed even to promise that they would protect investors against insider self-dealing. Google, Facebook, and Twitter have announced that they will ban advertising of cryptocurrencies, ICOs, etc. in the future. Fraudulent pyramid selling of virtual currency happens frequently in China. The People's Bank of China has banned the provision of services for virtual currency transactions and ICO activities. The incredibly large number of ICO projects makes it difficult for people to recognize their risks. In order to find items of interest, people usually query social networks for opinions on those items, and then view or further purchase them. In reality, the opinion on social network platforms is the major entrance for users. People are inclined to view or buy items that have been purchased by many other people and/or have high review scores. Sentiment analysis is the contextual mining of text which identifies and extracts subjective information in the source material, helping a business understand the social sentiment of its brand, product, or service while monitoring online conversations. To resolve the risks and fraud problems, it is important not only to analyze social-network opinion but also to scan smart contracts for vulnerabilities.
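A minimal sketch of an LSTM + CNN sentiment classifier of the kind described is shown below; the vocabulary size, layer sizes, and layer ordering are illustrative, and the paper's exact topology may differ.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB = 20000  # assumed vocabulary size for tokenized comments

model = models.Sequential([
    layers.Embedding(VOCAB, 128),
    layers.LSTM(64, return_sequences=True),    # sequential context
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram patterns
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),     # positive vs. negative comment
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```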
647
Which matching features do they employ?
Features obtained by matching sentences from various perspectives.
This paper studies how the linguistic components of blogposts collected from Sina Weibo, a Chinese microblogging platform, might affect the blogposts' likelihood of being censored. Our results go along with King et al. (2013)'s Collective Action Potential (CAP) theory, which states that a blogpost's potential of causing riot or assembly in real life is the key determinant of it getting censored. Although there is not a definitive measure of this construct, the linguistic features that we identify as discriminatory go along with the CAP theory. We build a classifier that significantly outperforms non-expert humans in predicting whether a blogpost will be censored. The crowdsourcing results suggest that while humans tend to see censored blogposts as more controversial and more likely to trigger action in real life than the uncensored counterparts, they in general cannot make a better guess than our model when it comes to `reading the mind' of the censors in deciding whether a blogpost should be censored. We do not claim that censorship is only determined by the linguistic features. There are many other factors contributing to censorship decisions. The focus of the present paper is on the linguistic form of blogposts. Our work suggests that it is possible to use linguistic properties of social media posts to automatically predict if they are going to be censored.
In 2019, Freedom in the World, a yearly survey produced by Freedom House that measures the degree of civil liberties and political rights in every nation, recorded the 13th consecutive year of decline in global freedom. This decline spans across long-standing democracies such as the USA as well as authoritarian regimes such as China and Russia. “Democracy is in retreat. The offensive against freedom of expression is being supercharged by a new and more effective form of digital authoritarianism.” According to the report, China is now exporting its model of comprehensive internet censorship and surveillance around the world, offering trainings, seminars, and even study trips as well as advanced equipment. In this paper, we deal with a particular type of censorship – when a post gets removed from a social media platform semi-automatically based on its content. We are interested in exploring whether there are systematic linguistic differences between posts that get removed by censors from Sina Weibo, a Chinese microblogging platform, and the posts that remain on the website. Sina Weibo was launched in 2009 and became the most popular social media platform in China. Sina Weibo has over 431 million monthly active users. In cooperation with the ruling regime, Weibo sets strict control over the content published under its service BIBREF0. According to Zhu et al. (2013), Weibo uses a variety of strategies to target censorable posts, ranging from keyword list filtering to individual user monitoring. Among all posts that are eventually censored, nearly 30% of them are censored within 5–30 minutes, and nearly 90% within 24 hours BIBREF1. We hypothesize that the former are done automatically, while the latter are removed by human censors. Research shows that some of the censorship decisions are not necessarily driven by the criticism of the state BIBREF2, the presence of controversial topics BIBREF3, BIBREF4, or posts that describe negative events BIBREF5. Rather, censorship is triggered by other factors, such as, for example, the collective action potential BIBREF2, i.e., censors target posts that stimulate collective action, such as riots and protests. The goal of this paper is to compare censored and uncensored posts that contain the same sensitive keywords and topics. Using the linguistic features extracted, a neural network model is built to explore whether the censorship decision can be deduced from the linguistic characteristics of the posts. The contributions of this paper are: 1. We decipher a way to determine whether a blogpost on Weibo has been deleted by the author or censored by Weibo. 2. We develop a corpus of censored and uncensored Weibo blogposts that contain sensitive keyword(s). 3. We build a neural network classifier that predicts censorship significantly better than non-expert humans. 4. We find a set of linguistic features that contributes to the censorship prediction problem. 5. We indirectly test the construct of Collective Action Potential (CAP) proposed by King et al. (2013) through crowdsourcing experiments and find that the existence of CAP is more prevalent in censored blogposts than uncensored blogposts, as judged by human annotators.
651
How large is the corpus they use?
449050
Every day media generate large amounts of text. An unbiased view on media reports requires an understanding of the political bias of media content. Assistive technology for estimating the political bias of texts can be helpful in this context. This study proposes a simple statistical learning approach to predict political bias from text. Standard text features extracted from speeches and manifestos of political parties are used to predict political bias in terms of political party affiliation and in terms of political views. Results indicate that political bias can be predicted with above chance accuracy. Mistakes of the model can be interpreted with respect to changes of policies of political actors. Two approaches are presented to make the results more interpretable: a) discriminative text features are related to the political orientation of a party and b) sentiment features of texts are correlated with a measure of political power. Political power appears to be strongly correlated with positive sentiment of a text. To highlight some potential use cases a web application shows how the model can be used for texts for which the political bias is not clear such as news articles.
Modern media generate a large amount of content at an ever increasing rate. Keeping an unbiased view on what media report requires an understanding of the political bias of texts. In many cases it is obvious which political bias an author has. In other cases some expertise is required to judge the political bias of a text. When dealing with large amounts of text, however, there are simply not enough experts to examine all possible sources and publications. Assistive technology can help in this context to try and obtain a more unbiased sample of information. Ideally one would choose for each topic a sample of reports from the entire political spectrum in order to form an unbiased opinion. But ordering media content with respect to the political spectrum at scale requires automated prediction of political bias. The aim of this study is to provide empirical evidence indicating that, leveraging open data sources of German texts, automated political bias prediction is possible with above-chance accuracy. These experimental results confirm and extend previous findings BIBREF0 , BIBREF1 ; a novel contribution of this work is a proof of concept which applies this technology to sort news article recommendations according to their political bias. When human experts determine the political bias of texts, they take responsibility for what they say about a text, and they can explain their decisions. This is a key difference to many statistical learning approaches. Not only is the responsibility question problematic, it can also be difficult to interpret some of the decisions. In order to validate and explain the predictions of the models, three strategies that allow for better interpretations of the models are proposed. First, the model misclassifications are related to changes in party policies. Second, univariate measures of correlation between text features and party affiliation allow the predictions to be related to the kind of information that political experts use for interpreting texts. Third, sentiment analysis is used to investigate whether this aspect of language has discriminatory power. In the following, the Related Work section briefly surveys related work; thereafter the Data section gives an overview of the data acquisition and preprocessing methods, and the Model section presents the model, training, and evaluation procedures. The Results section discusses the results, and the Conclusion closes with some interpretations of the results and future research directions.
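As a concrete baseline for "standard text features plus statistical learning", party affiliation can be predicted with a linear classifier over n-gram features. The data below are toy stand-ins, since the study's speeches and manifestos are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; the study uses party speeches and manifestos.
texts = ["wir senken steuern fuer unternehmen",
         "mehr soziale gerechtigkeit jetzt",
         "steuern runter wachstum rauf",
         "gerechtigkeit und soziale sicherheit"]
parties = ["conservative", "left", "conservative", "left"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, parties)
print(clf.predict(["soziale sicherheit ausbauen"]))  # -> ['left']
```

The largest per-class coefficients of the fitted model then give exactly the kind of discriminative text features that can be related to a party's political orientation, as proposed above.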
653
How many shared layers are in the system?
1
Methods for unsupervised hypernym detection may broadly be categorized according to two paradigms: pattern-based and distributional methods. In this paper, we study the performance of both approaches on several hypernymy tasks and find that simple pattern-based methods consistently outperform distributional methods on common benchmark datasets. Our results show that pattern-based models provide important contextual constraints which are not yet captured in distributional methods.
Hierarchical relationships play a central role in knowledge representation and reasoning. Hypernym detection, i.e., the modeling of word-level hierarchies, has long been an important task in natural language processing. Starting with BIBREF0 , pattern-based methods have been one of the most influential approaches to this problem. Their key idea is to exploit certain lexico-syntactic patterns to detect is-a relations in text. For instance, patterns like “$y$ such as $x$”, or “$x$ and other $y$” often indicate hypernymy relations of the form $x$ is-a $y$. Such patterns may be predefined, or they may be learned automatically BIBREF1 , BIBREF2 . However, a well-known problem of Hearst-like patterns is their extreme sparsity: words must co-occur in exactly the right configuration, or else no relation can be detected. To alleviate the sparsity issue, the focus in hypernymy detection has recently shifted to distributional representations, wherein words are represented as vectors based on their distribution across large corpora. Such methods offer rich representations of lexical meaning, alleviating the sparsity problem, but require specialized similarity measures to distinguish different lexical relationships. The most successful measures to date are generally inspired by the Distributional Inclusion Hypothesis (DIH) BIBREF3 , which states roughly that contexts in which a narrow term $x$ may appear (“cat”) should be a subset of the contexts in which a broader term $y$ (“animal”) may appear. Intuitively, the DIH states that we should be able to replace any occurrence of “cat” with “animal” and still have a valid utterance. An important insight from work on distributional methods is that the definition of context is often critical to the success of a system BIBREF4 . Some distributional representations, like positional or dependency-based contexts, may even capture crude Hearst pattern-like features BIBREF5 , BIBREF6 . While both approaches for hypernym detection rely on co-occurrences within certain contexts, they differ in their context selection strategy: pattern-based methods use predefined manually-curated patterns to generate high-precision extractions while DIH methods rely on unconstrained word co-occurrences in large corpora. Here, we revisit the idea of using pattern-based methods for hypernym detection. We evaluate several pattern-based models on modern, large corpora and compare them to methods based on the DIH. We find that simple pattern-based methods consistently outperform specialized DIH methods on several difficult hypernymy tasks, including detection, direction prediction, and graded entailment ranking. Moreover, we find that taking low-rank embeddings of pattern-based models substantially improves performance by remedying the sparsity issue. Overall, our results show that Hearst patterns provide high-quality and robust predictions on large corpora by capturing important contextual constraints, which are not yet modeled in distributional methods.
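Sparsity aside, the extraction step itself is simple. A minimal regex sketch of the two patterns quoted above, approximating noun phrases by single lowercase tokens:

```python
import re

# Two classic Hearst patterns; x is the hyponym and y the hypernym.
PATTERNS = [
    (re.compile(r"(\w+) such as (\w+)"), lambda m: (m.group(2), m.group(1))),
    (re.compile(r"(\w+) and other (\w+)"), lambda m: (m.group(1), m.group(2))),
]

def extract_isa(text):
    # Returns (hyponym, hypernym) pairs detected in the text.
    pairs = []
    for pattern, to_pair in PATTERNS:
        pairs += [to_pair(m) for m in pattern.finditer(text.lower())]
    return pairs

print(extract_isa("Animals such as cats purred; dogs and other pets barked."))
# -> [('cats', 'animals'), ('dogs', 'pets')]
```

Counting such extractions over a large corpus, and optionally taking a low-rank factorization of the resulting pair counts, is the essence of the pattern-based models evaluated here.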
656
How many layers of self-attention does the model have?
1, 4, 8, 16, 32, 64
It has recently been shown that word embeddings encode social biases, with a harmful impact on downstream tasks. However, to this point there has been no similar work done in the field of graph embeddings. We present the first study on social bias in knowledge graph embeddings, and propose a new metric suitable for measuring such bias. We conduct experiments on Wikidata and Freebase, and show that, as with word embeddings, harmful social biases related to professions are encoded in the embeddings with respect to gender, religion, ethnicity and nationality. For example, graph embeddings encode the information that men are more likely to be bankers, and women more likely to be homekeepers. As graph embeddings become increasingly utilized, we suggest that it is important the existence of such biases are understood and steps taken to mitigate their impact.
Recent work in the word embeddings literature has shown that embeddings encode gender and racial biases, BIBREF0, BIBREF1, BIBREF2. These biases can have harmful effects in downstream tasks including coreference resolution, BIBREF3, and machine translation, BIBREF4, leading to the development of a range of methods to try to mitigate such biases, BIBREF0, BIBREF5. In an adjacent literature, learning embeddings of knowledge graph (KG) entities and relations is becoming an increasingly common first step in utilizing KGs for a range of tasks, from missing link prediction, BIBREF6, BIBREF7, to more recent methods integrating learned embeddings into language models, BIBREF8, BIBREF9, BIBREF10. A natural question to ask is “do graph embeddings encode social biases in similar fashion to word embeddings?”. We show that existing methods for identifying bias in word embeddings are not suitable for KG embeddings, and present an approach to overcome this using embedding finetuning. We demonstrate (perhaps unsurprisingly) that unequal distributions of people of different genders, ethnicities, religions and nationalities in Freebase and Wikidata result in biases related to professions being encoded in graph embeddings, such as that men are more likely to be bankers and women more likely to be homekeepers. Such biases are potentially harmful when KG embeddings are used in applications. For example, if embeddings are used in a fact checking task, they would make it less likely that we accept facts that a female entity is a politician as opposed to a male entity. Alternatively, as KG embeddings get utilized as input to language models BIBREF8, BIBREF9, BIBREF10, such biases can affect all downstream NLP tasks.
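A much simpler cosine-based score conveys the intuition behind such measurements, although the metric actually proposed in the paper is finetuning-based. Here `emb` is any trained entity-embedding lookup; the QIDs are the real Wikidata items for male and female.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_profession_bias(emb, profession, male="Q6581097", female="Q6581072"):
    # Positive score: the profession embedding lies closer to the male
    # gender entity; negative: closer to the female one.
    return cosine(emb[profession], emb[male]) - cosine(emb[profession], emb[female])

# Toy embeddings only; real usage would load trained KG embeddings.
rng = np.random.default_rng(0)
emb = {k: rng.normal(size=50) for k in ["Q6581097", "Q6581072", "banker"]}
print(gender_profession_bias(emb, "banker"))
```

Ranking professions by such a score would, per the paper's findings, place e.g. "banker" toward the male end and "homekeeper" toward the female end.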
665
what are the state of the art methods?
S2VT, RGB (VGG), RGB (VGG)+Flow (AlexNet), LSTM-E (VGG), LSTM-E (C3D) and Yao et al.
Speaker Recognition is a challenging task with essential applications such as authentication, automation, and security. SincNet is a new deep-learning-based model which has produced promising results on this task. When training deep learning systems, the loss function is essential to the network's performance. The Softmax loss function is widely used in deep learning methods, but it is not the best choice for all kinds of problems. For distance-based problems, a new Softmax-based loss function called Additive Margin Softmax (AM-Softmax) is proving to be a better choice than the traditional Softmax. The AM-Softmax introduces a margin of separation between the classes that forces samples from the same class to be closer to each other and also maximizes the distance between classes. In this paper, we propose a new approach for speaker recognition systems called AM-SincNet, which is based on SincNet but uses an improved AM-Softmax layer. The proposed method is evaluated on the TIMIT dataset and obtained an improvement of approximately 40% in the Frame Error Rate compared to SincNet.
Speaker Recognition is an essential task with applications in biometric authentication, identification, and security, among others BIBREF0 . The field is divided into two main subtasks: Speaker Identification and Speaker Verification. In Speaker Identification, given an audio sample, the model tries to identify to which speaker in a list of predetermined speakers the locution belongs. In Speaker Verification, the model verifies whether a sampled audio belongs to a given speaker or not. Most of the techniques in the literature to tackle this problem are based on $i$-vector methods BIBREF1 , which extract features from the audio samples and classify the features using methods such as PLDA BIBREF2 , heavy-tailed PLDA BIBREF3 , and Gaussian PLDA BIBREF4 . Despite the advances in recent years BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , Speaker Recognition is still a challenging problem. In the past years, Deep Neural Networks (DNNs) have been taking over pattern recognition and signal processing tasks. Convolutional Neural Networks (CNNs) have already shown that they are currently the best choice for image classification, detection, and recognition tasks. In the same way, DNN models are being used in combination with traditional approaches or in end-to-end approaches for Speaker Recognition tasks BIBREF12 , BIBREF13 , BIBREF14 . In hybrid approaches, it is common to use the DNN model to extract features from the raw audio samples and encode them into low-dimensional embedding vectors in which samples sharing common features lie closer to each other. Usually, the embedding vectors are then classified using traditional approaches. The difficulty behind Speaker Recognition tasks is that audio signals are hard to model with low- and high-level features that are discriminative enough to distinguish different speakers. Methods that use handcrafted features can extract more human-readable features and are more appealing in the sense that humans can see what the method is doing and which features are used to make the inference. Nevertheless, handcrafted features lack power. In fact, while we know what patterns they are looking for, we have no guarantee that these patterns are the best for the job. On the other hand, approaches based on Deep Learning have the power to learn patterns that humans may not be able to understand, and usually achieve better results than traditional methods, despite the higher computational cost of training. A promising approach to Speaker Recognition based on Deep Learning is the SincNet model BIBREF16 , which unifies the power of Deep Learning with the interpretability of handcrafted features. SincNet uses a Deep Learning model to process raw audio samples and learn powerful features. It replaces the first layer of the DNN model, which is responsible for the convolution, with parametrized sinc functions. The parametrized sinc functions implement band-pass filters and are used to convolve the waveform audio signal to extract basic low-level features to be later processed by the deeper layers of the network. The use of the sinc functions helps the network to learn more relevant features and also improves the convergence time of the model, as the sinc functions have significantly fewer parameters than a traditional first layer. At the top of the model, SincNet uses a Softmax layer which is responsible for mapping the final features processed by the network into a multi-dimensional space corresponding to the different classes or speakers.
The Softmax function is usually used as the last layer of DNN models. It delimits a linear surface that can serve as a decision boundary separating samples from different classes. Although Softmax works well for optimizing a decision boundary between classes, it is not designed to minimize the distance between samples of the same class. This characteristic may hurt the model on tasks like Speaker Verification, which require measuring the distance between samples to make a decision. To deal with this problem, new approaches such as Additive Margin Softmax BIBREF15 (AM-Softmax) have been proposed. AM-Softmax introduces an additive margin to the decision boundary, which forces samples of the same class to be closer to each other while maximizing the distance between classes. In this paper, we propose a new method for Speaker Verification called Additive Margin SincNet (AM-SincNet), which is heavily inspired by the SincNet architecture and the AM-Softmax loss function. To validate our hypothesis, the proposed method is evaluated on the TIMIT BIBREF17 dataset in terms of the Frame Error Rate. The remaining sections are organized as follows: Section 2 presents related work, the proposed method is introduced in Section 3, Section 4 explains how we built our experiments, the results are discussed in Section 5, and finally Section 6 draws our conclusions.
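As a concrete illustration of the loss just described, the following PyTorch sketch subtracts an additive margin m from the target-class cosine similarity before a scaled cross-entropy. The scale s=30 and margin m=0.35 are common values from the AM-Softmax literature, and the dimensions in the toy usage are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def am_softmax_loss(features, weight, labels, s=30.0, m=0.35):
    """Additive Margin Softmax, a sketch of the loss described above.
    features: (batch, dim) embeddings; weight: (num_classes, dim)."""
    # Cosine similarity between L2-normalized features and class weights.
    cos = F.linear(F.normalize(features), F.normalize(weight))  # (batch, classes)
    # Subtract the additive margin m from the target-class cosine only.
    onehot = F.one_hot(labels, num_classes=weight.size(0)).float()
    logits = s * (cos - m * onehot)
    return F.cross_entropy(logits, labels)

# Toy usage: 8 samples, 64-dim embeddings, 10 speakers.
feats = torch.randn(8, 64)
W = torch.randn(10, 64, requires_grad=True)
y = torch.randint(0, 10, (8,))
loss = am_softmax_loss(feats, W, y)
loss.backward()
```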
668
Which four languages do they experiment with?
German, English, Italian, Chinese
De-identification is the process of removing the 18 categories of protected health information (PHI) from clinical notes so that the text is no longer considered individually identifiable. Recent advances in natural language processing (NLP) have allowed the use of deep learning techniques for the task of de-identification. In this paper, we present a deep learning architecture that builds on the latest NLP advances by incorporating deep contextualized word embeddings and variational dropout Bi-LSTMs. We test this architecture on two gold standard datasets and show that it achieves state-of-the-art performance on both while also converging faster than other systems, without the use of dictionaries or other knowledge sources.
669
Does DCA or GMM-based attention perform better in experiments?
About the same performance
The majority of existing speech emotion recognition models are trained and evaluated on a single corpus and a single language setting. These systems do not perform as well when applied in a cross-corpus and cross-language scenario. This paper presents results for speech emotion recognition for 4 languages in both single-corpus and cross-corpus settings. Additionally, since multi-task learning (MTL) with gender, naturalness and arousal as auxiliary tasks has been shown to enhance the generalisation capabilities of emotion models, this paper introduces language ID as another auxiliary task in the MTL framework to explore the role of spoken language in emotion recognition, which has not been studied yet.
Speech conveys human emotions most naturally. In recent years there has been increased research interest in the speech emotion recognition (SER) domain. The first step in a typical SER system is extracting linguistic and acoustic features from the speech signal. Some para-linguistic studies find Low-Level Descriptor (LLD) features of the speech signal to be most relevant to studying emotions in speech. These features include frequency-related parameters like pitch and jitter, energy parameters like shimmer and loudness, spectral parameters like the alpha ratio, and other parameters that convey cepstral and dynamic information. Feature extraction is followed by a classification task to predict the emotions of the speaker. Data scarcity, i.e., the lack of freely available speech corpora, is a problem for research in the speech domain in general. This also means that there are even fewer resources for studying emotion in speech, and those that are available are dissimilar in terms of spoken language, type of emotion (i.e. naturalistic, elicited, or acted) and labelling scheme (i.e. dimensional or categorical). Across various studies involving SER, we observe that model performance depends heavily on whether training and testing are performed on the same corpus. Performance is best when the focus is on a single corpus at a time, without considering the performance of the model in cross-language and cross-corpus scenarios. In this work, we work with diverse SER datasets, i.e., we tackle the problem in both cross-language and cross-corpus settings. We use transfer learning across SER datasets and investigate the effects of the spoken language on the accuracy of the emotion recognition system using our Multi-Task Learning framework. The paper is organized as follows: Section 2 reviews related work on SER, cross-lingual and cross-corpus SER, and recent studies on the role of language identification in speech emotion recognition systems; Section 3 describes the datasets that have been used; Section 4 presents detailed descriptions of the three types of SER experiments we conduct in this paper. In Section 5, we present the results and evaluations of our models. Section 6 presents some additional experiments to draw a direct comparison with previously published research. Finally, we discuss future work and conclude the paper.
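To illustrate the multi-task setup described above, here is a hedged PyTorch sketch of a shared encoder over utterance-level LLD features with emotion as the main task and language ID as an auxiliary head; the feature dimension, layer sizes, class counts, and the auxiliary loss weight are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class MultiTaskSER(nn.Module):
    """Sketch of an MTL model: a shared encoder over acoustic LLD feature
    vectors, with an emotion head (main task) and a language-ID head
    (auxiliary task). All sizes here are illustrative."""
    def __init__(self, n_feats=88, n_emotions=4, n_langs=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_feats, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.emotion_head = nn.Linear(64, n_emotions)   # main task
        self.lang_head = nn.Linear(64, n_langs)         # auxiliary task

    def forward(self, x):
        h = self.encoder(x)
        return self.emotion_head(h), self.lang_head(h)

model = MultiTaskSER()
x = torch.randn(16, 88)                       # batch of LLD feature vectors
emo_logits, lang_logits = model(x)
# Joint objective: weighted sum of the main and auxiliary losses.
loss = nn.functional.cross_entropy(emo_logits, torch.randint(0, 4, (16,))) \
     + 0.3 * nn.functional.cross_entropy(lang_logits, torch.randint(0, 4, (16,)))
```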
670
What evaluation metric is used?
F1 and Weighted-F1
Despite the ability to produce human-level speech for in-domain text, attention-based end-to-end text-to-speech (TTS) systems suffer from text alignment failures that increase in frequency for out-of-domain text. We show that these failures can be addressed using simple location-relative attention mechanisms that do away with content-based query/key comparisons. We compare two families of attention mechanisms: location-relative GMM-based mechanisms and additive energy-based mechanisms. We suggest simple modifications to GMM-based attention that allow it to align quickly and consistently during training, and introduce a new location-relative attention mechanism to the additive energy-based family, called Dynamic Convolution Attention (DCA). We compare the various mechanisms in terms of alignment speed and consistency during training, naturalness, and ability to generalize to long utterances, and conclude that GMM attention and DCA can generalize to very long utterances, while preserving naturalness for shorter, in-domain utterances.
Sequence-to-sequence models that use an attention mechanism to align the input and output sequences BIBREF0, BIBREF1 are currently the predominant paradigm in end-to-end TTS. Approaches based on the seminal Tacotron system BIBREF2 have demonstrated naturalness that rivals that of human speech for certain domains BIBREF3. Despite these successes, there are sometimes complaints of a lack of robustness in the alignment procedure that leads to missing or repeating words, incomplete synthesis, or an inability to generalize to longer utterances BIBREF4, BIBREF5, BIBREF6. The original Tacotron system BIBREF2 used the content-based attention mechanism introduced in BIBREF1 to align the target text with the output spectrogram. This mechanism is purely content-based and does not exploit the monotonicity and locality properties of TTS alignment, making it one of the least stable choices. The Tacotron 2 system BIBREF3 used the improved hybrid location-sensitive mechanism from BIBREF7 that combines content-based and location-based features, allowing generalization to utterances longer than those seen during training. The hybrid mechanism still has occasional alignment issues which led a number of authors to develop attention mechanisms that directly exploit monotonicity BIBREF8, BIBREF4, BIBREF5. These monotonic alignment mechanisms have demonstrated properties like increased alignment speed during training, improved stability, enhanced naturalness, and a virtual elimination of synthesis errors. Downsides of these methods include decreased efficiency due to a reliance on recursion to marginalize over possible alignments, the necessity of training hacks to ensure learning doesn't stall or become unstable, and decreased quality when operating in a more efficient hard alignment mode during inference. Separately, some authors BIBREF9 have moved back toward the purely location-based GMM attention introduced by Graves in BIBREF0, and some have proposed stabilizing GMM attention by using softplus nonlinearities in place of the exponential function BIBREF10, BIBREF11. However, there has been no systematic comparison of these design choices. In this paper, we compare the content-based and location-sensitive mechanisms used in Tacotron 1 and 2 with a variety of simple location-relative mechanisms in terms of alignment speed and consistency, naturalness of the synthesized speech, and ability to generalize to long utterances. We show that GMM-based mechanisms are able to generalize to very long (potentially infinite-length) utterances, and we introduce simple modifications that result in improved speed and consistency of alignment during training. We also introduce a new location-relative mechanism called Dynamic Convolution Attention that modifies the hybrid location-sensitive mechanism from Tacotron 2 to be purely location-based, allowing it to generalize to very long utterances as well.
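As a concrete picture of the location-relative GMM attention discussed above, the following numpy sketch computes one decoder step: the mixture means can only move forward, because the per-step deltas are passed through a softplus, which is what gives the mechanism its monotonicity and its ability to handle arbitrarily long memories. The number of mixture components and the softplus stabilization follow the variants cited in the text, while the toy dimensions are arbitrary.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def gmm_attention_step(params, mu_prev, memory_len, K=5):
    """One decoder step of location-relative GMM attention (a sketch).
    `params` is the (3K,) output of a small network conditioned on the
    decoder state; splitting and stabilization are illustrative choices."""
    w_hat, delta_hat, sigma_hat = np.split(params, 3)
    w = np.exp(w_hat) / np.exp(w_hat).sum()       # mixture weights (softmax)
    delta = softplus(delta_hat)                   # non-negative forward moves
    sigma = softplus(sigma_hat) + 1e-5            # strictly positive widths
    mu = mu_prev + delta                          # means only ever advance
    j = np.arange(memory_len)[:, None]            # memory positions
    # Mixture of Gaussians over positions -> alignment probabilities.
    phi = (w * np.exp(-0.5 * ((j - mu) / sigma) ** 2)).sum(axis=1)
    return phi / (phi.sum() + 1e-8), mu

align, mu = gmm_attention_step(np.random.randn(15), mu_prev=np.zeros(5),
                               memory_len=100)
```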
671
Is any data-to-text generation model trained on this new corpus, what are the results?
Yes, a Transformer-based seq2seq model is evaluated, with average BLEU 0.519, METEOR 0.388, ROUGE 0.631, CIDEr 2.531 and SER 2.55%.
The detection of offensive language in the context of a dialogue has become an increasingly important application of natural language processing. The detection of trolls in public forums (Galan-Garcia et al., 2016), and the deployment of chatbots in the public domain (Wolf et al., 2017) are two examples that show the necessity of guarding against adversarially offensive behavior on the part of humans. In this work, we develop a training scheme for a model to become robust to such human attacks by an iterative build it, break it, fix it strategy with humans and models in the loop. In detailed experiments we show this approach is considerably more robust than previous systems. Further, we show that offensive language used within a conversation critically depends on the dialogue context, and cannot be viewed as a single sentence offensive detection task as in most previous work. Our newly collected tasks and methods will be made open source and publicly available.
The detection of offensive language has become an important topic as the online community has grown, and so too has the number of bad actors BIBREF2. Such behavior includes, but is not limited to, trolling in public discussion forums BIBREF3 and via social media BIBREF4, BIBREF5, employing hate speech that expresses prejudice against a particular group, or offensive language specifically targeting an individual. Such actions can be motivated to cause harm from which the bad actor derives enjoyment, despite negative consequences to others BIBREF6. As such, some bad actors go to great lengths both to avoid detection and to achieve their goals BIBREF7. In that context, any attempt to automatically detect this behavior can be expected to be adversarially attacked by looking for weaknesses in the detection system, which currently can easily be exploited as shown in BIBREF8, BIBREF9. A further example, relevant to the natural language processing community, is the exploitation of weaknesses in machine learning models that generate text, to force them to emit offensive language. Adversarial attacks on the Tay chatbot led to the developers shutting down the system BIBREF1. In this work, we study the detection of offensive language in dialogue with models that are robust to adversarial attack. We develop an automatic approach to the “Build it Break it Fix it” strategy originally adopted for writing secure programs BIBREF10, and to the “Build it Break it” approach subsequently adapted for NLP BIBREF11. In the latter work, two teams of researchers, “builders” and “breakers”, were used to first create sentiment and semantic role-labeling systems and then construct examples that find their faults. In this work we instead fully automate such an approach using crowdworkers as the humans-in-the-loop, and also apply a fixing stage where models are retrained to improve them. Finally, we repeat the whole build, break, and fix sequence over a number of iterations. We show that such an approach provides more and more robust systems over the fixing iterations. Analysis of the type of data collected in the iterations of the break it phase shows clear distribution changes, moving away from simple use of profanity and other obvious offensive words to utterances that require understanding of world knowledge, figurative language, and use of negation to detect whether they are offensive or not. Further, data collected in the context of a dialogue rather than a sentence without context provides more sophisticated attacks. We show that model architectures that use the dialogue context efficiently perform much better than systems that do not, where the latter has been the main focus of existing research BIBREF12, BIBREF5, BIBREF13. Code for our entire build it, break it, fix it algorithm will be made open source, complete with model training code and crowdsourcing interface for humans. Our data and trained models will also be made available for the community.
676
How are the potentially relevant text fragments identified?
Generate a query out of the claim and query a search engine, rank the words by means of TF-IDF, use IBM's AlchemyAPI to identify named entities, generate queries of 5–10 tokens, which are executed against a search engine, and collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable.
In this paper, we demonstrate the importance of coreference resolution for natural language processing on the example of the TAC Slot Filling shared task. We illustrate the strengths and weaknesses of automatic coreference resolution systems and provide experimental results to show that they improve performance in the slot filling end-to-end setting. Finally, we publish KBPchains, a resource containing automatically extracted coreference chains from the TAC source corpus in order to support other researchers working on this topic.
Coreference resolution systems group noun phrases (mentions) that refer to the same entity into the same chain. Mentions can be full names (e.g., John Miller), pronouns (e.g., he), demonstratives (e.g., this), comparatives (e.g., the first) or descriptions of the entity (e.g., the 40-year-old) BIBREF0 . Although coreference resolution has been a research focus for several years, systems are still far from perfect. Nevertheless, there are many tasks in natural language processing (NLP) which would benefit from coreference information, such as information extraction, question answering or summarization BIBREF1 . In BIBREF2 , for example, we showed that coreference information can also be incorporated into word embedding training. In general, coreference resolution systems can be used as a pre-processing step or as part of a pipeline of different modules. Slot Filling is an information extraction task which has become popular in recent years BIBREF3 . It is a shared task organized by the Text Analysis Conference (TAC). The task aims at extracting information about persons, organizations or geo-political entities from a large collection of news, web and discussion forum documents. An example is “Steve Jobs” for the slot “X founded Apple”. Thinking of a text passage like “Steve Jobs was an American businessman. In 1976, he co-founded Apple”, it is clear that coreference resolution can play an important role in finding the correct slot filler value. In this study, we investigate how coreference resolution could help to improve performance on slot filling and which challenges exist. Furthermore, we describe how we pre-processed the TAC source corpus with a coreference resolution system in order to be able to run the slot filling system more efficiently. In addition to this paper, we also publish the results of this pre-processing, since it required considerable computation time and resources.
679
What dataset did they use?
weibo-100k, Ontonotes, LCQMC and XNLI
The strength of pragmatic inferences systematically depends on linguistic and contextual cues. For example, the presence of a partitive construction increases the strength of a so-called scalar inference: humans perceive the inference that Chris did not eat all of the cookies to be stronger after hearing "Chris ate some of the cookies" than after hearing the same utterance without a partitive, "Chris ate some cookies". In this work, we explore to what extent it is possible to learn associations between linguistic cues and inference strength ratings without direct supervision. We show that an LSTM-based sentence encoder with an attention mechanism trained on a dataset of human inference strength ratings is able to predict ratings with high accuracy (r=0.78). We probe the model's behavior in multiple analyses using corpus data and manually constructed minimal pairs and find that the model learns associations between linguistic cues and scalar inferences, suggesting that these associations are inferable from statistical input.
An important property of human communication is that listeners can infer information beyond the literal meaning of an utterance. One well-studied type of inference is scalar inference BIBREF0, BIBREF1, whereby a listener who hears an utterance with a scalar item like some infers the negation of a stronger alternative with all: Chris ate some of the cookies. $\rightsquigarrow $ Chris ate some, but not all, of the cookies. Early accounts of scalar inferences (e.g., BIBREF2, BIBREF1, BIBREF3) considered them to arise by default unless specifically cancelled by the context. However, in a recent corpus study, degen2015investigating showed that there is much more variability in the strength of scalar inferences from some to not all than previously assumed. degen2015investigating further showed that this variability is not random and that several lexical, syntactic, and semantic/pragmatic features of context explain much of the variance in inference strength. Recent Bayesian game-theoretic models of pragmatic reasoning BIBREF4, BIBREF5, which are capable of integrating multiple linguistic cues with world knowledge, are able to correctly predict listeners' pragmatic inferences in many cases (e.g., BIBREF6, BIBREF7). These experimental and modeling results suggest that listeners integrate multiple linguistic and contextual cues in utterance interpretation, raising the question of how listeners are able to draw these pragmatic inferences so quickly in interaction. This is an especially pressing problem considering that inference in Bayesian models of pragmatics is intractable when scaling up beyond toy domains to make predictions about arbitrary utterances. One possibility is that language users learn to take shortcuts to the inference (or lack thereof) by learning associations between the speaker's intention and surface-level cues present in the linguistic signal across many instances of encountering a scalar expression like some. In this work, we investigate whether it is possible to learn such associations between cues in the linguistic signal and speaker intentions by training neural network models to predict empirically elicited inference strength ratings from the linguistic input. In this enterprise we follow the recent successes of neural network models in predicting a range of linguistic phenomena such as long distance syntactic dependencies (e.g., BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15), semantic entailments (e.g., BIBREF16, BIBREF17), acceptability judgements BIBREF18, factuality BIBREF19, and, to some extent, speaker commitment BIBREF20. In particular, we ask: How well can a neural network sentence encoder learn to predict human inference strength judgments for utterances with some? To what extent does such a model capture the qualitative effects of hand-mined contextual features previously identified as influencing inference strength? To address these questions, we first compare the performance of neural models that differ in the underlying word embedding model (GloVe, ELMo, or BERT) and in the sentence embedding model (LSTM, LSTM$+$attention). We then probe the best model's behavior through a regression analysis, an analysis of attention weights, and an analysis of predictions on manually constructed minimal sentence pairs.
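For concreteness, here is a minimal PyTorch sketch of the kind of model discussed here: a BiLSTM with additive attention over pre-computed word embeddings, regressing a scalar inference-strength rating. The hidden sizes are illustrative assumptions, and this is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class InferenceStrengthModel(nn.Module):
    """Sketch of an LSTM+attention sentence encoder that predicts a scalar
    inference-strength rating from embedded tokens. Sizes are illustrative."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)     # one attention score per token
        self.out = nn.Linear(2 * hidden, 1)      # scalar rating

    def forward(self, embedded):                 # (batch, seq, emb_dim)
        states, _ = self.lstm(embedded)          # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)
        sentence = (weights * states).sum(dim=1) # attention-weighted average
        return self.out(sentence).squeeze(-1), weights.squeeze(-1)

model = InferenceStrengthModel()
rating, attn = model(torch.randn(4, 12, 300))   # e.g. GloVe-embedded sentences
```

Returning the attention weights alongside the rating is what makes the attention-weight analysis mentioned at the end of the section possible.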
680
What is the size of the dataset?
3029
Most Chinese pre-trained encoders take a character as a basic unit and learn representations according to character's external contexts, ignoring the semantics expressed in the word, which is the smallest meaningful unit in Chinese. Hence, we propose a novel word aligned attention to incorporate word segmentation information, which is complementary to various Chinese pre-trained language models. Specifically, we devise a mixed-pooling strategy to align the character level attention to the word level, and propose an effective fusion method to solve the potential issue of segmentation error propagation. As a result, word and character information are explicitly integrated at the fine-tuning procedure. Experimental results on various Chinese NLP benchmarks demonstrate that our model could bring another significant gain over several pre-trained models.
Pre-trained Language Models (PLM) such as ELMo BIBREF0, BERT BIBREF1, ERNIE BIBREF2 and XLNet BIBREF3 have been proven to capture rich language information from text and then benefit many NLP applications by simple fine-tuning, including sentiment classification, natural language inference, named entity recognition and so on. Generally, most PLMs focus on using the attention mechanism BIBREF4 to represent natural language, such as word-level attention for English and character-level attention for Chinese. Unlike English, in Chinese, words are not separated by explicit delimiters, which means that the character is the smallest linguistic unit. However, in most cases, the semantics of a single Chinese character are ambiguous. For example, in Table 1, using the attention over the word 西山 is more intuitive than over the two individual characters 西 and 山. Moreover, previous work has shown that considering word segmentation information can lead to better language understanding and accordingly benefits various Chinese NLP tasks BIBREF5, BIBREF6, BIBREF7. All these factors motivate us to expand the character-level attention mechanism in Chinese PLM to represent attention over words. To this end, there are two main challenges. (1) How to seamlessly integrate word segmentation information into the character-level attention module of a PLM is an important problem. (2) Gold-standard segmentation is rarely available in the downstream tasks, and how to effectively reduce the cascading noise caused by automatic segmentation tools BIBREF8 is another challenge. In this paper, we propose a new architecture, named Multi-source Word-aligned Attention (MWA), to solve the above issues. (1) Psycholinguistic experiments BIBREF9, BIBREF10 have shown that readers are likely to pay approximately equal attention to each character in a Chinese word. Drawing inspiration from this finding, we introduce a novel word-aligned attention, which aggregates the attention weights of the characters in a word into a unified value with a mixed pooling strategy BIBREF11. (2) To reduce segmentation error, we further extend our word-aligned attention with multi-source segmentation produced by various segmenters, and deploy a fusion function to pull together their disparate outputs. In this way, we can implicitly reduce the error caused by automatic annotation. Extensive experiments are conducted on various Chinese NLP datasets including named entity recognition, sentiment classification, sentence pair matching, natural language inference, etc. The results show that the proposed model brings another gain over BERT BIBREF1, ERNIE BIBREF2 and BERT-wwm BIBREF12, BIBREF13 in all the tasks.
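The core word-aligned step can be sketched in a few lines: character-level attention weights within each segmenter-provided word span are merged by mixed pooling and broadcast back to the characters. This is a simplified single-sequence, single-head illustration; the mixing coefficient and the example spans are assumptions.

```python
import torch

def word_aligned_pool(char_attn, word_spans, lam=0.5):
    """Mixed-pooling aggregation in the spirit of the word-aligned attention
    described above: for each word, merge its characters' attention weights
    with lam * max + (1 - lam) * mean, then broadcast the value back to the
    word's characters. `word_spans` holds (start, end) character offsets
    produced by a segmenter."""
    out = char_attn.clone()
    for start, end in word_spans:
        seg = char_attn[start:end]
        pooled = lam * seg.max() + (1 - lam) * seg.mean()
        out[start:end] = pooled
    return out / out.sum()  # renormalize to a distribution

attn = torch.softmax(torch.randn(6), dim=0)     # attention over 6 characters
spans = [(0, 2), (2, 5), (5, 6)]                # 3 words from a segmenter
print(word_aligned_pool(attn, spans))
```

The multi-source extension would run this with spans from several segmenters and fuse the resulting distributions, which is how segmentation errors are averaged out.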
681
What are the 12 AV approaches which are examined?
MOCC, OCCAV, COAV, AVeer, GLAD, DistAV, Unmasking, Caravel, GenIM, ImpGI, SPATIUM and NNCD
This work investigates the task-oriented dialogue problem in mixed-domain settings. We study the effect of alternating between different domains in sequences of dialogue turns using two related state-of-the-art dialogue systems. We first show that a specialized state tracking component in multiple domains plays an important role and gives better results than an end-to-end task-oriented dialogue system. We then propose a hybrid system which is able to improve the belief tracking accuracy by an average of about 28% absolute on a standard multi-domain dialogue dataset. These experimental results give some useful insights for improving our commercial chatbot platform FPT.AI, which is currently deployed for many practical chatbot applications.
In this work, we investigate the problem of task-oriented dialogue in mixed-domain settings. Our work is related to two lines of research in Spoken Dialogue Systems (SDS), namely task-oriented dialogue systems and multi-domain dialogue systems. We briefly review the recent literature related to these topics as follows. Task-oriented dialogue systems are computer programs which can assist users in completing tasks in specific domains by understanding user requests and generating appropriate responses within several dialogue turns. Such systems are useful in domain-specific chatbot applications which help users find a restaurant or book a hotel. The conventional approach for building a task-oriented dialogue system is to build a quite complex pipeline of many connected components. These components are usually independently developed and include at least four crucial modules: a natural language understanding module, a dialogue state tracking module, a dialogue policy learning module, and an answer generation module. Since these system components are usually trained independently, their optimization targets may not fully align with the overall system evaluation criteria BIBREF0. In addition, such a pipeline system often suffers from error propagation, where errors made by upstream modules accumulate and get amplified in the downstream ones. To overcome the above limitations of pipeline task-oriented dialogue systems, much research has recently focused on designing end-to-end learning systems with neural network-based models. One key property of a task-oriented dialogue model is that it is required to reason and plan over multiple dialogue turns by aggregating useful information during the conversation. Therefore, sequence-to-sequence models such as encoder-decoder based neural networks have proven to be suitable for both task-oriented and non-task-oriented systems. Serban et al. proposed to build end-to-end dialogue systems using a generative hierarchical recurrent encoder-decoder neural network BIBREF1. Li et al. presented persona-based models which incorporate background information and speaking style of interlocutors into an LSTM-based seq2seq network so as to improve the modeling of human-like behavior BIBREF2. Wen et al. designed an end-to-end trainable neural dialogue model with modularly connected components BIBREF3. Bordes et al. BIBREF4 proposed a task-oriented dialogue model using end-to-end memory networks. At the same time, many works explored different kinds of networks to model the dialogue state, such as copy-augmented networks BIBREF5, gated memory networks BIBREF6, and query-regression networks BIBREF7. These systems do not perform slot-filling or user goal tracking; they rank and select a response from a set of response candidates which are conditioned on the dialogue history. One of the significant efforts in developing end-to-end task-oriented systems is the recent Sequicity framework BIBREF8. This framework also relies on the sequence-to-sequence model and can be optimized with supervised or reinforcement learning. The Sequicity framework introduces the concept of belief span (bspan), which is a text span that tracks the dialogue states at each turn. In this framework, the task-oriented dialogue problem is decomposed into two stages: bspan generation and response generation. This framework has been shown to significantly outperform state-of-the-art pipeline-based methods.
The second line of work in SDS that is related to this work concerns multi-domain dialogue systems. As presented above, one of the key components of a dialogue system is dialogue state tracking, or belief tracking, which maintains the states of the conversation. A state is usually composed of the user's goals, evidence, and information which is accumulated along the sequence of dialogue turns. While the user's goals and evidence are extracted from the user's utterances, the useful information is usually aggregated from external resources such as knowledge bases or dialogue ontologies. Such knowledge bases contain slot type and slot value entries in one or several predefined domains. Most approaches have difficulty scaling up to multiple domains due to the dependency of their model parameters on the underlying knowledge bases. Recently, Ramadan et al. BIBREF9 introduced a novel approach which utilizes semantic similarity between dialogue utterances and knowledge base terms, allowing information to be shared across domains. This method has been shown not only to scale well to multi-domain dialogues, but also to outperform existing state-of-the-art models in single-domain tracking tasks. The problem we are interested in in this work is task-oriented dialogue in mixed-domain settings. This is different from the multi-domain dialogue problem above in several aspects, as follows: First, we investigate the phenomenon of alternating between different dialogue domains in subsequent dialogue turns, where each turn is defined as a pair of user question and machine answer. That is, the domains are mixed between turns. For example, in the first turn, the user requests some information about a restaurant; then in the second turn, he switches to a different domain, for example, asking about the weather at a specific location. In a subsequent turn, he would either switch to a new domain or come back to ask about some other property of the suggested restaurant. This is a realistic scenario which usually happens in practical chatbot applications in our observations. We prefer calling this problem mixed-domain dialogue rather than multi-domain dialogue. Second, we study the effect of the mixed-domain setting in the context of multi-domain dialogue approaches to see how they perform in different experimental scenarios. The main findings of this work include: A specialized state tracking component in multiple domains still plays an important role and gives better results than a state-of-the-art end-to-end task-oriented dialogue system. A combination of a specialized state tracking system and an end-to-end task-oriented dialogue system is beneficial in mixed-domain dialogue systems. Our hybrid system is able to improve the belief tracking accuracy by an average of about 28% absolute on a standard multi-domain dialogue dataset. These experimental results give some useful insights on data preparation and acquisition for the development of the chatbot platform FPT.AI, which is currently deployed for many practical chatbot applications. The remainder of this paper is structured as follows. First, Section 2 briefly discusses the two methods for building dialogue systems that our method relies on. Next, Section 3 presents experimental settings and results. Finally, Section 4 concludes the paper and gives some directions for future work.
682
how was annotation done?
Annotation was done with the help of annotators from Amazon Mechanical Turk on snippets of conversations
Authorship verification (AV) is a research subject in the field of digital text forensics that concerns itself with the question of whether two documents were written by the same person. During the past two decades, an increasing number of AV approaches have been proposed. However, a closer look at the respective studies reveals that the underlying characteristics of these methods are rarely addressed, which raises doubts regarding their applicability in real forensic settings. The objective of this paper is to fill this gap by proposing clear criteria and properties that aim to improve the characterization of existing and future AV approaches. Based on these properties, we conduct three experiments using 12 existing AV approaches, including the current state of the art. The examined methods were trained, optimized and evaluated on three self-compiled corpora, where each corpus focuses on a different aspect of applicability. Our results indicate that part of the methods are able to cope with very challenging verification cases such as 250-character-long informal chat conversations (72.7% accuracy) or cases in which two scientific documents were written at different times, with an average difference of 15.6 years (>75% accuracy). However, we also found that all involved methods are prone to failure in cross-topic verification cases.
Digital text forensics aims at examining the originality and credibility of information in electronic documents and, in this regard, at extracting and analyzing information about the authors of the respective texts BIBREF0 . Among the most important tasks of this field are authorship attribution (AA) and authorship verification (AV), where the former deals with the problem of identifying the most likely author of a document $\mathcal{D}$ with unknown authorship, given a set of texts of candidate authors. AV, on the other hand, focuses on the question of whether $\mathcal{D}$ was in fact written by a known author $\mathcal{A}$ , where only a set of reference texts $\mathcal{R}$ of this author is given. Both disciplines are strongly related to each other, as any AA problem can be broken down into a series of AV problems BIBREF1 . Breaking down an AA problem into multiple AV problems is especially important in scenarios where the presence of the true author of $\mathcal{D}$ in the candidate set cannot be guaranteed. In the past two decades, researchers from different fields including linguistics, psychology, computer science and mathematics proposed numerous techniques and concepts that aim to solve the AV task. Probably due to the interdisciplinary nature of this research field, AV approaches have become more and more diverse, as can be seen in the respective literature. In 2013, for example, Veenman and Li BIBREF2 presented an AV method based on compression, which has its roots in the field of information theory. In 2015, Bagnall BIBREF3 introduced the first deep learning approach that makes use of language modeling, an important key concept in statistical natural language processing. In 2017, Castañeda and Calvo BIBREF4 proposed an AV method that applies a semantic space model through Latent Dirichlet Allocation, a generative statistical model used in information retrieval and computational linguistics. Despite the increasing number of AV approaches, a closer look at the respective studies reveals that only minor attention is paid to their underlying characteristics such as reliability and robustness. These, however, must be taken into account before AV methods can be applied in real forensic settings. The objective of this paper is to fill this gap and to propose important properties and criteria that are not only intended to characterize AV methods, but also allow their assessment in a more systematic manner. By this, we hope to contribute to the further development of this young research field. Based on the proposed properties, we investigate the applicability of 12 existing AV approaches on three self-compiled corpora, where each corpus involves a specific challenge. The rest of this paper is structured as follows. Section 2 discusses the related work that served as an inspiration for our analysis. Section 3 comprises the proposed criteria and properties to characterize AV methods. Section 4 describes the methodology, consisting of the used corpora, examined AV methods, selected performance measures and experiments. Finally, Section 5 concludes the work and outlines future work.
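The decomposition of attribution into verification mentioned above can be made concrete with a small Python sketch; `verify` stands for any AV scorer and is an assumed interface, and the threshold handles the open-set case in which the true author may be missing from the candidate set.

```python
def attribute_author(document, candidates, verify, threshold=0.5):
    """Break an AA problem into one AV problem per candidate author.
    `candidates` maps author -> list of reference texts; `verify(doc, refs)`
    is an assumed AV scorer returning a same-author probability. Returning
    None models the case where no candidate is plausible enough."""
    scores = {a: verify(document, refs) for a, refs in candidates.items()}
    best, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best if best_score >= threshold else None

# Toy usage with a dummy scorer based on shared-vocabulary overlap.
def dummy_verify(doc, refs):
    vocab = set(" ".join(refs).split())
    words = doc.split()
    return sum(w in vocab for w in words) / max(len(words), 1)

who = attribute_author("the ship sailed at dawn",
                       {"A": ["the ship was fast"], "B": ["taxes rose again"]},
                       dummy_verify)
```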
684
How do they measure correlation between the prediction and explanation quality?
They look at the performance accuracy of explanation and the prediction performance
The Coronavirus pandemic has taken the world by storm, as well as social media. As awareness about the ailment increased, so did messages, videos and posts acknowledging its presence. The social networking site Twitter demonstrated a similar effect, with the number of posts related to coronavirus showing unprecedented growth in a very short span of time. This paper presents a statistical analysis of the Twitter messages related to this disease posted since January 2020. Two types of empirical studies have been performed. The first is on word frequency and the second on sentiments of the individual tweet messages. Inspection of the word frequency is useful in characterizing the patterns or trends in the words used on the site. This also reflects on the psychology of Twitter users at this critical juncture. Unigram, bigram and trigram frequencies have been modeled by a power law distribution. The results have been validated by the Sum of Squared Errors (SSE), R2 and Root Mean Square Error (RMSE). High values of R2 and low values of SSE and RMSE support the goodness of fit of this model. Sentiment analysis has been conducted to understand the general attitudes of Twitter users at this time. Tweets both by the general public and by the WHO were part of the corpus. The results showed that the majority of the tweets had a positive polarity and only about 15% were negative.
Catastrophic global circumstances have a pronounced effect on the lives of human beings across the world. The ramifications of such a scenario are experienced in diverse and multiplicative ways, spanning routine tasks, media and news reports, detrimental physical and mental health, and routine conversations. A similar footprint has been left by the global Coronavirus pandemic, particularly since February 2020. The outbreak has not only created havoc in economic conditions, physical health, working conditions, and the manufacturing sector, to name a few, but has also created a niche in the minds of people worldwide. It has had serious repercussions on the psychological state of humans that is most evident now. One of the best mechanisms for capturing human emotions is to analyze the content they post on social media websites like Twitter and Facebook. Not surprisingly, social media is ablaze with myriad content on Coronavirus reflecting facts, fears, numbers, and the overall thoughts dominating people's minds at this time. This paper is an effort towards the analysis of the present textual content posted by people on social media from a statistical perspective. Two techniques have been deployed to undertake the statistical interpretation of text messages posted on Twitter: the first is word frequency analysis and the second sentiment analysis. A well-known and extensively researched statistical tool in quantitative linguistics is word frequency analysis. Determining word frequencies in any document gives a strong idea about the patterns of word usage and the sentimental content of the text. The analysis can be carried out in computational as well as statistical settings. An investigation of the probability distribution of word frequencies extracted from the Twitter text messages posted by different users during the coronavirus outbreak in 2020 has been presented. The power law has been shown to capture patterns in text analysis BIBREF0, BIBREF1. Sentiment analysis is a technique to gauge the sentimental content of a piece of writing; it can help understand attitudes in a text related to a particular subject and assist in inferring its emotional content. Sentiment analysis has been performed on two datasets, one pertaining to tweets by the World Health Organization (WHO) and the other to tweets with 1000 retweets. The sentimental quotient of the tweets has been deduced by computing the positive and negative polarities of the messages. The paper is structured as follows. The next section presents a brief overview of some work in the area of word frequency analysis and the emergence of the power law. Section 3 details the analysis of the Twitter data. Section 4 provides a discussion of the obtained results. Section 5 discusses the scope of sentiment analysis and outlines the methodology of sentiment analysis adopted in this paper. Section 6 presents the results of the sentiment analysis performed on the two datasets mentioned above. The final section concludes the paper.
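A minimal sketch of the frequency-analysis methodology described above: fit $f(r) = C\,r^{-\alpha}$ to rank-frequency data by least squares in log-log space and report SSE, $R^2$ and RMSE as validation statistics. The toy tokens and the exact fitting recipe are illustrative assumptions; the paper does not publish its code.

```python
import numpy as np
from collections import Counter

def power_law_fit(tokens):
    """Fit a power law to rank-frequency data and return the exponent plus
    the goodness-of-fit statistics the paper uses (SSE, R^2, RMSE)."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), float)
    ranks = np.arange(1, len(freqs) + 1)
    # Linear least squares in log-log space: log f = log C - alpha * log r.
    slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    fitted = np.exp(intercept) * ranks ** slope
    resid = freqs - fitted
    sse = float((resid ** 2).sum())
    r2 = 1.0 - sse / float(((freqs - freqs.mean()) ** 2).sum())
    rmse = float(np.sqrt((resid ** 2).mean()))
    return -slope, sse, r2, rmse

tokens = "corona virus corona covid virus corona stay home stay safe corona".split()
alpha, sse, r2, rmse = power_law_fit(tokens)
```

The same routine applies unchanged to bigram and trigram counts by feeding it n-gram tuples instead of single tokens.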
686
What datasets are used to evaluate the introduced method?
They used a dataset from Taobao which contained a collection of conversation records between customers and customer service staff. It contains more than five kinds of conversations, including chit-chat, product and discount consultation, querying delivery progress, and after-sales feedback.
Knowledge bases (KBs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform knowledge base completion or link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This article serves as a brief overview of embedding models of entities and relationships for knowledge base completion, summarizing up-to-date experimental results on standard benchmark datasets FB15k, WN18, FB15k-237, WN18RR, FB13 and WN11.
Before introducing the KB completion task in detail, let us return to the classic Word2Vec example of a “royal” relationship between “ $\mathsf {king}$ ” and “ $\mathsf {man}$ ”, and between “ $\mathsf {queen}$ ” and “ $\mathsf {woman}$ .” As illustrated by $v_{king} - v_{man} \approx v_{queen} - v_{woman}$ , word vectors learned from a large corpus can model relational similarities or linguistic regularities between pairs of words as translations in the projected vector space BIBREF0 , BIBREF1 . Figure 1 shows another example of a relational similarity between word pairs of countries and capital cities: $v_{Japan} - v_{Tokyo} \approx v_{Germany} - v_{Berlin}$ , $v_{Germany} - v_{Berlin} \approx v_{Italy} - v_{Rome}$ , and $v_{Italy} - v_{Rome} \approx v_{Portugal} - v_{Lisbon}$ . Let us consider the country and capital pairs in Figure 1 to be pairs of entities rather than word types. That is, we now represent country and capital entities by low-dimensional and dense vectors. The relational similarity between word pairs presumably captures a “ $\mathsf {is\_capital\_of}$ ” relationship between country and capital entities. Also, we represent this relationship by a translation vector $v_{{is\_capital\_of}}$ in the entity vector space. Thus, we expect: $v_{Tokyo} + v_{{is\_capital\_of}} - v_{Japan} \approx 0$ , $v_{Berlin} + v_{{is\_capital\_of}} - v_{Germany} \approx 0$ , $v_{Rome} + v_{{is\_capital\_of}} - v_{Italy} \approx 0$ , and $v_{Lisbon} + v_{{is\_capital\_of}} - v_{Portugal} \approx 0$ . This intuition inspired the TransE model—a well-known embedding model for KB completion or link prediction in KBs BIBREF2 . Knowledge bases are collections of real-world triples, where each triple or fact $(h, r, t)$ in a KB represents some relation $r$ between a head entity $h$ and a tail entity $t$ . KBs can thus be formalized as directed multi-relational graphs, where nodes correspond to entities and edges linking the nodes encode various kinds of relationship BIBREF3 , BIBREF4 . Here entities are real-world things or objects such as persons, places, organizations, music tracks or movies. Each relation type defines a certain relationship between entities. For example, as illustrated in Figure 2 , the relation type “ $\mathsf {child\_of}$ ” relates person entities with each other, while the relation type “ $\mathsf {born\_in}$ ” relates person entities with place entities. Several KB examples include the domain-specific KB GeneOntology and popular generic KBs such as WordNet BIBREF5 , YAGO BIBREF6 , Freebase BIBREF7 , NELL BIBREF8 and DBpedia BIBREF9 , as well as commercial KBs such as Google's Knowledge Graph, Microsoft's Satori and Facebook's Open Graph. Nowadays, KBs are used in a number of commercial applications, including search engines such as Google, Microsoft's Bing and Facebook's Graph search. They are also useful resources for many NLP tasks such as question answering BIBREF10 , BIBREF11 , word sense disambiguation BIBREF12 , BIBREF13 , semantic parsing BIBREF14 , BIBREF15 and co-reference resolution BIBREF16 , BIBREF17 . A main issue is that even very large KBs, such as Freebase and DBpedia, which contain billions of fact triples about the world, are still far from complete. In particular, in English DBpedia 2014, 60% of person entities lack a place of birth and 58% of scientists do not have a fact about what they are known for BIBREF19 .
In Freebase, 71% of 3 million person entities lack a place of birth, 75% do not have a nationality, while 94% have no facts about their parents BIBREF20 . In terms of a specific application, question answering systems based on incomplete KBs would not provide a correct answer even given a correctly interpreted question. For example, given the incomplete KB in Figure 2 , it would be impossible to answer the question “where was Jane born?”, although the question can be completely matched with existing entity and relation type information (i.e., “ $\mathsf {Jane}$ ” and “ $\mathsf {born\_in}$ ”) in the KB. Consequently, much work has been devoted to knowledge base completion, which performs link prediction in KBs, i.e., it attempts to predict whether a relationship/triple not in the KB is likely to be true, so as to add new triples by leveraging existing triples in the KB BIBREF21 , BIBREF22 , BIBREF23 , BIBREF3 . For example, we would like to predict the missing tail entity in the incomplete triple $\mathsf {(Jane, born\_in, ?)}$ or predict whether the triple $\mathsf {(Jane, born\_in, Miami)}$ is correct or not. Embedding models for KB completion have been proven to give state-of-the-art link prediction performance, in which entities are represented by latent feature vectors while relation types are represented by latent feature vectors and/or matrices and/or third-order tensors BIBREF24 , BIBREF25 , BIBREF2 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF19 , BIBREF30 , BIBREF3 , BIBREF31 , BIBREF32 , BIBREF33 . This article briefly overviews the embedding models for KB completion, and then summarizes up-to-date experimental results on two standard evaluation tasks: i) the entity prediction task—which is also referred to as the link prediction task BIBREF2 —and ii) the triple classification task BIBREF34 .
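To ground the translation intuition above, here is a tiny numpy sketch of TransE-style scoring and tail-entity prediction; the embeddings are random toys, and the L2 norm is one of the standard distance choices (L1 is the other).

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: a triple (h, r, t) is likely true when h + r is
    close to t, so a smaller distance means a better triple."""
    return np.linalg.norm(h + r - t)

def predict_tail(h, r, entity_matrix):
    """Entity (link) prediction: rank all candidate tails for (h, r, ?)."""
    distances = np.linalg.norm(entity_matrix - (h + r), axis=1)
    return np.argsort(distances)          # best candidates first

# Toy embeddings: 5 entities and one relation in 4 dimensions.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 4))
r_born_in = rng.normal(size=4)
ranking = predict_tail(E[0], r_born_in, E)   # e.g. answer (Jane, born_in, ?)
```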
687
How do they incorporate human advice?
by converting human advice to first-order logic format and using it as an input to calculate gradients
Traditional chatbots usually need a large amount of human dialogue data, especially when using supervised machine learning methods. Though they can easily handle single-turn question answering, their performance on multi-turn conversations is usually unsatisfactory. In this paper, we present Lingke, an information retrieval augmented chatbot which is able to answer questions based on a given product introduction document and deal with multi-turn conversations. We introduce a fine-grained pipeline that distills responses from unstructured documents, and attentive sequential context-response matching for multi-turn conversations.
Recently, dialogue and interactive systems have been emerging with huge commercial value BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , especially in the e-commerce field BIBREF6 , BIBREF7 . Building a chatbot mainly faces two challenges: the lack of dialogue data and poor performance on multi-turn conversations. This paper describes a fine-grained information retrieval (IR) augmented multi-turn chatbot - Lingke. It can learn knowledge without human supervision from conversation records or given product introduction documents and generate proper responses, which alleviates the problem of lacking a dialogue corpus to train a chatbot. First, by using Apache Lucene to select the two sentences most relevant to the question and extracting subject-verb-object (SVO) triples from them, a set of candidate responses is generated. With regard to multi-turn conversations, we adopt a dialogue manager, including a self-attention strategy to distill significant signals from utterances, and sequential utterance-response matching to connect responses with conversation utterances, which outperforms all other models in multi-turn response selection. An online demo is available via http://47.96.2.5:8080/ServiceBot/demo/.
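The first stage of the pipeline can be pictured with a small stand-in: the system actually uses Apache Lucene, but a plain TF-IDF overlap score, sketched below, conveys how the top two document sentences relevant to a user question are selected before SVO-triple extraction. The scoring formula and example sentences are illustrative assumptions.

```python
import math
from collections import Counter

def top_k_sentences(question, sentences, k=2):
    """Stand-in for the Lucene retrieval step: score each document sentence
    against the question with a simple TF-IDF overlap and keep the top k
    as the basis for candidate responses."""
    docs = [s.lower().split() for s in sentences]
    q = question.lower().split()
    n = len(docs)
    idf = {w: math.log(n / (1 + sum(w in d for d in docs))) for w in set(q)}
    def score(doc):
        tf = Counter(doc)
        return sum(tf[w] * idf[w] for w in q if w in tf)
    return sorted(sentences, key=lambda s: -score(s.lower().split()))[:k]

intro = ["This phone has a 6.1 inch display.",
         "The battery lasts two days.",
         "Shipping takes three to five days."]
print(top_k_sentences("How long does the battery last?", intro))
```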
688
What affective-based features are used?
affective features provided by different emotion models such as Emolex, EmoSenticNet, Dictionary of Affect in Language, Affective Norms for English Words and Linguistics Inquiry and Word Count
We consider the task of KBP slot filling -- extracting relation information from newswire documents for knowledge base construction. We present our pipeline, which employs Relational Dependency Networks (RDNs) to learn linguistic patterns for relation extraction. Additionally, we demonstrate how several components such as weak supervision, word2vec features, joint learning and the use of human advice, can be incorporated in this relational framework. We evaluate the different components in the benchmark KBP 2015 task and show that RDNs effectively model a diverse set of features and perform competitively with current state-of-the-art relation extraction.
The problem of knowledge base population (KBP) – constructing a knowledge base (KB) of facts gleaned from a large corpus of unstructured data – poses several challenges for the NLP community. Commonly, this relation extraction task is decomposed into two subtasks – entity linking, in which entities are linked to already identified identities within the document or to entities in the existing KB, and slot filling, which identifies certain attributes about a target entity. We present our work-in-progress for KBP slot filling based on our probabilistic logic formalisms and present the different components of the system. Specifically, we employ Relational Dependency Networks BIBREF0 , a formalism that has been successfully used for joint learning and inference from stochastic, noisy, relational data. We compare our RDN system against the current state-of-the-art for KBP to demonstrate the effectiveness of our probabilistic relational framework. Additionally, we show how RDNs can effectively incorporate many popular approaches in relation extraction such as joint learning, weak supervision, word2vec features, and human advice, among others. We provide a comprehensive comparison of settings such as joint learning vs. learning of individual relations, use of weak supervision vs. gold standard labels, using expert advice vs. only learning from data, etc. These questions are extremely interesting from a general machine learning perspective, but also critical to the NLP community. As we show empirically, some of the results, such as human advice being useful for many relations and joint learning being beneficial in cases where the relations are correlated among themselves, are along expected lines. However, some surprising observations include the fact that weak supervision is not as useful as expected and that word2vec features are not as predictive as the other domain-specific features. We first present the proposed pipeline with all the different components of the learning system. Next we present the set of 14 relations that we learn before presenting the experimental results. We finally discuss the results of these comparisons before concluding by presenting directions for future research.
692
How big is performance improvement proposed methods are used?
Data augmentation (es) improved Adv es by 20% compared to the baseline. Data augmentation (cs) improved Adv cs by 16.5% compared to the baseline. Data augmentation (cs+es) improved both Adv cs and Adv es by at least 10% compared to the baseline. All models show improvements over the adversarial sets.
The latest developments in neural models have connected the encoder and decoder through a self-attention mechanism. In particular, Transformer, which is based solely on self-attention, has led to breakthroughs in Natural Language Processing (NLP) tasks. However, the multi-head attention mechanism, a key component of Transformer, limits the effective deployment of the model in limited-resource settings. In this paper, based on the ideas of tensor decomposition and parameter sharing, we propose a novel self-attention model (namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD). We test and verify the proposed attention method on three language modeling tasks (i.e., PTB, WikiText-103 and One-Billion) and a neural machine translation task (i.e., WMT-2016 English-German). Multi-linear attention can not only largely compress the model parameters but also obtain performance improvements, compared with a number of language modeling approaches, such as Transformer, Transformer-XL, and Transformer with tensor-train decomposition.
In NLP, neural language model pre-training has been shown to be effective for improving many tasks BIBREF0 , BIBREF1 . Transformer BIBREF2 is based solely on the attention mechanism, dispensing with recurrence and convolutions entirely. At present, this model has received extensive attention and plays a key role in many neural language models, such as BERT BIBREF0 , GPT BIBREF3 and Universal Transformer BIBREF4 . However, in Transformer-based models, the large number of model parameters may cause problems both in training and in deploying the model in a limited-resource setting. Thus, the compression of large neural pre-trained language models has been an essential problem in NLP research. In the literature, several compression methods BIBREF5 , BIBREF6 , BIBREF7 have been proposed. When the vocabulary is large, the corresponding weight matrices can be enormous. Tensorized embedding (TE) BIBREF5 uses tensor-train decomposition BIBREF8 to compress the embedding layers in Transformer-XL BIBREF9 . In TE BIBREF5 , researchers only study the compression of the input embedding layers, rather than the attention layers. Recently, Block-Term Tensor Decomposition (BTD) BIBREF10 has been used to compress recurrent neural networks (RNNs) BIBREF6 . Ye et al. BIBREF6 propose a compact, flexible structure to deal with the large number of model parameters caused by high-dimensional inputs when training recurrent neural networks (RNNs). This method greatly reduces the parameters of RNNs and improves their training efficiency. Still, the model only considers compressing the input layer, using the idea of low-rank approximation. On the other hand, some methods BIBREF7 , BIBREF11 aim to develop a specific structure for the weight matrices and are successful in compressing pre-trained models. However, the new structure after compression cannot be integrated back into the model. In Transformer, the multi-head attention is a key part and is constructed from a large number of parameters. Specifically, Vaswani et al. BIBREF2 compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$ , while the keys and values are also packed together into matrices $K$ and $V$ , respectively. The attention function then applies a non-linear $\mathrm{softmax}$ function over the three matrices $Q$ , $K$ and $V$ . There are two challenges in finding a high-quality compression method for the multi-head attention in Transformer. First, the self-attention function in Transformer is a non-linear function, which makes it difficult to compress. In order to address this challenge, we first prove that the output of the attention function of the self-attention model BIBREF2 can be linearly represented by a group of orthonormal basis vectors. $Q$ , $K$ and $V$ can be considered as factor matrices. Then, by initializing a low-rank core tensor, we use Tucker decomposition BIBREF12 , BIBREF13 to reconstruct a new attention representation. In order to construct the multi-head mechanism and compress the model, we use Block-Term Tensor Decomposition (BTD), which is a combination of CP decomposition BIBREF14 and Tucker decomposition BIBREF12 . The difference is that the three factor matrices $Q,~K$ and $V$ are shared in constructing each 3-order block tensor, which leads to a large reduction in parameters. The second challenge is that the attention model after compression cannot be directly integrated into the encoder and decoder framework of Transformer BIBREF2 , BIBREF9 . In order to address this challenge, there are three steps, as follows.
First, the average of the block tensors is computed; second, a set of matrices is obtained by tensor split; third, the concatenation of these matrices serves as the input to the next layer of the Transformer. After that, the model can be integrated into the encoder and decoder framework of Transformer BIBREF2 , BIBREF9 and trained end-to-end. Moreover, we also prove that the 3-order tensor can reconstruct the scaled dot-product attention in Transformer by a sum over a particular dimension. Our method combines two ideas, low-rank approximation and parameter sharing, at the same time, and therefore achieves higher compression ratios. Although the self-attention (i.e., scaled dot-product attention) in Transformer can be reconstructed in this way, we do not reconstruct it and instead choose to split the 3-order tensor (the output of Multi-linear attention), which proved helpful for accuracy in our experiments. The major contributions of this paper are as follows. To validate the benefits of our model, we test it on two NLP tasks, namely language modeling and neural machine translation. In our experiments, the multi-head attention can be replaced by the proposed model, namely multi-linear attention. We observe that the standard multi-head attention can be compressed with higher compression ratios on the One-Billion dataset. As a result, we show that multi-linear attention not only considerably reduces the number of parameters, but also achieves promising experimental results, especially on language modeling tasks.
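To make the construction concrete, the following is a minimal sketch of a single Tucker-style attention block and the multi-block average, under our own assumptions about shapes; the softmax normalisation and the exact tensor-split step used in the paper are simplified away.

```python
# Minimal sketch of a single "Tucker attention" block and its multi-block
# average; toy shapes, no softmax, and a crude split are our assumptions.
import torch

def block_attention(G, Q, K, V):
    # T[i, j, k] = sum_{p,q,r} G[p, q, r] * Q[i, p] * K[j, q] * V[k, r]
    return torch.einsum('pqr,ip,jq,kr->ijk', G, Q, K, V)

n, d, h = 5, 8, 3                                  # seq length, latent dim, blocks
Q, K, V = (torch.randn(n, d) for _ in range(3))    # shared factor matrices
cores = [0.1 * torch.randn(d, d, d) for _ in range(h)]  # one core tensor per block

# Multi-linear attention: blocks share Q, K, V but own their core tensor;
# averaging the block tensors is the first of the three integration steps.
T = torch.stack([block_attention(G, Q, K, V) for G in cores]).mean(dim=0)
print(T.shape)             # (n, n, n): a 3-order tensor over sequence positions

# Summing over one mode collapses T back to a matrix, mirroring the claim
# that scaled dot-product attention is recoverable by a sum over a dimension.
print(T.sum(dim=1).shape)  # (n, n)
```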
694
By how much does transfer learning improve performance on this task?
In task 1, the best transfer learning strategy improves the F1 score by 4.4% and the accuracy score by 3.3%; in task 2, the best transfer learning strategy improves the F1 score by 2.9% and the accuracy score by 1.7%.
We study few-shot learning in natural language domains. Compared to many existing works that apply either metric-based or optimization-based meta-learning to image domains with low inter-task variance, we consider a more realistic setting, where tasks are diverse. However, this imposes tremendous difficulties on existing state-of-the-art metric-based algorithms, since a single metric is insufficient to capture complex task variations in the natural language domain. To alleviate the problem, we propose an adaptive metric learning approach that automatically determines the best weighted combination from a set of metrics obtained from meta-training tasks for a newly seen few-shot task. Extensive quantitative evaluations on real-world sentiment analysis and dialog intent classification datasets demonstrate that the proposed method performs favorably against state-of-the-art few-shot learning algorithms in terms of predictive accuracy. We make our code and data available for further study.
Few-shot learning (FSL) BIBREF0 , BIBREF1 , BIBREF2 aims to learn classifiers from few examples per class. Recently, deep learning has been successfully exploited for FSL via learning meta-models from a large number of meta-training tasks. These meta-models can then be used for rapid adaptation to the target/meta-testing tasks that only have few training examples. Examples of such meta-models include: (1) metric-/similarity-based models, which learn contextual, task-specific similarity measures BIBREF3 , BIBREF4 , BIBREF5 ; and (2) optimization-based models, which receive as input the gradients from an FSL task and predict either model parameters or parameter updates BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 . In the past, FSL has mainly considered image domains, where all tasks are often sampled from one huge collection of data, such as Omniglot BIBREF10 and ImageNet BIBREF4 , so that tasks come from a single domain and are thus related. Due to such a simplified setting, almost all previous works employ a common meta-model (metric-/optimization-based) for all few-shot tasks. However, this setting is far from the realistic scenarios of many real-world applications of few-shot text classification. For example, on an enterprise AI cloud service, many clients submit various tasks to train text classification models for business-specific purposes. The tasks could be classifying customers' comments or opinions on different products/services, monitoring public reactions to different policy changes, or determining users' intents in different types of personal assistant services. As most of the clients cannot collect enough data, their submitted tasks form a few-shot setting. Also, these tasks are significantly diverse, so a common metric is insufficient to handle all of them. We consider a more realistic FSL setting where tasks are diverse. In such a scenario, the optimal meta-model may vary across tasks. Our solution is based on the metric-learning approach BIBREF5 , and the key idea is to maintain multiple metrics for FSL. The meta-learner selects and combines multiple metrics for learning the target task, using task clustering on the meta-training tasks. During meta-training, we propose to first partition the meta-training tasks into clusters, making the tasks in each cluster likely to be related. Then, within each cluster, we train a deep embedding function as the metric. This ensures that a common metric is only shared across tasks within the same cluster. Further, during meta-testing, each target FSL task is assigned a task-specific metric, which is a linear combination of the metrics defined by the different clusters. In this way, diverse few-shot tasks can derive different metrics from the previous learning experience. The key component of the proposed FSL framework is the task clustering algorithm. Previous works BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 mainly focused on convex objectives and assumed that the number of classes is the same across different tasks (e.g., binary classification is often considered). To make task clustering (i) compatible with deep networks and (ii) able to handle tasks with various numbers of labels, we propose a matrix-completion-based task clustering algorithm. The algorithm utilizes task similarity measured by cross-task transfer performance, denoted by the matrix $\textbf {S}$ . The $(i,j)$ -entry of $\textbf {S}$ is the estimated accuracy obtained by adapting the learned representations of the $i$ -th (source) task to the $j$ -th (target) task.
We rely on matrix completion to deal with missing and unreliable entries in $\textbf {S}$ and finally apply spectral clustering to generate the task partitions. To the best of our knowledge, our work is the first to address the diverse few-shot learning problem and to report results on real-world few-shot text classification problems. The experimental results show that the proposed algorithm provides significant gains on few-shot sentiment classification and dialog intent classification tasks. They support the idea of using multiple meta-models (metrics) to handle diverse FSL tasks, as well as the proposed task clustering algorithm for automatically detecting related tasks.
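As an illustration of the clustering step, the following is a minimal sketch (our own toy numbers, with a crude column-mean fill standing in for real matrix completion) of turning a noisy cross-task transfer matrix $\textbf {S}$ into task partitions:

```python
# S[i, j] holds the (possibly noisy) accuracy of adapting task i's
# representation to task j; unreliable entries are masked out and filled
# here by a simple column mean instead of true matrix completion.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n_tasks = 8
S = rng.uniform(0.5, 0.9, size=(n_tasks, n_tasks))  # cross-task transfer accuracies
mask = rng.uniform(size=S.shape) > 0.2               # observed / reliable entries

# Placeholder for matrix completion: fill missing entries with column means.
observed = np.where(mask, S, np.nan)
S_filled = np.where(mask, S, np.nanmean(observed, axis=0, keepdims=True))

# Symmetrise and cluster; each cluster later trains its own metric.
A = (S_filled + S_filled.T) / 2
labels = SpectralClustering(n_clusters=3, affinity='precomputed',
                            random_state=0).fit_predict(A)
print(labels)   # cluster assignment per meta-training task
```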
695
How much do they outperform previous state-of-the-art?
On subtask 3, the best proposed model achieves an F1 score of 92.18, compared to the best previous F1 score of 88.58. On subtask 4, the best proposed model achieves 85.9, 89.9 and 95.6, compared to the best previous results of 82.9, 84.0 and 89.9 on 4-way, 3-way and binary aspect polarity, respectively.
We investigate different strategies for automatic offensive language classification on German Twitter data. For this, we employ a sequentially combined BiLSTM-CNN neural network. Based on this model, three transfer learning tasks to improve the classification performance with background knowledge are tested. We compare 1. Supervised category transfer: social media data annotated with near-offensive language categories, 2. Weakly-supervised category transfer: tweets annotated with emojis they contain, 3. Unsupervised category transfer: tweets annotated with topic clusters obtained by Latent Dirichlet Allocation (LDA). Further, we investigate the effect of three different strategies to mitigate negative effects of 'catastrophic forgetting' during transfer learning. Our results indicate that transfer learning in general improves offensive language detection. Best results are achieved from pre-training our model on the unsupervised topic clustering of tweets in combination with thematic user cluster information.
User-generated content in forums, blogs, and social media not only contributes to a deliberative exchange of opinions and ideas but is also contaminated with offensive language such as threats and discrimination against people, swear words or blunt insults. The automatic detection of such content can be a useful support for moderators of public platforms as well as for users who could receive warnings or would be enabled to filter unwanted content. Although this topic has now been studied for more than two decades, there has so far been little work on offensive language detection for German social media content. In this regard, we present a new approach to detect offensive language as defined in the shared task of the GermEval 2018 workshop. For our contribution to the shared task, we focus on the question of how to apply transfer learning to neural-network-based text classification systems. In Germany, the growing interest in hate speech analysis and detection is closely related to recent political developments such as the rise of right-wing populism and societal reactions to the ongoing influx of refugees seeking asylum BIBREF0 . Content analysis studies such as InstituteforStrategicDialogue.2018 have shown that a majority of hate speech comments on German Facebook are authored by a rather small group of very active users (5% of all accounts engaging in hate speech). The findings suggest that even such small groups are able to severely disturb social media debates for large audiences. From the perspective of natural language processing, the task of automatically detecting offensive language in social media is complex for three major reasons. First, we can expect `atypical' language data due to incorrect spelling, false grammar and non-standard language variations such as slang terms, intensifiers, or emojis/emoticons. For the automatic detection of offensive language, it is not quite clear whether these irregularities should be treated as `noise' or as a signal. Second, the task cannot be reduced to an analysis of word-level semantics only, e.g. spotting offensive key terms in the data. Instead, the assessment of whether or not a post contains offensive language can be highly dependent on sentence- and discourse-level semantics, as well as subjective criteria. In a crowd-sourcing experiment on `hate speech' annotation, Ross.2016 achieved only very low inter-rater agreement between annotators. Offensive language is probably somewhat easier to achieve agreement on, but sentence-level semantics and context or `world knowledge' still remain important. Third, there is a lack of a common definition of the actual phenomenon to tackle. Published studies focus on `hostile messages', `flames', `hate speech', `discrimination', `abusive language', or `offensive language'. Although certainly overlapping, each of these categories has been operationalized in a slightly different manner. Since category definitions do not match properly, publicly available annotated datasets and language resources for one task cannot be used directly to train classifiers for any respective other task.
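A hedged sketch of the pre-train/fine-tune recipe the abstract compares follows; the module names, sizes and the freezing strategy below are our own simplifications, not the system's exact configuration.

```python
# Pre-train a BiLSTM-CNN on an auxiliary labeling task (e.g. LDA topic
# clusters), then swap the head and freeze the feature extractor — one of
# several strategies against catastrophic forgetting.
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.conv = nn.Conv1d(2 * hidden, 64, kernel_size=3, padding=1)
        self.clf = nn.Linear(64, n_classes)

    def forward(self, x):
        h, _ = self.bilstm(self.emb(x))                    # (B, T, 2*hidden)
        c = self.conv(h.transpose(1, 2)).max(dim=2).values # max-pool over time
        return self.clf(c)

model = BiLSTMCNN(vocab_size=20000, n_classes=50)  # pre-train on topic clusters
# ... pre-training loop on the auxiliary task goes here ...

model.clf = nn.Linear(64, 2)                       # new head: offensive vs. other
for p in list(model.emb.parameters()) + list(model.bilstm.parameters()):
    p.requires_grad = False                        # freeze the lower layers
```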
698
How big is the provided treebank?
1,448 sentences more than the dataset from Bhat et al. (2017)
Automated text generation has been applied broadly in many domains such as marketing and robotics, and has been used to create chatbots, write product reviews and compose poetry. The ability to synthesize text, however, presents many potential risks, while access to the technology required to build generative models is becoming increasingly easy. This work is aligned with the efforts of the United Nations and other civil society organisations to highlight potential political and societal risks arising from the malicious use of text generation software, and their potential impact on human rights. As a case study, we present the findings of an experiment to generate remarks in the style of political leaders by fine-tuning a pretrained AWD-LSTM model on a dataset of speeches made at the UN General Assembly. This work highlights the ease with which this can be accomplished, as well as the threats of combining these techniques with other technologies.
The rise of Artificial Intelligence (AI) brings many potential benefits to society, as well as significant risks. These risks take a variety of forms, from autonomous weapons and sophisticated cyber-attacks to the more subtle techniques of societal manipulation. In particular, the threat this technology poses to maintaining peace and political stability is especially relevant to the United Nations (UN) and other international organisations. In terms of applications of AI, several major risks to peace and political stability have been identified, including: the use of automated surveillance platforms to suppress dissent; fake news reports with realistic fabricated video and audio; and the manipulation of information availability BIBREF0 . While research into computer-aided text generation has been ongoing for many years, only more recently have the capabilities in data acquisition, computing and new theory come into existence that enable the generation of highly accurate speech in every major language. Moreover, this availability of resources means that training a customised language model requires minimal investment and can easily be performed by an individual actor. There are also an increasing number of organisations publishing models trained on vast amounts of data (such as OpenAI's GPT2-117M model BIBREF1 ), in many cases removing the need to train from scratch what would still be considered highly complex and intensive models. Language models have many positive applications, including virtual assistants for engaging the elderly or people with disabilities, fraud prevention and hate speech recognition systems, yet the ability of these models to generate text can be used with malicious intent. Being able to synthesize and publish text in a particular style could have detrimental consequences, augmenting those already witnessed from the dissemination of fake news articles and generated videos, or `deep fakes'. For instance, there have been examples of AI-generated videos that depict politicians (Presidents Trump and Putin among others) making statements they did not truly make BIBREF2 . The potential harm this technology can cause is clear, and in combination with automatic speech generation it presents even greater challenges. Furthermore, by utilising social media platforms, such textual content can now be disseminated widely and rapidly, and used for propaganda, disinformation and personal harm on a large scale. In this work, we present a case study highlighting the potential for AI models to generate realistic text in an international political context, and the ease with which this can be achieved (Section SECREF2 ). From this, we highlight the implications of these results for the political landscape, and from the point of view of promoting peace and security (Section SECREF3 ). We end with recommendations for the scientific and policy communities to aid in the mitigation of the possible negative consequences of this technology (Section SECREF4 ).
699
What dataset was used?
The dataset from a joint ADAPT-Microsoft project
Code-switching is a phenomenon of mixing grammatical structures of two or more languages under varied social constraints. Code-switching data differ so radically from the benchmark corpora used in the NLP community that the application of standard technologies to these data degrades their performance sharply. Unlike standard corpora, these data often need to go through additional processes such as language identification, normalization and/or back-transliteration for their efficient processing. In this paper, we investigate these indispensable processes and other problems associated with the syntactic parsing of code-switching data and propose methods to mitigate their effects. In particular, we study dependency parsing of code-switching data of Hindi and English multilingual speakers from Twitter. We present a treebank of Hindi-English code-switching tweets under the Universal Dependencies scheme and propose a neural stacking model for parsing that efficiently leverages the part-of-speech tag and syntactic tree annotations in the code-switching treebank and the preexisting Hindi and English treebanks. We also present normalization and back-transliteration models with a decoding process tailored to code-switching data. Results show that our neural stacking parser is 1.5% LAS points better than the augmented parsing model and that our decoding process improves results by 3.8% LAS points over the first-best normalization and/or back-transliteration.
Code-switching (henceforth CS) is the juxtaposition, within the same speech utterance, of grammatical units such as words, phrases, and clauses belonging to two or more different languages BIBREF0 . The phenomenon is prevalent in multilingual societies where speakers share more than one language and is often prompted by multiple social factors BIBREF1 . Moreover, code-switching is most prominent in colloquial language use in daily conversations, both online and offline. Most of the benchmark corpora used in NLP for training and evaluation are based on edited monolingual texts which strictly adhere to the norms of a language related, for example, to orthography, morphology, and syntax. Social media data in general, and CS data in particular, deviate from these norms implicitly set forth by the choice of corpora used in the community. This is the reason why current technologies often perform miserably on social media data, be it monolingual or mixed-language data BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . CS data offer additional challenges over monolingual social media data, as the phenomenon of code-switching transforms the data in many ways, for example, by creating new lexical forms and syntactic structures by mixing the morphology and syntax of two languages, making it much more diverse than any monolingual corpus BIBREF4 . As current computational models fail to cater to the complexities of CS data, there is often a need for dedicated techniques tailored to its specific characteristics. Given the peculiar nature of CS data, it has been widely studied in the linguistics literature BIBREF8 , BIBREF0 , BIBREF1 , and more recently, there has been a surge in studies concerning CS data in NLP as well BIBREF9 , BIBREF3 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . Besides the individual computational works, a series of shared tasks and workshops on preprocessing and shallow syntactic analysis of CS data have also been conducted at multiple venues such as Empirical Methods in NLP (EMNLP 2014 and 2016), the International Conference on NLP (ICON 2015 and 2016) and the Forum for Information Retrieval Evaluation (FIRE 2015 and 2016). Most of these works have attempted to address preliminary tasks such as language identification, normalization and/or back-transliteration, as these data often need to go through these additional processes for their efficient processing. In this paper, we investigate these indispensable processes and other problems associated with the syntactic parsing of code-switching data and propose methods to mitigate their effects. In particular, we study dependency parsing of Hindi-English code-switching data of multilingual Indian speakers from Twitter. Hindi-English code-switching presents an interesting scenario for the parsing community. Mixing among typologically diverse languages intensifies structural variations, which makes parsing more challenging. For example, there will be many sentences containing: (1) both SOV and SVO word orders, (2) both head-initial and head-final genitives, (3) both prepositional and postpositional phrases, etc. More importantly, neither the Hindi nor the English treebank provides any training instances for these mixed structures within individual sentences. In this paper, we present the first code-switching treebank that provides the syntactic annotations required for parsing mixed-grammar syntactic structures.
Moreover, we present a parsing pipeline designed explicitly for Hindi-English CS data. The pipeline comprises several modules, namely a language identification system, a back-transliteration system, and a dependency parser; a toy sketch of how such a cascade composes is given below. The gist of these modules and our overall research contributions are listed as follows:
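A rough, self-contained stub of the cascade (all function names and the tiny lookup tables are our own placeholders, not the paper's components):

```python
# Each stage is stubbed out: language ID tags romanised-Hindi tokens,
# back-transliteration maps them to Devanagari, and a dummy parser stands
# in for the neural stacking dependency parser.
def language_id(tokens):
    return ['hi' if t in {'nahi', 'kya', 'hai'} else 'en' for t in tokens]

def back_transliterate(token):
    # real systems decode over normalization/transliteration candidates jointly
    return {'nahi': 'नहीं', 'kya': 'क्या', 'hai': 'है'}.get(token, token)

def parse(tokens, langs):
    # stub: attach everything to the first token
    return [(i, 0, 'root') if i == 1 else (i, 1, 'dep')
            for i in range(1, len(tokens) + 1)]

tweet = 'trust nahi hota kisi par'.split()
langs = language_id(tweet)
normalised = [back_transliterate(t) if l == 'hi' else t
              for t, l in zip(tweet, langs)]
print(parse(normalised, langs))
```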
700
What are the citation intent labels in the datasets?
Background, extends, uses, motivation, compare/contrast, and future work for the ACL-ARC dataset. Background, method, result comparison for the SciCite dataset.
We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data. It is based on a traditional Support Vector Machine classifier exploiting multilingual word embeddings and character n-grams. Our model is simple, easily extendable yet very effective, overall ranking 1st (out of 12 teams) in the IJCNLP 2017 shared task on customer feedback analysis in four languages: English, French, Japanese and Spanish.
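A minimal re-creation of the recipe this abstract describes, under our own assumptions (toy data, and random vectors standing in for real multilingual word embeddings):

```python
# Character n-gram TF-IDF features concatenated with averaged multilingual
# word embeddings, fed to a linear SVM trained on all languages at once.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer

texts = ['the app crashes on start', "l'application plante", 'quiero una factura']
labels = ['bug', 'bug', 'request']
emb = {w: np.random.randn(50) for t in texts for w in t.split()}  # embedding stand-in

char_ngrams = TfidfVectorizer(analyzer='char', ngram_range=(2, 4))
X_char = char_ngrams.fit_transform(texts).toarray()
X_emb = np.vstack([np.mean([emb[w] for w in t.split()], axis=0) for t in texts])
X = np.hstack([X_char, X_emb])   # one feature space shared by all languages

clf = LinearSVC().fit(X, labels)
print(clf.predict(X[:1]))
```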
Customer feedback analysis is the task of classifying short text messages into a set of predefined labels (e.g., bug, request). It is an important step towards effective customer support. However, a real bottleneck for successful classification of customer feedback in a multilingual environment is the limited transferability of such models, i.e., typically each time a new language is encountered a new model is built from scratch. This is clearly impractical, as maintaining separate models is cumbersome, besides the fact that existing annotations are simply not leveraged. In this paper we present our submission to the IJCNLP 2017 shared task on customer feedback analysis, in which data from four languages was available (English, French, Japanese and Spanish). Our goal was to build a single system for all four languages, and compare it to the traditional approach of creating separate systems for each language. We hypothesize that a single system is beneficial, as it can provide positive transfer, particularly for the languages for which less data is available. The contributions of this paper are:
703
How is quality of annotation measured?
Annotators went through various phases to make sure their annotations did not deviate from the mean.
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make XLM-R code, data, and models publicly available.
The goal of this paper is to improve cross-lingual language understanding (XLU) by carefully studying the effects of training unsupervised cross-lingual representations at a very large scale. We present XLM-R, a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art performance on cross-lingual classification, sequence labeling and question answering. Multilingual masked language models (MLM) like mBERT BIBREF0 and XLM BIBREF1 have pushed the state-of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer models BIBREF2 on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks including cross-lingual natural language inference BIBREF3, BIBREF4, BIBREF5, question answering BIBREF6, BIBREF7, and named entity recognition BIBREF8, BIBREF9. However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale, especially for lower-resource languages. In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multilingual language models at scale, inspired by recent monolingual scaling efforts BIBREF10. We measure the trade-off between high-resource and low-resource languages and the impact of language sampling and vocabulary size. The experiments expose a trade-off as we scale the number of languages for a fixed model capacity: more languages lead to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. We refer to this trade-off as the curse of multilinguality, and show that it can be alleviated by simply increasing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets. Our best model XLM-RoBERTa (XLM-R) outperforms mBERT on cross-lingual classification by up to 21% accuracy on low-resource languages like Swahili and Urdu. It outperforms the previous state of the art by 3.9% average accuracy on XNLI, 2.1% average F1-score on Named Entity Recognition, and 8.4% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine-tuning on the GLUE and XNLI benchmarks, where XLM-R obtains results competitive with state-of-the-art monolingual models, including RoBERTa BIBREF10. These results demonstrate, for the first time, that it is possible to have a single large model for all languages without sacrificing per-language performance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource language understanding.
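To illustrate the language-sampling factor mentioned above, here is a small sketch of exponent-based sampling as used in XLM-style training; the corpus sizes below are made up, and alpha = 0.3 is the value the paper reports for XLM-R:

```python
# Rescale raw corpus sizes with an exponent alpha to up-sample
# low-resource languages during multilingual MLM pretraining.
import numpy as np

sizes = {'en': 3.0e11, 'fr': 5.6e10, 'sw': 2.7e8, 'ur': 7.3e8}  # tokens (illustrative)
alpha = 0.3

p = np.array(list(sizes.values()), dtype=float)
p /= p.sum()        # q_i: empirical language frequencies
p = p ** alpha
p /= p.sum()        # p_i ∝ q_i^alpha: smoothed sampling distribution

for lang, prob in zip(sizes, p):
    print(f'{lang}: {prob:.3f}')   # low-resource languages gain probability mass
```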
705
What accuracy score do they obtain?
the best performing model obtained an accuracy of 0.86
Knowledge graphs have emerged as an important model for studying complex multi-relational data. This has given rise to the construction of numerous large-scale but incomplete knowledge graphs encoding information extracted from various resources. An effective and scalable approach to jointly learn over multiple graphs and eventually construct a unified graph is a crucial next step for the success of knowledge-based inference for many downstream applications. To this end, we propose LinkNBed, a deep relational learning framework that learns entity and relationship representations across multiple graphs. We identify entity linkage across graphs as a vital component to achieve our goal. We design a novel objective that leverages entity linkage and build an efficient multi-task training procedure. Experiments on link prediction and entity linkage demonstrate substantial improvements over the state-of-the-art relational learning approaches.
Reasoning over multi-relational data is a key concept in Artificial Intelligence, and knowledge graphs have appeared at the forefront as an effective tool to model such multi-relational data. Knowledge graphs have found increasing importance due to their wide range of important applications such as information retrieval BIBREF0 , natural language processing BIBREF1 , recommender systems BIBREF2 , question answering BIBREF3 and many more. This has led to increased efforts in constructing numerous large-scale Knowledge Bases (e.g. Freebase BIBREF4 , DBpedia BIBREF5 , Google's Knowledge Graph BIBREF6 , Yago BIBREF7 and NELL BIBREF8 ) that can cater to these applications by representing information available on the web in relational format. All knowledge graphs share the common drawbacks of incompleteness and sparsity, and hence most existing relational learning techniques focus on using the observed triplets in an incomplete graph to infer unobserved triplets for that graph BIBREF9 . Neural embedding techniques that learn vector space representations of entities and relationships have achieved remarkable success in this task. However, these techniques only focus on learning from a single graph. In addition to the incompleteness property, these knowledge graphs also share a set of overlapping entities and relationships with varying information about them. This makes a compelling case to design a technique that can learn over multiple graphs and eventually aid in constructing a unified giant graph out of them. While research on learning representations over a single graph has progressed rapidly in recent years BIBREF10 , BIBREF6 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , there is a conspicuous lack of principled approaches to tackle the unique challenges involved in learning across multiple graphs. One approach to multi-graph representation learning could be to first solve the graph alignment problem to merge the graphs and then use existing relational learning methods on the merged graph. Unfortunately, graph alignment is an important but still unsolved problem, and the several techniques addressing its challenges BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 work only in limited settings. The key challenges for the graph alignment problem emanate from the fact that real-world data are noisy and intricate in nature. Noisy or sparse data make it difficult to learn robust alignment features, and data abundance leads to computational challenges due to the combinatorial permutations needed for alignment. These challenges are compounded in multi-relational settings due to the heterogeneous nodes and edges in such graphs. Recently, deep learning has shown significant impact in learning useful information over noisy, large-scale and heterogeneous graph data BIBREF19 . We therefore posit that combining the graph alignment task with deep representation learning across multi-relational graphs has the potential to induce a synergistic effect on both tasks. Specifically, we identify that a key component of the graph alignment process—entity linkage—also plays a vital role in learning across graphs. For instance, the embeddings learned over two knowledge graphs for an actor should be closer to one another compared to the embeddings of all the other entities. Similarly, the entities that are already aligned together across the two graphs should produce better embeddings due to the shared context and data.
To model this phenomenon, we propose LinkNBed, a novel deep learning framework that jointly performs representation learning and the graph linkage task. To achieve this, we identify the key challenges involved in the learning process and make the following contributions to address them (a schematic sketch of the resulting joint objective follows):
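Schematically, the joint multi-task objective can be pictured as a relational margin loss plus a linkage term; all tensors and the combination weight below are our own assumptions, not the paper's exact formulation.

```python
# A translational relational score per graph plus a linkage term pulling
# the embeddings of linked entities together.
import torch

def relational_loss(h, r, t, h_neg):
    pos = torch.norm(h + r - t, dim=-1)
    neg = torch.norm(h_neg + r - t, dim=-1)
    return torch.relu(1.0 + pos - neg).mean()      # margin ranking loss

def linkage_loss(e_g1, e_g2):
    return torch.norm(e_g1 - e_g2, dim=-1).mean()  # linked entities should coincide

d = 32
h, r, t, h_neg = (torch.randn(16, d, requires_grad=True) for _ in range(4))
e1, e2 = torch.randn(8, d, requires_grad=True), torch.randn(8, d, requires_grad=True)

# Multi-task training: one backward pass over the combined objective.
loss = relational_loss(h, r, t, h_neg) + 0.5 * linkage_loss(e1, e2)
loss.backward()
print(float(loss))
```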
707
What percentage of improvement in inference speed is obtained by the proposed method over the newest state-of-the-art methods?
Across the 4 datasets, the best-performing proposed model (CNN-based) achieved an average improvement of 363% in inference speed over the state-of-the-art method (LR-CNN)
Africa has over 2000 languages. Despite this, African languages account for a small portion of the available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, poor discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers, and no benchmarks to compare techniques against. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.
2144 of all 7111 (30.15%) living languages today are African languages BIBREF1. But only a small portion of the linguistic resources for NLP research are built for African languages. As a result, there are only a few NLP publications: across all ACL conferences in 2019, only 5 out of 2695 (0.19%) author affiliations were based in Africa BIBREF2. This stark contrast between linguistic richness and poor representation of African languages in NLP is caused by multiple factors. First of all, African societies do not see hope for African languages being accepted as primary means of communication BIBREF3. As a result, few efforts to fund NLP or translation for African languages exist, despite the potential impact. This lack of focus has had a ripple effect. The few existing resources are not easily discoverable, are published in closed journals or non-indexed local conferences, or remain undigitized, surviving only in private collections BIBREF4. This opaqueness impedes researchers' ability to reproduce and build upon existing results, and to develop, compete on and progress public benchmarks BIBREF5. African researchers are disproportionately affected by socio-economic factors, and are often hindered by visa issues BIBREF6 and the costs of flights from and within Africa BIBREF7. They are distributed and disconnected on the continent, and rarely have the opportunity to commune, collaborate and share. Furthermore, African languages are of high linguistic complexity and variety, with diverse morphologies and phonologies, including lexical and grammatical tonal patterns, and many are practiced within multilingual societies with frequent code-switching BIBREF8, BIBREF9, BIBREF10. Because of this complexity, cross-lingual generalization from success in languages like English is not guaranteed.
708
What is the metric that is measures in this paper?
error rate in a minimal pair ABX discrimination task
Recently, many works have tried to utilize word lexicons to augment the performance of Chinese named entity recognition (NER). As a representative work in this line, Lattice-LSTM \cite{zhang2018chinese} has achieved new state-of-the-art performance on several benchmark Chinese NER datasets. However, Lattice-LSTM suffers from a complicated model architecture, resulting in low computational efficiency. This heavily limits its application in many industrial areas, which require real-time NER responses. In this work, we ask the question: can we simplify the usage of the lexicon and, at the same time, achieve performance comparable to Lattice-LSTM for Chinese NER? ::: Starting from this question and motivated by the idea of Lattice-LSTM, we propose a concise but effective method to incorporate the lexicon information into the vector representations of characters. In this way, our method avoids introducing a complicated sequence modeling architecture to model the lexicon information. Instead, it only needs to subtly adjust the character representation layer of the neural sequence model. An experimental study on four benchmark Chinese NER datasets shows that our method can achieve a much faster inference speed and comparable or better performance than Lattice-LSTM and its follow-ups. It also shows that our method can be easily transferred across different neural architectures.
Named Entity Recognition (NER) is concerned with identifying named entities, such as person, location, product, and organization names, in unstructured text. In languages where words are naturally separated (e.g., English), NER was conventionally formulated as a sequence labeling problem, and the state-of-the-art results have been achieved by neural-network-based models BIBREF1, BIBREF2, BIBREF3, BIBREF4. Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not segmented beforehand. Thus, one common practice in Chinese NER is to first perform word segmentation using an existing CWS system and then apply a word-level sequence labeling model to the segmented sentence BIBREF5, BIBREF6. However, it is inevitable that the CWS system will wrongly segment some query sequences. This will, in turn, result in entity boundary detection errors and even entity category prediction errors in the subsequent NER. Take the character sequence “南京市 (Nanjing) / 长江大桥 (Yangtze River Bridge)" as an example, where “/" indicates the gold segmentation result. If the sequence is segmented into “南京 (Nanjing) / 市长 (mayor) / 江大桥 (Daqiao Jiang)", the word-based NER system is definitely not able to correctly recognize “南京市 (Nanjing)" and “长江大桥 (Yangtze River Bridge)" as two entities of the location type. Instead, it may incorrectly treat “南京 (Nanjing)" as a location entity and predict “江大桥 (Daqiao Jiang)" to be a person's name. Therefore, some works resort to performing Chinese NER directly at the character level, and it has been shown that this practice can achieve better performance BIBREF7, BIBREF8, BIBREF9, BIBREF0. A drawback of the purely character-based NER method is that word information, which has been proved to be useful, is not fully exploited. With this consideration, BIBREF0 proposed incorporating a word lexicon into the character-based NER model. In addition, instead of heuristically choosing a word for a character when it matches multiple words of the lexicon, they proposed preserving all matched words of the character, leaving the subsequent NER model to determine which matched word to apply. To achieve this, they introduced an elaborate modification to the LSTM-based sequence modeling layer of the LSTM-CRF model BIBREF1 to jointly model the character sequence and all of its matched words. Experimental studies on four public Chinese NER datasets show that Lattice-LSTM can achieve comparable or better performance on Chinese NER than existing methods. Although successful, there is a big problem in Lattice-LSTM that limits its application in many industrial areas, where real-time NER responses are needed: its model architecture is quite complicated. This slows down its inference speed and makes it difficult to perform training and inference in parallel. In addition, it is far from easy to transfer the structure of Lattice-LSTM to other neural-network architectures (e.g., convolutional neural networks and transformers), which may be more suitable for some specific datasets. In this work, we aim to find an easier way to realize the idea of Lattice-LSTM, i.e., incorporating all matched words of the sentence into the character-based NER model. The first principle of our method design is to achieve a fast inference speed. To this end, we propose encoding the matched words, obtained from the lexicon, into the representations of characters. Compared with Lattice-LSTM, this method is more concise and easier to implement.
It avoids complicated model architecture design and thus has a much faster inference speed. It can also be quickly adapted to any appropriate neural architecture without redesign. Given an existing neural character-based NER model, we only have to modify its character representation layer to introduce the word lexicon. In addition, experimental studies on four public Chinese NER datasets show that our method can even achieve better performance than Lattice-LSTM when applying the LSTM-CRF model. Our source code is published at https://github.com/v-mipeng/LexiconAugmentedNER.
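A sketch of the core idea under our own simplifying assumptions: for each character, pool the embeddings of all lexicon words matching at its position and concatenate the pooled vector onto the character embedding (the method's actual B/M/E/S grouping of matches is collapsed into a single pool here).

```python
# Lexicon-augmented character representations with a toy lexicon and
# random embeddings; real systems use pretrained word/char vectors.
import numpy as np

lexicon = {'南京': np.ones(4), '南京市': 2 * np.ones(4), '市长': 3 * np.ones(4)}
char_emb = {c: np.random.randn(4) for c in '南京市长江大桥'}

def augment(sentence):
    reps = []
    for i, c in enumerate(sentence):
        # collect lexicon words covering position i in this sentence
        matches = [v for w, v in lexicon.items()
                   if w in sentence and sentence.find(w) <= i < sentence.find(w) + len(w)]
        pooled = np.mean(matches, axis=0) if matches else np.zeros(4)
        reps.append(np.concatenate([char_emb[c], pooled]))
    return np.stack(reps)

print(augment('南京市长江大桥').shape)   # (7, 8): char dim + pooled lexicon dim
```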
710
What evaluation metrics and criteria were used to evaluate the output of the cascaded multimodal speech translation?
BLEU scores
Attentional sequence-to-sequence models have become the new standard for machine translation, but one challenge of such models is a significant increase in training and decoding cost compared to phrase-based systems. Here, we focus on efficient decoding, with the goal of achieving accuracy close to the state-of-the-art in neural machine translation (NMT), while achieving CPU decoding speed/throughput close to that of a phrasal decoder. We approach this problem from two angles: First, we describe several techniques for speeding up an NMT beam search decoder, which obtain a 4.4x speedup over a very efficient baseline decoder without changing the decoder output. Second, we propose a simple but powerful network architecture which uses an RNN (GRU/LSTM) layer at the bottom, followed by a series of stacked fully-connected layers applied at every timestep. This architecture achieves similar accuracy to a deep recurrent model, at a small fraction of the training and decoding cost. By combining these techniques, our best system achieves a very competitive accuracy of 38.3 BLEU on WMT English-French NewsTest2014, while decoding at 100 words/sec on a single-threaded CPU. We believe this is the best published accuracy/speed trade-off of an NMT system.
Attentional sequence-to-sequence models have become the new standard for machine translation over the last two years, and with the unprecedented improvements in translation accuracy comes a new set of technical challenges. One of the biggest challenges is the high training and decoding cost of these neural machine translation (NMT) systems, which is often at least an order of magnitude higher than that of a phrase-based system trained on the same data. For instance, phrasal MT systems were able to achieve single-threaded decoding speeds of 100-500 words/sec on decade-old CPUs BIBREF0 , while BIBREF1 reported single-threaded decoding speeds of 8-10 words/sec on a shallow NMT system. BIBREF2 was able to reach CPU decoding speeds of 100 words/sec for a deep model, but used 44 CPU cores to do so. There has been recent work on speeding up decoding by reducing the search space BIBREF3 , but little on computational improvements. In this work, we consider a production scenario which requires low-latency, high-throughput NMT decoding. We focus on CPU-based decoders, since GPU/FPGA/ASIC-based decoders require specialized hardware deployment and logistical constraints such as batch processing. Efficient CPU decoders can also be used for on-device mobile translation. We focus on single-threaded decoding and single-sentence processing, since multiple threads can be used to reduce latency but not total throughput. We approach this problem from two angles: In Section "Decoder Speed Improvements" , we describe a number of techniques for improving the speed of the decoder, and obtain a 4.4x speedup over a highly efficient baseline. These speedups do not affect decoding results, so they can be applied universally. In Section "Model Improvements" , we describe a simple but powerful network architecture which uses a single RNN (GRU/LSTM) layer at the bottom with a large number of fully-connected (FC) layers on top, and obtains improvements similar to a deep RNN model at a fraction of the training and decoding cost.
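A hedged sketch of the second angle, the bottom-RNN-plus-stacked-FC architecture; the layer counts, sizes, activation and residual connections below are our guesses, not the paper's exact configuration.

```python
# One GRU layer at the bottom, followed by a stack of per-timestep
# fully-connected layers — cheap, parallelisable depth without recurrence.
import torch.nn as nn

class RNNPlusFC(nn.Module):
    def __init__(self, d_in, d_hid, n_fc=4):
        super().__init__()
        self.rnn = nn.GRU(d_in, d_hid, batch_first=True)
        self.fcs = nn.ModuleList(
            nn.Sequential(nn.Linear(d_hid, d_hid), nn.Tanh()) for _ in range(n_fc))

    def forward(self, x):
        h, _ = self.rnn(x)      # (B, T, d_hid): the only recurrent layer
        for fc in self.fcs:
            h = h + fc(h)       # residual FC applied independently at each timestep
        return h
```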
711
What are the two decoding functions?
a linear projection and a bijective function with a continuous transformation through the 'affine coupling layer' of (Dinh et al., 2016).
This paper describes the cascaded multimodal speech translation systems developed by Imperial College London for the IWSLT 2019 evaluation campaign. The architecture consists of an automatic speech recognition (ASR) system followed by a Transformer-based multimodal machine translation (MMT) system. While the ASR component is identical across the experiments, the MMT model varies in terms of the way of integrating the visual context (simple conditioning vs. attention), the type of visual features exploited (pooled, convolutional, action categories) and the underlying architecture. For the latter, we explore both the canonical transformer and its deliberation version with additive and cascade variants which differ in how they integrate the textual attention. Upon conducting extensive experiments, we found that (i) the explored visual integration schemes often harm the translation performance for the transformer and additive deliberation, but considerably improve the cascade deliberation; (ii) the transformer and cascade deliberation integrate the visual modality better than the additive deliberation, as shown by the incongruence analysis.
The recently introduced How2 dataset BIBREF2 has stimulated research around multimodal language understanding through the availability of 300h instructional videos, English subtitles and their Portuguese translations. For example, BIBREF3 successfully demonstrates that semantically rich action-based visual features are helpful in the context of machine translation (MT), especially in the presence of input noise that manifests itself as missing source words. Therefore, we hypothesize that a speech-to-text translation (STT) system may also benefit from the visual context, especially in the traditional cascaded framework BIBREF4, BIBREF5 where noisy automatic transcripts are obtained from an automatic speech recognition (ASR) system and further translated into the target language using a machine translation (MT) component. The dataset enables the design of such multimodal STT systems, since we have access to a bilingual corpus as well as the corresponding audio-visual stream. Hence, in this paper, we propose a cascaded multimodal STT system with two components: (i) an English ASR system trained on the How2 dataset and (ii) a transformer-based BIBREF0 visually grounded MMT system. MMT is a relatively new research topic which is interested in leveraging auxiliary modalities such as audio or vision in order to improve translation performance BIBREF6. MMT has proved effective in scenarios such as disambiguation BIBREF7 or when the source sentences are corrupted BIBREF8. So far, MMT has mostly focused on integrating visual features into neural MT (NMT) systems using visual attention through convolutional feature maps BIBREF9, BIBREF10 or visual conditioning of encoder/decoder blocks through fully-connected features BIBREF11, BIBREF12, BIBREF13, BIBREF14. Inspired by previous research in MMT, we explore several multimodal integration schemes using action-level video features. Specifically, we experiment with visually conditioning the encoder output and adding visual attention to the decoder. We further extend the proposed schemes to the deliberation variant BIBREF1 of the canonical transformer in two ways: additive and cascade multimodal deliberation, which are distinct in their textual attention regimes. Overall, the results show that multimodality in general leads to performance degradation for the canonical transformer and the additive deliberation variant, but can result in substantial improvements for the cascade deliberation. Our incongruence analysis BIBREF15 reveals that the transformer and cascade deliberation are more sensitive to and therefore more reliant on visual features for translation, whereas the additive deliberation is much less impacted. We also observe that incongruence sensitivity and translation performance are not necessarily correlated.
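As an illustration of one of the integration schemes named above, here is a sketch of conditioning the encoder output on a pooled video feature; the shapes and the projection-plus-tanh choice are our assumptions, not the system's exact design.

```python
# A pooled action-level video feature is projected into model space and
# added to every encoder state ("encoder-output conditioning").
import torch
import torch.nn as nn

d_model, d_vid = 256, 2048
proj = nn.Linear(d_vid, d_model)

enc_out = torch.randn(8, 40, d_model)   # (batch, src_len, d_model)
video = torch.randn(8, d_vid)           # pooled action features per clip

enc_out = enc_out + torch.tanh(proj(video)).unsqueeze(1)  # broadcast over time
print(enc_out.shape)                    # unchanged: (8, 40, 256)
```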
715
What are the domains covered in the dataset?
Alarm, Bank, Bus, Calendar, Event, Flight, Home, Hotel, Media, Movie, Music, RentalCar, Restaurant, RideShare, Service, Travel, Weather
Over the past decade, knowledge graphs became popular for capturing structured domain knowledge. Relational learning models enable the prediction of missing links inside knowledge graphs. More specifically, latent distance approaches model the relationships among entities via a distance between latent representations. Translating embedding models (e.g., TransE) are among the most popular latent distance approaches which use one distance function to learn multiple relation patterns. However, they are not capable of capturing symmetric relations. They also force relations with reflexive patterns to become symmetric and transitive. In order to improve distance based embedding, we propose multi-distance embeddings (MDE). Our solution is based on the idea that by learning independent embedding vectors for each entity and relation one can aggregate contrasting distance functions. Benefiting from MDE, we also develop supplementary distances resolving the above-mentioned limitations of TransE. We further propose an extended loss function for distance based embeddings and show that MDE and TransE are fully expressive using this loss function. Furthermore, we obtain a bound on the size of their embeddings for full expressivity. Our empirical results show that MDE significantly improves the translating embeddings and outperforms several state-of-the-art embedding models on benchmark datasets.
While machine learning methods conventionally model functions given sample inputs and outputs, a subset of statistical relational learning (SRL) BIBREF0 , BIBREF1 approaches specifically aims to model “things” (entities) and the relations between them. These methods usually model human knowledge structured in the form of multi-relational Knowledge Graphs (KGs). KGs allow semantically rich queries in search engines, natural language processing (NLP) and question answering. However, they usually miss a substantial portion of the true relations, i.e. they are incomplete. Therefore, the prediction of missing links/relations in KGs is a crucial challenge for SRL approaches. A KG usually consists of a set of facts. A fact is a triple (head, relation, tail), where head and tail are called entities. Among the SRL models, distance-based knowledge graph embeddings are popular because of their simplicity, their low number of parameters, and their efficiency on large-scale datasets. Specifically, their simplicity allows integrating them into many models. Previous studies have integrated them with logical rule embeddings BIBREF2 , adopted them to encode temporal information BIBREF3 and applied them to find equivalent entities between multi-language datasets BIBREF4 . Since the introduction of the first multi-relational distance-based method BIBREF5 , many variations have been published (e.g., TransH BIBREF6 , TransR BIBREF7 , TransD BIBREF8 , STransE BIBREF9 ) that – despite their improvements in the accuracy of the model – suffer from several inherent limitations of TransE that restrict their expressiveness. As BIBREF10 , BIBREF11 noted, within the family of distance-based embeddings, reflexive relations are usually forced to be symmetric and transitive. In addition, those approaches are unable to learn symmetric relations. In this work, we put forward a distance-based approach that addresses the limitations of these models. Since our approach consists of several distances as objectives, we dub it multi-distance embeddings (MDE). We show that: 1. TransE and MDE are fully expressive; 2. MDE is able to learn several relational patterns; 3. it is extendable; 4. it shows competitive performance in the empirical evaluations; and 5. we also develop an algorithm to find the limits for the limit-based loss function to use in embedding models.
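A sketch of the aggregation idea behind MDE follows; the specific distance terms and fixed weights below are illustrative assumptions, with independent embedding vectors per entity and relation for each term, as the abstract describes.

```python
# Several complementary translational distances over independent embeddings
# of the same triple, combined with fixed weights.
import numpy as np

def mde_score(h, r, t, w=(1.0, 1.0, 1.0)):
    d1 = np.linalg.norm(h[0] + r[0] - t[0])   # TransE-style term
    d2 = np.linalg.norm(t[1] + r[1] - h[1])   # reversed term, helps symmetry
    d3 = np.linalg.norm(h[2] + t[2] - r[2])   # a third contrasting term
    return w[0] * d1 + w[1] * d2 + w[2] * d3

d = 16
# one independent embedding vector per term, for each of head, relation, tail
h, r, t = (np.random.randn(3, d) for _ in range(3))
print(mde_score(h, r, t))
```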
717
How are the two different models trained?
They pre-train the models using 600,000 articles as an unsupervised dataset and then fine-tune the models on the small training set.
Neural machine translation (NMT) is a recent and effective technique which has led to remarkable improvements over conventional machine translation techniques. The proposed neural machine translation model developed for the Gujarati language contains an encoder-decoder with an attention mechanism. In India, almost all languages originate from their ancestral language - Sanskrit. They share inevitable similarities, including lexical and named entity similarity. Translating into Indic languages has always been a challenging task. In this paper, we present a neural machine translation (NMT) system that can efficiently translate Indic languages like Hindi and Gujarati, which together cover more than 58.49 percent of the total speakers in the country. We compare the performance of our NMT model using automatic evaluation metrics such as BLEU, perplexity and the TER metric. A comparison of our network with Google Translate is also presented, where our network outperforms it by a margin of 6 BLEU points on English-Gujarati translation.
India is a highly diverse multilingual country. People of different regions use their own regional languages, which gives India the world's second-highest number of languages. The languages spoken in India belong to several language families, the two main ones being the Indo-Aryan languages, spoken by 78.05 percent of Indians BIBREF0 , and the Dravidian languages, spoken by 19.64 percent of Indians BIBREF0 . Hindi and Gujarati are among the constitutional languages of India, with nearly 601,688,479 speakers BIBREF0 , almost 59 percent of the total country population BIBREF0 . The Constitution of India, under Article 343, offers English as a second additional official language, with only 226,449 Indian speakers, nearly 0.02 percent of the total country population BIBREF0 . Communication and information exchange among people is necessary for sharing knowledge, feelings, opinions, facts, and thoughts. Varieties of English are used globally for human communication, and the content available on the Internet is overwhelmingly dominated by English. However, only 20 percent of the world population speaks English, and in India it is only 0.02 percent BIBREF0 . It is not possible to rely on human translators in a country with this much language diversity. To bridge this vast language gap we need effective and accurate computational approaches which require minimum human intervention. This task can be effectively done using machine translation. Machine Translation (MT) is the task of computationally translating human spoken or natural language text or speech from one language to another with minimum human intervention. Machine translation aims to generate translations which have the same meaning as the source sentence and are grammatically correct in the target language. Initial work on MT started in the early 1950s BIBREF1 , and the field has advanced rapidly since the 1990s due to the availability of more computational capacity and training data. Since then, a number of approaches have been proposed to achieve more and more accurate machine translation: rule-based translation, knowledge-based translation, corpus-based translation, hybrid translation, and statistical machine translation (SMT) BIBREF1 . All of these approaches have their own merits and demerits. Among them, SMT, a subcategory of corpus-based translation, has been widely used as it produces better results than the other previously available techniques. The use of neural networks in machine translation has become popular in recent years around the globe, and this novel technique is known as Neural Machine Translation, or NMT. In recent years, much work has been carried out on NMT, though comparatively little on Indian languages BIBREF1 . We find that the NMT approach for Indic languages is still a challenging task, especially bilingual machine translation. In our past research work, we developed a sequence-to-sequence model based machine translation system for the Hindi language BIBREF2 . In this work, we improve that model and apply it to the English-Gujarati language pair. We have developed a system that uses a neural model based on the attention mechanism. Our proposed attention-based NMT model is evaluated with the BLEU, perplexity and TER metrics.
Section 2 briefly reviews related work in the domain of machine translation; Section 3 presents the fundamentals of machine translation with neural networks using the attention mechanism; Section 4 gives a comparative analysis of various automatic evaluation metrics; Section 5 introduces the proposed bilingual neural machine translation models; Section 6 covers the implementation; the results generated with our attention-based NMT model are shown in Section 7; and the conclusion of the paper is presented in Section 8.
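A minimal sketch of one decoder step with dot-product attention, the mechanism at the heart of the proposed model; the shapes and the choice of scoring function are our assumptions, not the paper's exact design.

```python
# One attention step: score encoder states against the decoder state,
# normalise, and form a context vector for predicting the next token.
import torch
import torch.nn.functional as F

enc_states = torch.randn(1, 12, 128)   # (batch, src_len, hidden)
dec_state = torch.randn(1, 128)        # current decoder hidden state

scores = torch.bmm(enc_states, dec_state.unsqueeze(2)).squeeze(2)  # (1, 12)
weights = F.softmax(scores, dim=1)                                 # attention weights
context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)   # (1, 128)
# `context` is combined with the decoder state to predict the next token
print(context.shape)
```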
718
Are this techniques used in training multilingual models, on what languages?
English to French and English to German
We investigate the recently developed Bidirectional Encoder Representations from Transformers (BERT) model for the hyperpartisan news detection task. Using a subset of hand-labeled articles from SemEval as a validation set, we test the performance of different parameters for BERT models. We find that accuracy from two different BERT models using different proportions of the articles is consistently high, with our best-performing model on the validation set achieving 85% accuracy and the best-performing model on the test set achieving 77%. We further determined that our model exhibits strong consistency, labeling independent slices of the same article identically. Finally, we find that randomizing the order of word pieces dramatically reduces validation accuracy (to approximately 60%), but that shuffling groups of four or more word pieces maintains an accuracy of about 80%, indicating the model mainly gains value from local context.
SemEval Task 4 BIBREF1 tasked participating teams with identifying news articles that are misleading to their readers, a phenomenon often associated with “fake news” distributed by partisan sources BIBREF2. We approach the problem through transfer learning to fine-tune a model for the document classification task. We use the BERT model based on the implementation of the github repository pytorch-pretrained-bert on some of the data provided by Task 4 of SemEval. BERT has been used to learn useful representations for a variety of natural language tasks, achieving state-of-the-art performance in these tasks after being fine-tuned BIBREF0. It is a language representation model that is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. Thus, it may be able to adequately account for complex characteristics such as blind, prejudiced reasoning and extreme bias that are important for reliably identifying hyperpartisanship in articles. We show that BERT performs well on hyperpartisan sentiment classification. We use unsupervised learning on the set of 600,000 source-labeled articles provided as part of the task, then train using supervised learning on the 645 hand-labeled articles. We believe that learning on source-labeled articles would bias our model to learn the partisanship of a source, instead of the article. Additionally, the accuracy of the model on validation data labeled by article differs heavily from its accuracy when the articles are labeled by publisher. Thus, we decided to use a small subset of the hand-labeled articles as our validation set for all of our experiments. As the articles are too large for the model to be trained on the full text each time, we consider the number of word pieces that the model uses from each article a hyperparameter. A second major issue we explore is what information the model is using to make decisions. This is particularly important for BERT because neural models are often viewed as black boxes. This view is problematic for a task like hyperpartisan news detection, where users may reasonably want explanations as to why an article was flagged. We specifically explore how much of the article is needed by the model, how consistently the model behaves on an article, and whether the model focuses on individual words and phrases or uses more global understanding. We find that the model only needs a short amount of context (100 word pieces), is very consistent throughout an article, and that most of the model's accuracy arises from locally examining the article. In this paper, we demonstrate the effectiveness of BERT models for the hyperpartisan news classification task, with validation accuracy as high as 85% and test accuracy as high as 77%. We also make significant investigations into the importance of different factors relating to the articles and training in BERT's success. The remainder of this paper is organized as follows. Section SECREF2 describes previous work on the BERT model and semi-supervised learning. Section SECREF3 outlines our model, data, and experiments. Our results are presented in Section SECREF4, with their ramifications discussed in Section SECREF5. We close with an introduction to our system's namesake, fictional journalist Clint Buchanan, in Section SECREF6.
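The word-piece shuffling experiment mentioned in the abstract (shuffling groups of four or more pieces largely preserves accuracy) can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and whitespace tokenization stands in for BERT's word-piece tokenizer:

```python
import random

def shuffle_in_groups(word_pieces, group_size):
    """Split the sequence into contiguous groups of `group_size` and
    shuffle the order of the groups: global structure is destroyed,
    but local context inside each group is preserved."""
    groups = [word_pieces[i:i + group_size]
              for i in range(0, len(word_pieces), group_size)]
    random.shuffle(groups)
    return [piece for group in groups for piece in group]

pieces = "the senator claimed the report was biased beyond repair".split()
print(shuffle_in_groups(pieces, 4))   # local 4-token windows stay intact
print(shuffle_in_groups(pieces, 1))   # degenerates to a full word shuffle
```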
720
What metric is used to measure performance?
Accuracy and F1 score for supervised tasks, Pearson's and Spearman's correlation for unsupervised tasks
Network embeddings, which learn low-dimensional representations for each vertex in a large-scale network, have received considerable attention in recent years. For a wide range of applications, vertices in a network are typically accompanied by rich textual information such as user profiles, paper abstracts, etc. We propose to incorporate semantic features into network embeddings by matching important words between text sequences for all pairs of vertices. We introduce a word-by-word alignment framework that measures the compatibility of embeddings between word pairs, and then adaptively accumulates these alignment features with a simple yet effective aggregation function. In experiments, we evaluate the proposed framework on three real-world benchmarks for downstream tasks, including link prediction and multi-label vertex classification. Results demonstrate that our model outperforms state-of-the-art network embedding methods by a large margin.
Networks are ubiquitous, with prominent examples including social networks (e.g., Facebook, Twitter) or citation networks of research papers (e.g., arXiv). When analyzing data from these real-world networks, traditional methods often represent vertices (nodes) as one-hot representations (containing the connectivity information of each vertex with respect to all other vertices), usually suffering from issues related to the inherent sparsity of large-scale networks. This results in models that are not able to fully capture the relationships between vertices of the network BIBREF0 , BIBREF1 . Alternatively, network embedding (i.e., network representation learning) has been considered, representing each vertex of a network with a low-dimensional vector that preserves information on its similarity relative to other vertices. This approach has attracted considerable attention in recent years BIBREF2 , BIBREF0 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . Traditional network embedding approaches focus primarily on learning representations of vertices that preserve local structure, as well as internal structural properties of the network. For instance, Isomap BIBREF9 , LINE BIBREF3 , and Grarep BIBREF10 were proposed to preserve first-, second-, and higher-order proximity between nodes, respectively. DeepWalk BIBREF0 , which learns vertex representations from random-walk sequences, similarly, only takes into account structural information of the network. However, in real-world networks, vertices usually contain rich textual information (e.g., user profiles in Facebook, paper abstracts in arXiv, user-generated content on Twitter, etc.), which may be leveraged effectively for learning more informative embeddings. To address this opportunity, BIBREF11 proposed text-associated DeepWalk, to incorporate textual information into the vectorial representations of vertices (embeddings). BIBREF12 employed deep recurrent neural networks to integrate the information from vertex-associated text into network representations. Further, BIBREF13 proposed to more effectively model the semantic relationships between vertices using a mutual attention mechanism. Although these methods have demonstrated performance gains over structure-only network embeddings, the relationship between text sequences for a pair of vertices is accounted for solely by comparing their sentence embeddings. However, as shown in Figure 1 , to assess the similarity between two research papers, a more effective strategy would compare and align (via local-weighting) individual important words (keywords) within a pair of abstracts, while information from other words (e.g., stop words) that tend to be less relevant can be effectively ignored (down-weighted). This alignment mechanism is difficult to accomplish in models where text sequences are first embedded into a common space and then compared in pairs BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 . We propose to learn a semantic-aware Network Embedding (NE) that incorporates word-level alignment features abstracted from text sequences associated with vertex pairs. Given a pair of sentences, our model first aligns each word within one sentence with keywords from the other sentence (adaptively up-weighted via an attention mechanism), producing a set of fine-grained matching vectors. These features are then accumulated via a simple but efficient aggregation function, obtaining the final representation for the sentence. 
As a result, the word-by-word alignment features (as illustrated in Figure 1 ) are explicitly and effectively captured by our model. Further, the learned network embeddings under our framework are adaptive to the specific (local) vertices that are considered, and thus are context-aware and especially suitable for downstream tasks, such as link prediction. Moreover, since the word-by-word matching procedure introduced here is highly parallelizable and does not require any complex encoding networks, such as Long Short-Term Memory (LSTM) or Convolutional Neural Networks (CNNs), our framework requires significantly less time for training, which is attractive for large-scale network applications. We evaluate our approach on three real-world datasets spanning distinct network-embedding-based applications: link prediction, vertex classification and visualization. We show that the proposed word-by-word alignment mechanism efficiently incorporates textual information into the network embedding, and consistently exhibits superior performance relative to several competitive baselines. Analyses considering the extracted word-by-word pairs further validate the effectiveness of the proposed framework.
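A rough numpy sketch of this word-by-word alignment idea follows; the dot-product compatibility and mean-pooling aggregation are assumptions for illustration, and the model's actual scoring and aggregation functions may differ:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align_and_aggregate(seq_a, seq_b):
    """seq_a: (m, d), seq_b: (n, d) word embeddings for the texts of two
    vertices. Each word in A attends over B's words (up-weighting B's
    keywords), producing a matching vector per A-word; these are then
    pooled into one context-aware representation."""
    compat = seq_a @ seq_b.T            # (m, n) word-pair compatibility
    weights = softmax(compat, axis=1)   # align each A-word to B's words
    matched = weights @ seq_b           # (m, d) matching vectors
    return matched.mean(axis=0)         # simple aggregation into (d,)

a = np.random.randn(7, 16)   # e.g., abstract of paper A
b = np.random.randn(9, 16)   # e.g., abstract of paper B
print(align_and_aggregate(a, b).shape)   # (16,)
```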
722
How do Zipf and Herdan-Heap's laws differ?
Zipf's law describes how word frequency changes with frequency rank, while the Heaps-Herdan law describes the number of distinct words in large texts (it is assumed that Heaps-Herdan is a consequence of Zipf's law)
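To make the contrast concrete, both exponents can be estimated from a token stream; the sketch below uses synthetic Zipfian data and least-squares fits in log-log space, as an illustration rather than the methodology of any particular paper:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
tokens = [f"w{z}" for z in rng.zipf(1.5, 20000)]  # synthetic Zipfian "words"

# Zipf: frequency ~ rank^(-alpha); fit the slope in log-log space
freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)
alpha = -np.polyfit(np.log(ranks), np.log(freqs), 1)[0]

# Heaps-Herdan: vocabulary size V(n) ~ n^beta as text length n grows
seen, vocab_curve = set(), []
for tok in tokens:
    seen.add(tok)
    vocab_curve.append(len(seen))
n = np.arange(1, len(tokens) + 1)
beta = np.polyfit(np.log(n), np.log(vocab_curve), 1)[0]

print(f"Zipf exponent ~ {alpha:.2f}, Heaps exponent ~ {beta:.2f}")
```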
Recently, bidirectional recurrent network language models (bi-RNNLMs) have been shown to outperform standard, unidirectional, recurrent neural network language models (uni-RNNLMs) on a range of speech recognition tasks. This indicates that future word context information beyond the word history can be useful. However, bi-RNNLMs pose a number of challenges as they make use of the complete previous and future word context information. This impacts both training efficiency and their use within a lattice rescoring framework. In this paper these issues are addressed by proposing a novel neural network structure, succeeding word RNNLMs (su-RNNLMs). Instead of using a recurrent unit to capture the complete future word contexts, a feedforward unit is used to model a finite number of succeeding, future words. This model can be trained much more efficiently than bi-RNNLMs and can also be used for lattice rescoring. Experimental results on a meeting transcription task (AMI) show that the proposed model consistently outperforms uni-RNNLMs and yields only a slight degradation compared to bi-RNNLMs in N-best rescoring. Additionally, performance improvements can be obtained using lattice rescoring and subsequent confusion network decoding.
Language models (LMs) are crucial components in many applications, such as speech recognition and machine translation. The aim of language models is to compute the probability of any given sentence $w_1^N = w_1, \ldots, w_N$, which can be calculated as $P(w_1^N) = \prod_{i=1}^{N} P(w_i \mid w_1^{i-1})$. The task of LMs is thus to calculate the probability of word $w_i$ given its previous history $w_1^{i-1}$. $n$-gram LMs BIBREF0 and neural network based language models (NNLMs) BIBREF1, BIBREF2 are two widely used types of language model. In $n$-gram LMs, the most recent $n-1$ words are used as an approximation of the complete history, thus $P(w_i \mid w_1^{i-1}) \approx P(w_i \mid w_{i-n+1}^{i-1})$. This $n$-gram assumption can also be used to construct $n$-gram feedforward NNLMs BIBREF1. In contrast, recurrent neural network LMs (RNNLMs) model the complete history via a recurrent connection. Most previous work on language models has focused on utilising history information; future word context has not been extensively investigated. There have been several attempts to incorporate future context information into recurrent neural network language models. Individual forward and backward RNNLMs can be built, and these two LMs combined with a log-linear interpolation BIBREF3. In BIBREF4, succeeding words were incorporated into an RNNLM within a Maximum Entropy framework. BIBREF5 investigated the use of bidirectional RNNLMs (bi-RNNLMs) for speech recognition. For a broadcast news task, sigmoid based RNNLMs gave small gains, while no performance improvement was obtained when using long short-term memory (LSTM) based RNNLMs. More recently, bi-RNNLMs have been shown to produce consistent, and significant, performance improvements over unidirectional RNNLMs (uni-RNNLMs) on a range of speech recognition tasks BIBREF6. Though they can yield performance gains, bi-RNNLMs pose several challenges for both model training and inference as they require the complete previous and future word context information to be taken into account. It is difficult to parallelise training efficiently. Lattice rescoring is also complicated for these LMs as future context needs to be incorporated. This means that the form of approximation used for uni-RNNLMs BIBREF7 is not applicable. Hence, N-best rescoring is normally used BIBREF4, BIBREF5, BIBREF6. However, the ability to manipulate lattices is very important in many speech applications. Lattices can be used for a wide range of downstream applications, such as confidence score estimation BIBREF8, keyword search BIBREF9 and confusion network decoding BIBREF10. In order to address these issues, a novel model structure, succeeding word RNNLMs (su-RNNLMs), is proposed in this paper. Instead of using a recurrent unit to capture the complete future word context as in bi-RNNLMs, a feedforward unit is used to model a small, fixed-length number of succeeding words. This allows existing efficient training BIBREF11 and lattice rescoring BIBREF7 algorithms developed for uni-RNNLMs to be extended to the proposed su-RNNLMs. Using these extended algorithms, compact lattices can be generated with su-RNNLMs, supporting lattice based downstream processing. The rest of this paper is organized as follows. Section SECREF2 gives a brief review of RNNLMs, including both unidirectional and bidirectional RNNLMs. The proposed model with succeeding words (su-RNNLMs) is introduced in Section SECREF3, followed by a description of the lattice rescoring algorithm in Section SECREF4. Section SECREF5 discusses the interpolation of language models.
The experimental results are presented in Section SECREF6 and conclusions are drawn in Section SECREF7 .
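The chain-rule factorisation and its n-gram truncation described in this introduction can be illustrated with a toy trigram model; the add-one smoothing below is an assumption made purely so the sketch runs on unseen histories:

```python
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
n = 3  # trigram: condition only on the two most recent words

ngrams = Counter(tuple(corpus[i:i + n]) for i in range(len(corpus) - n + 1))
hists  = Counter(tuple(corpus[i:i + n - 1]) for i in range(len(corpus) - n + 2))
vocab  = set(corpus)

def ngram_prob(word, history):
    """P(w | full history) approximated by P(w | last n-1 words)."""
    h = tuple(history[-(n - 1):])
    return (ngrams[h + (word,)] + 1) / (hists[h] + len(vocab))  # add-one

def sentence_prob(sentence):
    p = 1.0
    for i, w in enumerate(sentence):   # chain rule: product over positions
        p *= ngram_prob(w, sentence[:i])
    return p

print(sentence_prob("the cat sat on the rug .".split()))
```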
724
How are the synthetic examples generated?
Random perturbation of Wikipedia sentences using mask-filling with BERT, backtranslation, and random word dropout
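Of the three perturbation types named in this answer, word dropout and masking are the easiest to sketch without pretrained models; the function below is a hypothetical illustration (probabilities and mask token are assumptions), with the mask positions standing in for spans that BERT would re-fill and backtranslation handled elsewhere:

```python
import random

def perturb(sentence, drop_prob=0.15, mask_prob=0.15, mask_token="[MASK]"):
    """Randomly drop some words and replace others with a mask token.
    In the full scheme, masked positions would be re-filled by BERT,
    and further variants produced by round-trip (back)translation."""
    out = []
    for word in sentence.split():
        r = random.random()
        if r < drop_prob:
            continue                      # word dropout
        elif r < drop_prob + mask_prob:
            out.append(mask_token)        # placeholder for mask-filling
        else:
            out.append(word)
    return " ".join(out)

random.seed(0)
print(perturb("BLEURT is pre-trained on millions of synthetic sentence pairs"))
```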
Automatic text summarization is generally considered a challenging task in the NLP community. One of the challenges is that large, publicly available datasets are relatively rare and difficult to construct. The problem is even worse for low-resource languages such as Indonesian. In this paper, we present IndoSum, a new benchmark dataset for Indonesian text summarization. The dataset consists of news articles and manually constructed summaries. Notably, the dataset is almost 200x larger than the previous Indonesian summarization dataset of the same domain. We evaluated various extractive summarization approaches and obtained encouraging results which demonstrate the usefulness of the dataset and provide baselines for future research. The code and the dataset are available online under permissive licenses.
The goal of the text summarization task is to produce a summary from a set of documents. The summary should retain important information and be reasonably shorter than the original documents BIBREF0. When the set of documents contains only a single document, the task is usually referred to as single-document summarization. There are two kinds of summarization characterized by how the summary is produced: extractive and abstractive. Extractive summarization attempts to extract a few important sentences verbatim from the original document. In contrast, abstractive summarization tries to produce an abstract which may contain sentences that do not exist in or are paraphrased from the original document. Despite quite a few studies on Indonesian text summarization, none of them was trained or evaluated on a large, publicly available dataset. Also, although ROUGE BIBREF1 is the standard intrinsic evaluation metric for English text summarization, for Indonesian it does not seem so. Previous works rarely state explicitly that their evaluation was performed with ROUGE. The lack of a benchmark dataset and the different evaluation metrics make comparison among Indonesian text summarization research difficult. In this work, we introduce IndoSum, a new benchmark dataset for Indonesian text summarization, and evaluated several well-known extractive single-document summarization methods on the dataset. The dataset consists of online news articles and has almost 200 times more documents than the next largest one of the same domain BIBREF2. To encourage further research in this area, we make our dataset publicly available. In short, the contribution of this work is two-fold: The state-of-the-art result on the dataset, although impressive, is still significantly lower than the maximum possible ROUGE score. This result suggests that the dataset is sufficiently challenging to be used as an evaluation benchmark for future research on Indonesian text summarization.
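Since ROUGE BIBREF1 is the metric at issue here, a bare-bones ROUGE-N recall against a single reference looks roughly as follows; real implementations add stemming, multiple references, and F-measure variants:

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """Fraction of reference n-grams recovered by the candidate summary."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split()), ngrams(reference.split())
    overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
    return overlap / max(sum(ref.values()), 1)

ref = "the government announced a new budget on monday"
cand = "a new budget was announced on monday"
print(rouge_n_recall(cand, ref, n=1))   # unigram recall: 0.75
print(rouge_n_recall(cand, ref, n=2))   # bigram recall
```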
725
By how much does the new parser outperform the current state-of-the-art?
The proposed method achieves 94.5 UAS and 92.4 LAS, compared to 94.3 and 92.2 for the best state-of-the-art greedy parser. The best state-of-the-art parser overall achieves 95.8 UAS and 94.6 LAS.
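For reference, the UAS and LAS figures quoted here count, respectively, tokens whose predicted head is correct and tokens whose head and dependency label are both correct; a minimal sketch over toy data:

```python
def uas_las(gold, pred):
    """gold/pred: lists of (head_index, label) pairs, one per token."""
    head_ok = sum(g[0] == p[0] for g, p in zip(gold, pred))
    both_ok = sum(g == p for g, p in zip(gold, pred))
    n = len(gold)
    return 100.0 * head_ok / n, 100.0 * both_ok / n

# toy sentence of 4 tokens: (head, label) per token
gold = [(2, "nsubj"), (0, "root"), (2, "dobj"), (3, "amod")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj"), (2, "amod")]
uas, las = uas_las(gold, pred)
print(f"UAS={uas:.1f}  LAS={las:.1f}")   # UAS=75.0  LAS=50.0
```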
Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.
In the last few years, research in natural language generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm BIBREF0, BIBREF1, which can tackle a wide array of tasks including translation BIBREF2, summarization BIBREF3, BIBREF4, structured-data-to-text generation BIBREF5, BIBREF6, BIBREF7, dialog BIBREF8, BIBREF9 and image captioning BIBREF10. However, progress is increasingly impeded by the shortcomings of existing metrics BIBREF7, BIBREF11, BIBREF12. Human evaluation is often the best indicator of the quality of a system. However, designing crowdsourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline. Therefore, NLG researchers commonly use automatic evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, reference-based metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact definition of similarity may range from string overlap to logical entailment. The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences. To illustrate, BLEU BIBREF13 and ROUGE BIBREF14, two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference. Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy BIBREF15, BIBREF16, BIBREF17. Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM BIBREF18, BIBREF11. Current approaches largely fall into two categories. Fully learned metrics, such as BEER, RUSE, and ESIM, are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. Conversely, hybrid metrics, such as YiSi and BERTscore, combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., token alignment rules. The first category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly. Furthermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammaticality, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed. And indeed, the iid assumption is particularly problematic in NLG evaluation because of domain drifts, which have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, especially for newer research tasks.
An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate. Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings. To this end, we introduce Bleurt, a text generation metric based on BERT BIBREF19. A key ingredient of Bleurt is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals. To demonstrate our approach, we train Bleurt for English and evaluate it under different generalization regimes. We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 BIBREF20. Ablations show that our synthetic pretraining scheme increases performance in the iid setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain.
726
What experimental evaluation is used?
root mean square error between the actual and the predicted price of Bitcoin for every minute
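The stated evaluation reduces to a single formula; a trivial sketch with hypothetical per-minute prices:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between per-minute price series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

actual    = [9210.5, 9212.0, 9208.3, 9215.1]   # hypothetical BTC prices
predicted = [9209.0, 9213.5, 9210.0, 9212.9]
print(rmse(actual, predicted))
```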
We present a novel transition system, based on the Covington non-projective parser, introducing non-local transitions that can directly create arcs involving nodes to the left of the current focus positions. This avoids the need for long sequences of No-Arc transitions to create long-distance arcs, thus alleviating error propagation. The resulting parser outperforms the original version and achieves the best accuracy on the Stanford Dependencies conversion of the Penn Treebank among greedy transition-based algorithms.
Greedy transition-based parsers are popular in NLP, as they provide competitive accuracy with high efficiency. They syntactically analyze a sentence by greedily applying transitions, which read it from left to right and produce a dependency tree. However, this greedy process is prone to error propagation: one wrong choice of transition can lead the parser to an erroneous state, causing more incorrect decisions. This is especially crucial for long attachments requiring a larger number of transitions. In addition, transition-based parsers traditionally focus on only two words of the sentence and their local context to choose the next transition. The lack of a global perspective favors the presence of errors when creating arcs involving multiple transitions. As expected, transition-based parsers build short arcs more accurately than long ones BIBREF0 . Previous research such as BIBREF1 and BIBREF2 proves that the widely-used projective arc-eager transition-based parser of Nivre2003 benefits from shortening the length of transition sequences by creating non-local attachments. In particular, they augmented the original transition system with new actions whose behavior entails more than one arc-eager transition and involves a context beyond the traditional two focus words. attardi06 and sartorio13 also extended the arc-standard transition-based algorithm BIBREF3 with the same success. In the same vein, we present a novel unrestricted non-projective transition system based on the well-known algorithm by covington01fundamental that shortens the transition sequence necessary to parse a given sentence by the original algorithm, which becomes linear instead of quadratic with respect to sentence length. To achieve that, we propose new transitions that affect non-local words and are equivalent to one or more Covington actions, in a similar way to the transitions defined by Qi2017 based on the arc-eager parser. Experiments show that this novel variant significantly outperforms the original one in all datasets tested, and achieves the best reported accuracy for a greedy dependency parser on the Stanford Dependencies conversion of the WSJ Penn Treebank.
730
Could you tell me more about the metrics used for performance evaluation?
BLUE utilizes different metrics for each of the tasks: Pearson correlation coefficient, F-1 scores, micro-averaging, and accuracy
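As an illustration of one of these metrics, micro-averaging pools true positives, false positives, and false negatives over all classes before computing the score; a minimal sketch (the label set and tags are hypothetical):

```python
def micro_f1(gold, pred, labels):
    """Micro-averaged F1: pool TP/FP/FN across all classes, then
    compute a single precision, recall, and F1."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        if p == g and p in labels:
            tp += 1
        else:
            if p in labels:
                fp += 1   # predicted a label, but it was wrong
            if g in labels:
                fn += 1   # missed (or mislabeled) a gold label
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

gold = ["DRUG", "DISEASE", "O", "DRUG", "O"]
pred = ["DRUG", "O", "O", "DISEASE", "O"]
print(micro_f1(gold, pred, labels={"DRUG", "DISEASE"}))  # 0.4
```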
In this paper, we describe the system submitted for the shared task on Social Media Mining for Health Applications by the team Light. Previous works demonstrate that LSTMs have achieved remarkable performance in natural language processing tasks. We deploy an ensemble of two LSTM models. The first one is a pretrained language model appended with a classifier and takes words as input, while the second one is an LSTM model with an attention unit over it which takes character tri-grams as input. We call the ensemble of these two models Neural-DrugNet. Our system ranks 2nd in the second shared task: automatic classification of posts describing medication intake.
In recent years, there has been rapid growth in the usage of social media. People post their day-to-day happenings on a regular basis. BIBREF0 propose four tasks for detecting drug names, classifying medication intake, classifying adverse drug reactions and detecting vaccination behavior from tweets. We participated in Task 2 and Task 4. The major contribution of this work is a neural network based on an ensemble of two LSTM models, which we call Neural DrugNet. We discuss our model in Section 2. Section 3 contains the details about the experiments and pre-processing. In Section 4, we discuss the results and propose future work.
731
Why does the model improve in monolingual spaces as well?
because word pair similarity increases if the two words translate to similar parts of the cross-lingual embedding space
Inspired by the success of the General Language Understanding Evaluation benchmark, we introduce the Biomedical Language Understanding Evaluation (BLUE) benchmark to facilitate research in the development of pre-training language representations in the biomedicine domain. The benchmark consists of five tasks with ten datasets that cover both biomedical and clinical texts with different dataset sizes and difficulties. We also evaluate several baselines based on BERT and ELMo and find that the BERT model pre-trained on PubMed abstracts and MIMIC-III clinical notes achieves the best results. We make the datasets, pre-trained models, and codes publicly available at https://github.com/ncbi-nlp/BLUE_Benchmark.
With the growing amount of biomedical information available in textual form, there have been significant advances in the development of pre-training language representations that can be applied to a range of different tasks in the biomedical domain, such as pre-trained word embeddings, sentence embeddings, and contextual representations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . In the general domain, we have recently observed that the General Language Understanding Evaluation (GLUE) benchmark BIBREF5 has been successfully promoting the development of language representations of general purpose BIBREF2 , BIBREF6 , BIBREF7 . To the best of our knowledge, however, there is no publicly available benchmarking in the biomedicine domain. To facilitate research on language representations in the biomedicine domain, we present the Biomedical Language Understanding Evaluation (BLUE) benchmark, which consists of five different biomedicine text-mining tasks with ten corpora. Here, we rely on preexisting datasets because they have been widely used by the BioNLP community as shared tasks BIBREF8 . These tasks cover a diverse range of text genres (biomedical literature and clinical notes), dataset sizes, and degrees of difficulty and, more importantly, highlight common biomedicine text-mining challenges. We expect that the models that perform better on all or most tasks in BLUE will address other biomedicine tasks more robustly. To better understand the challenge posed by BLUE, we conduct experiments with two baselines: One makes use of the BERT model BIBREF7 and one makes use of ELMo BIBREF2 . Both are state-of-the-art language representation models and demonstrate promising results in NLP tasks of general purpose. We find that the BERT model pre-trained on PubMed abstracts BIBREF9 and MIMIC-III clinical notes BIBREF10 achieves the best results, and is significantly superior to other models in the clinical domain. This demonstrates the importance of pre-training among different text genres. In summary, we offer: (i) five tasks with ten biomedical and clinical text-mining corpora with different sizes and levels of difficulty, (ii) codes for data construction and model evaluation for fair comparisons, (iii) pretrained BERT models on PubMed abstracts and MIMIC-III, and (iv) baseline results.
733
How is annotation projection done when languages have different word order?
Word alignments are generated for parallel text, and aligned words are assumed to also share AMR node alignments.
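This projection step can be sketched as composing two alignment maps (toy indices below; a real pipeline would obtain the word alignments from a statistical aligner such as GIZA++ or fast_align), so that a target word inherits the AMR nodes of the English word it is aligned to, regardless of word order:

```python
def project_amr_alignments(en_to_amr, en_to_tgt):
    """Compose English-word -> AMR-node alignments with
    English-word -> target-word alignments, so target words inherit
    the AMR nodes of the English words they align to."""
    tgt_to_amr = {}
    for en_idx, amr_node in en_to_amr.items():
        for tgt_idx in en_to_tgt.get(en_idx, []):
            tgt_to_amr.setdefault(tgt_idx, set()).add(amr_node)
    return tgt_to_amr

# "The boy wants to go" -> "Il ragazzo vuole andare" (toy alignment)
en_to_amr = {1: "boy", 2: "want-01", 4: "go-02"}
en_to_tgt = {0: [0], 1: [1], 2: [2], 3: [], 4: [3]}  # order may differ
print(project_amr_alignments(en_to_amr, en_to_tgt))
```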
Natural Language Processing (NLP) systems often make use of machine learning techniques that are unfamiliar to end-users who are interested in analyzing clinical records. Although NLP has been widely used in extracting information from clinical text, current systems generally do not support model revision based on feedback from domain experts. We present a prototype tool that allows end users to visualize and review the outputs of an NLP system that extracts binary variables from clinical text. Our tool combines multiple visualizations to help the users understand these results and make any necessary corrections, thus forming a feedback loop and helping improve the accuracy of the NLP models. We have tested our prototype in a formative think-aloud user study with clinicians and researchers involved in colonoscopy research. Results from semi-structured interviews and a System Usability Scale (SUS) analysis show that the users are able to quickly start refining NLP models, despite having very little or no experience with machine learning. Observations from these sessions suggest revisions to the interface to better support review workflow and interpretation of results.
Electronic Health Records (EHRs) are organized collections of information about individual patients. They are designed such that they can be shared across different settings for providing health care services. The Institute of Medicine committee on improving the patient record has recognized the importance of using EHRs to inform decision support systems and support data-driven quality measures BIBREF0. One of the biggest challenges in achieving this goal is the difficulty of extracting information from large quantities of EHR data stored as unstructured free text. Clinicians often make use of narratives and first-person stories to document interactions, findings and analyses in patient cases BIBREF1. As a result, finding information in these volumes of health care records typically requires the use of NLP techniques to automate the extraction process. There has been a long history of research in the application of NLP methods in the clinical domain BIBREF2. Researchers have developed models for automatically detecting outbreaks of diseases such as influenza BIBREF3, identifying adverse drug reactions BIBREF4, BIBREF5, BIBREF6, and measuring the quality of colonoscopy procedures BIBREF7, among others. Due to the complexity of clinical text, the accuracy of these techniques may vary BIBREF8. Current tools also lack provision for end-users to inspect NLP outcomes and make corrections that might improve these results. Due to these factors, Chapman et al. BIBREF2 have identified “lack of user-centered development" as one of the barriers to NLP adoption in the clinical domain. There is a need to focus on the development of NLP systems that are not only generalizable for use in different tasks but are also usable without excessive dependence on NLP developers. In this paper, we have explored the design of user interfaces for use by end users (clinicians and clinical researchers) to support the review and annotation of clinical text using natural language processing. We have developed an interactive web-based tool that facilitates both the review of binary variables extracted from clinical records, and the provision of feedback that can be used to improve the accuracy of NLP models. Our goal is to close the natural language processing gap by providing clinical researchers with highly usable tools that will facilitate the process of reviewing NLP output, identifying errors in model prediction, and providing feedback that can be used to retrain or extend models to make them more effective.
734
What's the precision of the system?
0.8320 on semantic typing, 0.7194 on entity matching
Abstract Meaning Representation (AMR) annotation efforts have mostly focused on English. In order to train parsers on other languages, we propose a method based on annotation projection, which involves exploiting annotations in a source language and a parallel corpus of the source language and a target language. Using English as the source language, we show promising results for Italian, Spanish, German and Chinese as target languages. Besides evaluating the target parsers on non-gold datasets, we further propose an evaluation method that exploits the English gold annotations and does not require access to gold annotations for the target languages. This is achieved by inverting the projection process: a new English parser is learned from the target language parser and evaluated on the existing English gold standard.
Abstract Meaning Representation (AMR) parsing is the process of converting natural language sentences into their corresponding AMR representations BIBREF0. An AMR is a graph with nodes representing the concepts of the sentence and edges representing the semantic relations between them. Most available AMR datasets large enough to train statistical models consist of pairs of English sentences and AMR graphs. The cross-lingual properties of AMR have been the subject of preliminary discussions. The AMR guidelines state that AMR is not an interlingua BIBREF0, and bojar2014comparing categorizes different kinds of divergences in the annotation between English AMRs and Czech AMRs. xue2014not show that structurally aligning English AMRs with Czech and Chinese AMRs is not always possible, but that refined annotation guidelines suffice to resolve some of these cases. We extend this line of research by exploring whether divergences among languages can be overcome, i.e., we investigate whether it is possible to maintain the AMR annotated for English as a semantic representation for sentences written in other languages, as in Figure 1. We implement AMR parsers for Italian, Spanish, German and Chinese using annotation projection, where existing annotations are projected from a source language (English) to a target language through a parallel corpus BIBREF1, BIBREF2, BIBREF3, BIBREF4. By evaluating the parsers and manually analyzing their output, we show that the parsers are able to recover the AMR structures even when there exist structural differences between the languages, i.e., although AMR is not an interlingua it can act as one. This method also provides a quick way to prototype multilingual AMR parsers, assuming that Part-of-speech (POS) taggers, Named Entity Recognition (NER) taggers and dependency parsers are available for the target languages. We also propose an alternative approach, where Machine Translation (MT) is used to translate the input sentences into English so that an available English AMR parser can be employed. This method is an even quicker solution which only requires translation models between the target languages and English. Due to the lack of a gold standard in the target languages, we exploit the English data to evaluate the parsers for the target languages. (Henceforth, we will use the term target parser to indicate a parser for a target language.) We achieve this by first learning the target parser from the gold standard English parser, and then inverting this process to learn a new English parser from the target parser. We then evaluate the resulting English parser against the gold standard. We call this “full-cycle” evaluation. Similarly to evangcross, we also directly evaluate the target parser on “silver” data, obtained by parsing the English side of a parallel corpus. In order to assess the reliability of these evaluation methods, we collected gold standard datasets for Italian, Spanish, German and Chinese by acquiring professional translations of the AMR gold standard data to these languages. We hypothesize that the full-cycle score can be used as a more reliable proxy than the silver score for evaluating the target parser. We provide evidence for this claim by comparing the three evaluation procedures (silver, full-cycle, and gold) across languages and parsers. Our main contributions are:
736
Which of the two ensembles yields the best performance?
Answer with content missing: (Table 2) CONCAT ensemble
Many businesses and consumers are extending the capabilities of voice-based services such as Amazon Alexa, Google Home, Microsoft Cortana, and Apple Siri to create custom voice experiences (also known as skills). As the number of these experiences increases, a key problem is the discovery of skills that can be used to address a user's request. In this paper, we focus on conversational skill discovery and present a conversational agent which engages in a dialog with users to help them find the skills that fulfill their needs. To this end, we start with a rule-based agent and improve it by using reinforcement learning. In this way, we enable the agent to adapt to different user attributes and conversational styles as it interacts with users. We evaluate our approach in a real production setting by deploying the agent to interact with real users, and show the effectiveness of the conversational agent in helping users find the skills that serve their request.
Modern speech-based assistants, such as Amazon Alexa, Google Home, Microsoft Cortana, and Apple Siri, enable users to complete daily tasks such as shopping, setting reminders, and playing games using voice commands. Such human-like interfaces create a rich experience for users by enabling them to complete many tasks hands- and eyes-free in a conversational manner. Furthermore, these services offer tools to enable developers and customers to create custom voice experiences (skills) and as a result extend the capabilities of the assistant. Amazon's Alexa Skills Kit BIBREF0, Google's Actions and Microsoft's Cortana Skills Kit are examples of such tools. As the number of skills (with potentially overlapping functionality) increases, it becomes more difficult for end users to find the skills that can address their request. To mitigate the skill discovery problem, recently researchers have proposed solutions for personalized domain selection and continuous domain adaptation in speech-based assistants BIBREF1, BIBREF2. Although such solutions help users find skills, in scenarios such as searching for a game where many different skills exist and user's preferences change, routing the user to a particular experience would not be satisfactory. In such cases, the assistant should initiate a conversation with the user, making recommendations, asking for preferences, and allowing the user to browse through different options. Similar to other search problems, personalization is important for conversational skill discovery and can be achieved at two levels: 1) personalization of skill recommendations, and 2) personalization of the interaction. Users have evolving attributes (e.g., first-time vs returning user) and different conversational styles and preferences (e.g., brief vs verbose communication) which affect how they respond to what the agent is proposing and its recommendations. By personalizing the interaction according to user attributes, conversational styles and preferences, the speech-based assistant can help speed up the conversation process BIBREF3 and increase user satisfaction. However, existing works are limited with respect to considering user's evolving attributes and diverse multi-aspect preferences BIBREF4, such as preferences with respect to how the conversational agent interacts with them. In this paper, we focus on conversational discovery of skills to guide customers from an intent to a specific skill or set of skills that can serve their request. To this end, we start with a rule-based agent and improve it by using reinforcement learning (RL), enabling the agent to adapt to different conversational styles as it interacts with users. In summary, the contributions of this paper are as follows: 1) We introduce the problem of conversational skill discovery for large-scale virtual assistants. 2) We describe a solution which enables the assistant to adapt to user's attributes (e.g., first-time user vs returning user) and conversational styles (e.g., brief vs. verbose). 3) We conduct experiments in a real production setting by deploying the agent to interact with real users in large scale, showing that the personalized policy learned using RL significantly outperforms a one-fits-all rule-based agent in terms of success rate (measured in terms of number of dialogs which result in launching a skill) with significantly shorter dialogs.
739
How was quality control performed so that the text is noisy but the annotations are accurate?
The authors believe that the Wikilinks corpus contains ground truth annotations while being noisy. They discard mentions that cannot have ground-truth verified by comparison with Wikipedia.
The general trend in NLP is towards increasing model capacity and performance via deeper neural networks. However, simply stacking more layers of the popular Transformer architecture for machine translation results in poor convergence and high computational overhead. Our empirical analysis suggests that convergence is poor due to gradient vanishing caused by the interaction between residual connections and layer normalization. We propose depth-scaled initialization (DS-Init), which decreases parameter variance at the initialization stage, and reduces output variance of residual connections so as to ease gradient back-propagation through normalization layers. To address computational cost, we propose a merged attention sublayer (MAtt) which combines a simplified average-based self-attention sublayer and the encoder-decoder attention sublayer on the decoder side. Results on WMT and IWSLT translation tasks with five translation directions show that deep Transformers with DS-Init and MAtt can substantially outperform their base counterpart in terms of BLEU (+1.1 BLEU on average for 12-layer models), while matching the decoding speed of the baseline model thanks to the efficiency improvements of MAtt.
The capability of deep neural models of handling complex dependencies has benefited various artificial intelligence tasks, such as image recognition where test error was reduced by scaling VGG nets BIBREF0 up to hundreds of convolutional layers BIBREF1. In NLP, deep self-attention networks have enabled large-scale pretrained language models such as BERT BIBREF2 and GPT BIBREF3 to boost state-of-the-art (SOTA) performance on downstream applications. By contrast, though neural machine translation (NMT) gained encouraging improvement when shifting from a shallow architecture BIBREF4 to deeper ones BIBREF5, BIBREF6, BIBREF7, BIBREF8, the Transformer BIBREF9, a currently SOTA architecture, achieves best results with merely 6 encoder and decoder layers, and no gains were reported by BIBREF9 from further increasing its depth on standard datasets. We start by analysing why the Transformer does not scale well to larger model depth. We find that the architecture suffers from gradient vanishing as shown in Figure FIGREF2, leading to poor convergence. An in-depth analysis reveals that the Transformer is not norm-preserving due to the involvement of and the interaction between residual connection (RC) BIBREF1 and layer normalization (LN) BIBREF10. To address this issue, we propose depth-scaled initialization (DS-Init) to improve norm preservation. We ascribe the gradient vanishing to the large output variance of RC and resort to strategies that could reduce it without model structure adjustment. Concretely, DS-Init scales down the variance of parameters in the $l$-th layer with a discount factor of $\frac{1}{\sqrt{l}}$ at the initialization stage alone, where $l$ denotes the layer depth starting from 1. The intuition is that parameters with small variance in upper layers would narrow the output variance of corresponding RCs, improving norm preservation as shown by the dashed lines in Figure FIGREF2. In this way, DS-Init enables the convergence of deep Transformer models to satisfactory local optima. Another bottleneck for deep Transformers is the increase in computational cost for both training and decoding. To combat this, we propose a merged attention network (MAtt). MAtt simplifies the decoder by replacing the separate self-attention and encoder-decoder attention sublayers with a new sublayer that combines an efficient variant of average-based self-attention (AAN) BIBREF11 and the encoder-decoder attention. We simplify the AAN by reducing the number of linear transformations, reducing both the number of model parameters and computational cost. The merged sublayer benefits from parallel calculation of (average-based) self-attention and encoder-decoder attention, and reduces the depth of each decoder block. We conduct extensive experiments on WMT and IWSLT translation tasks, covering five translation tasks with varying data conditions and translation directions. Our results show that deep Transformers with DS-Init and MAtt can substantially outperform their base counterpart in terms of BLEU (+1.1 BLEU on average for 12-layer models), while matching the decoding speed of the baseline model thanks to the efficiency improvements of MAtt. Our contributions are summarized as follows: We analyze the vanishing gradient issue in the Transformer, and identify the interaction of residual connections and layer normalization as its source. To address this problem, we introduce depth-scaled initialization (DS-Init). 
To reduce the computational cost of training deep Transformers, we introduce a merged attention model (MAtt). MAtt combines a simplified average-attention model and the encoder-decoder attention into a single sublayer, allowing for parallel computation. We conduct extensive experiments and verify that deep Transformers with DS-Init and MAtt improve translation quality while preserving decoding efficiency.
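DS-Init as described amounts to a one-line change at initialization time. The sketch below assumes a Gaussian base initializer (an assumption; the paper's base scheme may differ) and discounts the parameter variance of layer $l$ by $1/\sqrt{l}$, i.e. the standard deviation by $l^{-1/4}$:

```python
import numpy as np

def ds_init(shape, layer_depth, base_std=0.02, seed=0):
    """Depth-scaled initialization sketch: target variance is
    base_std**2 / sqrt(l), so the std is discounted by l**(-1/4),
    shrinking residual-branch output variance in upper layers."""
    rng = np.random.default_rng(seed)
    std = base_std / layer_depth ** 0.25
    return rng.normal(0.0, std, size=shape)

for l in (1, 6, 12):
    w = ds_init((512, 512), layer_depth=l)
    print(f"layer {l:2d}: empirical var = {w.var():.2e}  "
          f"(target {0.02**2 / l**0.5:.2e})")
```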
740
Is it a neural model? How is it trained?
No, it is a probabilistic model trained by finding feature weights through gradient ascent
We address the task of Named Entity Disambiguation (NED) for noisy text. We present WikilinksNED, a large-scale NED dataset of text fragments from the web, which is significantly noisier and more challenging than existing news-based datasets. To capture the limited and noisy local context surrounding each mention, we design a neural model and train it with a novel method for sampling informative negative examples. We also describe a new way of initializing word and entity embeddings that significantly improves performance. Our model significantly outperforms existing state-of-the-art methods on WikilinksNED while achieving comparable performance on a smaller newswire dataset.
Named Entity Disambiguation (NED) is the task of linking mentions of entities in text to a given knowledge base, such as Freebase or Wikipedia. NED is a key component in Entity Linking (EL) systems, focusing on the disambiguation task itself, independently from the tasks of Named Entity Recognition (detecting mention bounds) and Candidate Generation (retrieving the set of potential candidate entities). NED has been recognized as an important component in NLP tasks such as semantic parsing BIBREF0. Current research on NED is mostly driven by a number of standard datasets, such as CoNLL-YAGO BIBREF1, TAC KBP BIBREF2 and ACE BIBREF3. These datasets are based on news corpora and Wikipedia, which are naturally coherent, well-structured, and rich in context. Global disambiguation models BIBREF4, BIBREF5, BIBREF6 leverage this coherency by jointly disambiguating all the mentions in a single document. However, domains such as web-page fragments, social media, or search queries are often short, noisy, and less coherent; such domains lack the necessary contextual information for global methods to pay off, and present a more challenging setting in general. In this work, we investigate the task of NED in a setting where only local and noisy context is available. In particular, we create a dataset of 3.2M short text fragments extracted from web pages, each containing a mention of a named entity. Our dataset is far larger than previously collected datasets, and contains 18K unique mentions linking to over 100K unique entities. We have empirically found it to be noisier and more challenging than existing datasets. For example: “I had no choice but to experiment with other indoor games. I was born in Atlantic City so the obvious next choice was Monopoly. I played until I became a successful Captain of Industry.” This short fragment is considerably less structured and has a more personal tone than a typical news article. It references the entity Monopoly_(Game); however, expressions such as “experiment” and “Industry” can distract a naive disambiguation model because they are also related to the much more common entity Monopoly (economics term). Some sense of local semantics must be considered in order to separate the useful signals (e.g. “indoor games”, “played”) from the noisy ones. We therefore propose a new model that leverages local contextual information to disambiguate entities. Our neural approach (based on RNNs with attention) leverages the vast amount of training data in WikilinksNED to learn representations for entity and context, allowing it to extract signals from noisy and unexpected context patterns. While convolutional neural networks BIBREF7, BIBREF8 and probabilistic attention BIBREF9 have been applied to the task, this is the first model to use RNNs and a neural attention model for NED. RNNs account for the sequential nature of textual context while the attention model is applied to reduce the impact of noise in the text. Our experiments show that our model significantly outperforms existing state-of-the-art NED algorithms on WikilinksNED, suggesting that RNNs with attention are able to model short and noisy context better than current approaches. In addition, we evaluate our algorithm on CoNLL-YAGO BIBREF1, a dataset of annotated news articles. We use a simple domain adaptation technique since CoNLL-YAGO lacks a large enough training set for our model, and achieve comparable results to other state-of-the-art methods.
These experiments highlight the difference between the two datasets, indicating that our NED benchmark is substantially more challenging. Code and data used for our experiments can be found at https://github.com/yotam-happy/NEDforNoisyText
743
How do they evaluate the quality of the generated output?
Through human evaluation, in which evaluators are asked to rate the generated output on a Likert scale.
The image description task has invariably been examined in a static manner, with qualitative presumptions held to be universally applicable, regardless of the scope or target of the description. In practice, however, different viewers may pay attention to different aspects of the image, and yield different descriptions or interpretations under various contexts. Such diversity in perspectives is difficult to derive with conventional image description techniques. In this paper, we propose a customized image narrative generation task, in which the users are interactively engaged in the generation process by providing answers to the questions. We further attempt to learn the user's interest by repeating such interactive stages, and to automatically reflect the interest in descriptions for new images. Experimental results demonstrate that our model can generate a variety of descriptions from a single image that cover a wider range of topics than conventional models, while being customizable to the target user of the interaction.
Recent advances in the visual language field enabled by deep learning techniques have succeeded in bridging the gap between vision and language in a variety of tasks, ranging from describing the image BIBREF0, BIBREF1, BIBREF2, BIBREF3 to answering questions about the image BIBREF4, BIBREF5. Such achievements were possible under the premise that there exists a set of ground truth references that are universally applicable regardless of the target, scope, or context. In real-world settings, however, image descriptions are prone to an infinitely wide range of variabilities, as different viewers may pay attention to different aspects of the image in different contexts, resulting in a variety of descriptions or interpretations. Due to its subjective nature, such diversity is difficult to obtain with conventional image description techniques. In this paper, we propose a customized image narrative generation task, in which we attempt to actively engage the users in the description generation process by asking questions and directly obtaining their answers, thus learning and reflecting their interest in the description. We use the term image narrative to differentiate our image description from the conventional one, in which the objective is fixed as depicting factual aspects of global elements. In contrast, image narratives in our model cover a much wider range of topics, including subjective, local, or inferential elements. We first describe a model for automatic image narrative generation from a single image without user interaction. We develop a self Q&A model to take advantage of the wide array of content available in the visual question answering (VQA) task, and demonstrate that our model can generate image descriptions that are richer in content than previous models. We then apply the model to an interactive environment by directly obtaining the answers to the questions from the users. Through a wide range of experiments, we demonstrate that such interaction enables us not only to customize the image description by reflecting the user's choice in the current image of interest, but also to automatically apply the learned preference to new images (Figure 1).
744
What are the four forums the data comes from?
Darkode, Hack Forums, Blackhat and Nulled.
We propose a novel text generation task, namely Curiosity-driven Question Generation. We start from the observation that the Question Generation task has traditionally been considered the dual problem of Question Answering, hence tackling the problem of generating a question given the text that contains its answer. Such questions can be used to evaluate machine reading comprehension. However, in real life, and especially in conversational settings, humans tend to ask questions with the goal of enriching their knowledge and/or clarifying aspects of previously gathered information. We refer to these inquisitive questions as Curiosity-driven: these questions are generated with the goal of obtaining new information (the answer) which is not present in the input text. In this work, we experiment on this new task using a conversational Question Answering (QA) dataset; further, since the majority of QA datasets are not built in a conversational manner, we describe a methodology to derive data for this novel task from non-conversational QA data. We investigate several automated metrics to measure the different properties of Curious Questions, and experiment with different approaches to the Curiosity-driven Question Generation task, including model pre-training and reinforcement learning. Finally, we report a qualitative evaluation of the generated outputs.
The growing interest in Machine Reading Comprehension (MRC) has sparked significant research efforts on Question Generation (QG), the dual task to Question Answering (QA). In QA, the objective is to produce an adequate response given a query and a text; conversely, for QG, the task is generally defined as generating a relevant question given a source text, focusing on a specific answer span. To our knowledge, all works tackling QG have thus far focused exclusively on generating relevant questions which can be answered given the source text: for instance, given AAAI was founded in 1979 as input, a question likely to be automatically generated would be When was AAAI founded?, where the answer 1979 is a span of the input. Such questions are useful to evaluate reading comprehension for both machines BIBREF0, BIBREF1 and humans BIBREF2. However, the human ability to ask questions goes well beyond evaluation: asking questions is essential in education BIBREF3 and has been proven to be fundamental for children's cognitive development BIBREF4. Curiosity is baked into the human experience. It allows one to extend one's comprehension and knowledge by asking questions that, while being relevant to context, are not directly answerable by it, thus being inquisitive and curious. The significance of such questions is two-fold: first, they allow for gathering novel relevant information, e.g. a student asking for clarification; second, they are also tightly linked to one's understanding of the context, e.g. a teacher testing a student's knowledge by asking questions whose answers require a deeper understanding of the context and more complex reasoning. From an applicative point of view, we deem the ability to generate curious, inquisitive questions highly beneficial for a broad range of scenarios: i) in the context of human-machine interaction (e.g. robots, chat-bots, educational tools), where the communication with the users could be more natural; ii) during the learning process itself, which could be partially driven in a self-supervised manner, reminiscent of how humans learn by exploring and interacting with their environment. To our knowledge, this is the first paper attempting to tackle Curiosity-driven neural question generation. The contributions of this paper can be summarized as follows: we propose a new natural language generation task: curiosity-driven question generation; we propose a method to derive data for the task from popular non-conversational QA datasets; we experiment using language model pre-training and reinforcement learning, on two different datasets; we report a human evaluation analysis to assess both the pertinence of the automatic metrics used and the efficacy of the proposed dataset-creation method.
749
How are sentence embeddings incorporated into the speech recognition system?
BERT generates sentence embeddings that represent words in context. These sentence embeddings are merged into a single conversational-context vector, which is used to compute a gated embedding and is then combined with the decoder output h to produce the gated activations for the next hidden layer.
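As a rough illustration of the gating described in this answer, a minimal sketch follows. This is not the authors' implementation; the module name, projection layers, and dimensions are assumptions.

```python
# Minimal sketch of gated fusion of a conversational-context vector c with
# the decoder output h. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class GatedContextFusion(nn.Module):
    """Fuse context c and decoder state h via a learned sigmoid gate,
    yielding activations for the next hidden layer."""
    def __init__(self, ctx_dim: int, dec_dim: int, out_dim: int):
        super().__init__()
        self.gate = nn.Linear(ctx_dim + dec_dim, out_dim)  # produces the gate g
        self.proj_c = nn.Linear(ctx_dim, out_dim)          # projects context
        self.proj_h = nn.Linear(dec_dim, out_dim)          # projects decoder state

    def forward(self, c: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([c, h], dim=-1)))
        # The gate interpolates between context and decoder information.
        return g * torch.tanh(self.proj_c(c)) + (1 - g) * torch.tanh(self.proj_h(h))

fusion = GatedContextFusion(ctx_dim=768, dec_dim=512, out_dim=512)
c = torch.randn(4, 768)  # e.g. a BERT-derived conversational-context vector
h = torch.randn(4, 512)  # decoder output for the current step
out = fusion(c, h)       # gated activations for the next hidden layer
```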
Encoder-decoder models typically employ only words that are frequently used in the training corpus, in order to reduce computational cost and exclude noise. However, this vocabulary set may still include words that interfere with learning in encoder-decoder models. This paper proposes a method for selecting more suitable words for learning encoders by utilizing not only frequency, but also co-occurrence information, which we capture using the HITS algorithm. We apply our proposed method to two tasks: machine translation and grammatical error correction. For Japanese-to-English translation, this method achieves a BLEU score that is 0.56 points higher than that of a baseline. It also outperforms the baseline method for English grammatical error correction, with an F0.5-measure that is 1.48 points higher.
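To make the HITS-based selection idea concrete, here is a hedged sketch ranking words by authority score over a directed co-occurrence graph. The window size, graph construction, and iteration count are illustrative assumptions, not the paper's exact setup.

```python
# Rough illustration: rank words with HITS over a word co-occurrence graph,
# then keep the top-scoring words as the vocabulary.
import numpy as np

def hits_vocabulary(corpus, vocab_size, window=2, iters=50):
    """Rank words by HITS authority score on a directed co-occurrence graph."""
    words = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(words)}
    A = np.zeros((len(words), len(words)))
    for sent in corpus:
        for i, w in enumerate(sent):
            for v in sent[i + 1 : i + 1 + window]:  # left word "links to" right word
                A[idx[w], idx[v]] += 1.0
    hubs = np.ones(len(words))
    for _ in range(iters):                          # standard HITS power iteration
        auth = A.T @ hubs
        auth /= np.linalg.norm(auth) or 1.0
        hubs = A @ auth
        hubs /= np.linalg.norm(hubs) or 1.0
    ranked = sorted(words, key=lambda w: -auth[idx[w]])
    return ranked[:vocab_size]

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
print(hits_vocabulary(corpus, vocab_size=4))
```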
Encoder-decoder models BIBREF0 are effective in tasks such as machine translation ( BIBREF1 , BIBREF1 ; BIBREF2 , BIBREF2 ) and grammatical error correction BIBREF3 . Vocabulary in encoder-decoder models is generally selected from the training corpus in descending order of frequency, and low-frequency words are replaced with an unknown word token <unk>. These so-called out-of-vocabulary (OOV) words are replaced with <unk> so as not to increase the decoder's complexity and to reduce noise. However, naive frequency-based OOV replacement may lead to loss of information that is necessary for modeling context in the encoder. This study hypothesizes that vocabulary constructed using unigram frequency includes words that interfere with learning in encoder-decoder models. That is, we presume that vocabulary selection that considers co-occurrence information selects fewer noisy words, enabling more robust encoders in encoder-decoder models. We apply the hyperlink-induced topic search (HITS) algorithm to extract the co-occurrence relations between words. Intuitively, the removal of words that rarely co-occur with others yields better encoder models than ones that include noisy low-frequency words. This study examines two tasks, machine translation (MT) and grammatical error correction (GEC), to confirm the effect of decreasing noisy words, with a focus on the vocabulary of the encoder side, because the vocabulary on the decoder side is relatively limited. In a Japanese-to-English MT experiment, our method achieves a BLEU score that is 0.56 points higher than that of the frequency-based method. Further, it outperforms the frequency-based method for English GEC, with an $\mathrm {F_{0.5}}$ -measure that is 1.48 points higher. The main contributions of this study are as follows:
750
How different is the dataset size of source and target?
the training dataset is large while the target dataset is usually much smaller
We present a novel conversational-context aware end-to-end speech recognizer based on a gated neural network that incorporates conversational-context/word/speech embeddings. Unlike conventional speech recognition models, our model learns longer conversational-context information that spans across sentences and is consequently better at recognizing long conversations. Specifically, we propose to use the text-based external word and/or sentence embeddings (i.e., fastText, BERT) within an end-to-end framework, yielding a significant improvement in word error rate with better conversational-context representation. We evaluated the models on the Switchboard conversational speech corpus and show that our model outperforms standard end-to-end speech recognition models.
In a long conversation, there is a tendency for semantically related words or phrases to recur across sentences, i.e., there is topical coherence. Existing speech recognition systems are built at the individual, isolated utterance level in order to make building systems computationally feasible. However, this may lose important conversational context information. There have been many studies that have attempted to inject longer context information BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 ; all of these models were developed on text data for the language modeling task. Recent work has attempted to use conversational-context information within an end-to-end speech recognition framework BIBREF6 , BIBREF7 , BIBREF8 . The new end-to-end speech recognition approach BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 integrates all available information within a single neural network model, which makes fusing conversational-context information possible. However, these models are limited to encoding only one preceding utterance and learn from a few hundred hours of annotated speech corpus, leading to minimal improvements. Meanwhile, neural language models, such as fastText BIBREF17 , BIBREF18 , BIBREF19 , ELMo BIBREF20 , OpenAI GPT BIBREF21 , and Bidirectional Encoder Representations from Transformers (BERT) BIBREF22 , which encode words and sentences in fixed-length dense vectors (embeddings), have achieved impressive results on various natural language processing tasks. Such general word/sentence embeddings learned on large text corpora (i.e., Wikipedia) have been used extensively and plugged into a variety of downstream tasks, such as question-answering and natural language inference BIBREF22 , BIBREF20 , BIBREF23 , to drastically improve their performance in the form of transfer learning. In this paper, we create a conversational-context aware end-to-end speech recognizer capable of incorporating conversational context to better process long conversations. Specifically, we propose to exploit external word and/or sentence embeddings trained on massive amounts of text resources (i.e., fastText, BERT) so that the model can learn better conversational-context representations. So far, the use of such pre-trained embeddings has found limited success in the speech recognition task. We also add a gating mechanism to the decoder network that can integrate all the available embeddings (word, speech, conversational-context) efficiently with increased representational power using multiplicative interactions. Additionally, we explore a way to train our speech recognition model even with text-only data in the form of pre-training and joint-training approaches. We evaluate our model on the Switchboard conversational speech corpus BIBREF24 , BIBREF25 , and show that our model outperforms the sentence-level end-to-end speech recognition model. The main contributions of our work are as follows:
753
What type of documents are supported by the annotation platform?
Variety of formats supported (PDF, Word...), user can define content elements of document
Open book question answering is a type of natural language based QA (NLQA) where questions are expected to be answered with respect to a given set of open book facts, and common knowledge about a topic. Recently a challenge involving such QA, OpenBookQA, has been proposed. Unlike most other NLQA tasks that focus on linguistic understanding, OpenBookQA requires deeper reasoning involving linguistic understanding as well as reasoning with common knowledge. In this paper we address QA with respect to the OpenBookQA dataset and combine state of the art language models with abductive information retrieval (IR), information gain based re-ranking, passage selection and weighted scoring to achieve 72.0% accuracy, an 11.6% improvement over the current state of the art.
Natural language based question answering (NLQA) not only involves linguistic understanding, but often involves reasoning with various kinds of knowledge. In recent years, many NLQA datasets and challenges have been proposed, for example, SQuAD BIBREF0 , TriviaQA BIBREF1 and MultiRC BIBREF2 , and each of them has its own focus, sometimes by design and other times by virtue of its development methodology. Many of these datasets and challenges try to mimic human question answering settings. One such setting is open book question answering where humans are asked to answer questions in a setup where they can refer to books and other materials related to their questions. In such a setting, the focus is not on memorization but, as mentioned in BIBREF3 , on “deeper understanding of the materials and its application to new situations BIBREF4 , BIBREF5 .” In BIBREF3 , they propose the OpenBookQA dataset mimicking this setting. The OpenBookQA dataset has a collection of questions and four answer choices for each question. The dataset comes with 1326 facts representing an open book. It is expected that answering each question requires at least one of these facts. In addition it requires common knowledge. To obtain relevant common knowledge we use an IR system BIBREF6 front end to a set of knowledge rich sentences. Compared to the reading comprehension based QA (RCQA) setup, where the answer to a question is usually found in the given small paragraph, in the OpenBookQA setup the open book part is much larger (than a small paragraph) and is not complete, as additional common knowledge may be required. This leads to multiple challenges. First, finding the relevant facts in an open book (which is much bigger than the small paragraphs in the RCQA setting) is a challenge. Then, finding the relevant common knowledge using the IR front end is an even bigger challenge, especially since standard IR approaches can be misled by distractions. For example, Table 1 shows a sample question from the OpenBookQA dataset. We can see the retrieved missing knowledge contains words which overlap with both answer options A and B. Introduction of such knowledge sentences increases confusion for the question answering model. Finally, reasoning involving both facts from the open book and common knowledge leads to multi-hop reasoning with respect to natural language text, which is also a challenge. We address the first two challenges and make the following contributions in this paper: (a) We improve on knowledge extraction from the OpenBook present in the dataset. We use semantic textual similarity models that are trained with different datasets for this task; (b) We propose natural language abduction to generate queries for retrieving missing knowledge; (c) We show how to use Information Gain based Re-ranking to reduce distractions and remove redundant information; (d) We provide an analysis of the dataset and the limitations of the BERT Large model for such a question answering task. The current best model on the leaderboard of OpenBookQA is the BERT Large model BIBREF7 . It has an accuracy of 60.4% and does not use external knowledge. Our knowledge selection and retrieval techniques achieve an accuracy of 72%, a margin of 11.6% over the current state of the art. We study how the accuracy of the BERT Large model varies with the number of knowledge facts extracted from the OpenBook and through IR.
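The paper's exact Information Gain based Re-ranking formulation is not reproduced here; as a stand-in, the following sketch greedily keeps retrieved sentences that contribute new content words, which captures the redundancy-removal intent but is only an illustrative approximation.

```python
# Hedged stand-in for information-gain-style re-ranking: greedily keep
# retrieved sentences that add new tokens, penalizing redundant ones.
def rerank_by_novelty(sentences, k=3):
    covered, selected = set(), []
    # Consider longer (more content-bearing) sentences first.
    scored = sorted(sentences, key=lambda s: -len(set(s.lower().split())))
    for sent in scored:
        tokens = set(sent.lower().split())
        gain = len(tokens - covered)   # new information the sentence adds
        if gain > 0:
            selected.append((gain, sent))
            covered |= tokens
        if len(selected) == k:
            break
    return [s for _, s in sorted(selected, reverse=True)]

hits = ["a stove generates heat", "a stove generates heat for cooking",
        "heat is a form of energy"]
print(rerank_by_novelty(hits, k=2))
```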
756
What are the strong baselines you have?
optimize single task with no synthetic data
Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.
Semantic composition plays an important role in sentiment analysis of phrases and sentences. This includes detecting the scope and impact of negation in reversing a sentiment's polarity, as well as quantifying the influence of modifiers, such as degree adverbs and intensifiers, in rescaling the sentiment's intensity BIBREF0 . Recently, a trend emerged for tackling these challenges via deep learning models such as convolutional and recurrent neural networks, as observed e.g. on the SemEval-2016 Task for Sentiment Analysis in Twitter BIBREF1 . As these models become increasingly predictive, one also needs to make sure that they work as intended, in particular, their decisions should be made as transparent as possible. Some forms of transparency are readily obtained from the structure of the model, e.g. recursive nets BIBREF2 , where sentiment can be probed at each node of a parsing tree. Another type of analysis seeks to determine what input features were important for reaching the final top-layer prediction. Recent work in this direction has focused on bringing measures of feature importance to state-of-the-art models such as deep convolutional neural networks for vision BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , or to general deep neural networks for text BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . Some of these techniques are based on the model's local gradient information while other methods seek to redistribute the function's value on the input variables, typically by reverse propagation in the neural network graph BIBREF12 , BIBREF5 , BIBREF13 . We refer the reader to BIBREF14 for an overview on methods for understanding and interpreting deep neural network predictions. BIBREF5 proposed specific propagation rules for neural networks (LRP rules). These rules were shown to produce better explanations than e.g. gradient-based techniques BIBREF15 , and were also successfully transferred to neural networks for text data BIBREF16 . In this paper, we extend LRP with a rule that handles multiplicative interactions in the LSTM model, a particularly suitable model for modeling long-range interactions in texts such as those occurring in sentiment analysis. We then apply the extended LRP method to a bi-directional LSTM trained on a five-class sentiment prediction task. It allows us to produce reliable explanations of which words are responsible for attributing sentiment in individual texts, compared to the explanations obtained by using a gradient-based approach.
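A minimal sketch of the two ingredients discussed above, assuming the epsilon-stabilized LRP rule for linear layers and the described treatment of multiplicative connections (all relevance flows to the signal, none to the gate); this is an illustration of the rule, not the authors' full LSTM backward pass.

```python
# Sketch of LRP components: epsilon rule for a linear layer, plus the rule
# for multiplicative gate*signal connections as described in the text.
import numpy as np

def lrp_linear(x, w, z_out, R_out, eps=1e-3):
    """Redistribute relevance R_out of outputs z_out = w.T @ x onto inputs x."""
    z = w * x[:, None]                    # contributions z_ij = w_ij * x_i
    denom = z_out + eps * np.sign(z_out)  # stabilized denominator
    return (z / denom) @ R_out            # R_i = sum_j (z_ij / z_j) * R_j

def lrp_multiplicative(R_product):
    """z = gate * signal: the signal inherits all relevance, the gate none."""
    R_gate = np.zeros_like(R_product)
    R_signal = R_product.copy()
    return R_gate, R_signal

x = np.array([1.0, -0.5, 2.0])
w = np.random.randn(3, 2)
z_out = w.T @ x
print(lrp_linear(x, w, z_out, R_out=np.array([0.7, 0.3])))
```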
757
What are causal attribution networks?
networks where nodes represent causes and effects, and directed edges represent cause-effect relationships proposed by humans
Recent advancements in sequence-to-sequence neural network architectures have improved natural language understanding. When building a neural network-based Natural Language Understanding component, one main challenge is to collect enough training data. Generating a synthetic dataset is an inexpensive and quick way to collect data. Since this data often has less variety than real natural language, neural networks often have problems generalizing to unseen utterances during testing. In this work, we address this challenge by using multi-task learning. We train on out-of-domain real data alongside in-domain synthetic data to improve natural language understanding. We evaluate this approach in the domain of airline travel information with two synthetic datasets. As out-of-domain real data, we test two datasets based on the subtitles of movies and series. By using an attention-based encoder-decoder model, we were able to improve the F1-score over strong baselines from 80.76 % to 84.98 % on the smaller synthetic dataset.
One of the main challenges in building a Natural Language Understanding (NLU) component for a specific task is the human effort needed to encode the task's specific knowledge. In traditional NLU components, this was done by creating hand-written rules. In today's state-of-the-art NLU components, significant amounts of human effort have to be spent collecting the training data. For example, when building an NLU component for airplane travel information, there are many ways to express the situation that someone wants to book a flight from New York to Pittsburgh. In order to build a system, we need to have seen many of them in the training data. Although more and more data has been collected and datasets with this data have been published BIBREF0 , these datasets often consist of data from a different domain than the one needed for a particular NLU component. An inexpensive and quick way to collect data for a domain is to generate a synthetic dataset in which templates are filled with various values. A problem with such synthetic datasets is encoding enough natural language variety to be able to generalize to utterances unseen during training; doing this by hand requires an enormous amount of effort. In this work, we address this challenge by combining task-specific synthetic data and real data from another domain. The multi-task framework enables us to combine these two knowledge sources and therefore improve natural language understanding. In this work, the NLU component is based on an attention-based encoder-decoder model BIBREF1 . We evaluate the approach on the commonly used travel information task and use the subtitles of movies and series as the out-of-domain task.
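A minimal sketch of the data-mixing side of such a multi-task setup; the sampling probability and task names are assumptions, and the model consuming the batches is omitted.

```python
# Illustrative batch mixing: alternate between in-domain synthetic NLU data
# and out-of-domain real data so a shared model sees both sources.
import random

def mixed_batches(synthetic, out_of_domain, steps, p_synthetic=0.5, seed=0):
    """Yield (task, batch) pairs alternating between the two sources."""
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < p_synthetic:
            yield "nlu", rng.choice(synthetic)
        else:
            yield "subtitles", rng.choice(out_of_domain)

for task, batch in mixed_batches([["book a flight to Pittsburgh"]],
                                 [["a line from a movie subtitle"]], steps=3):
    print(task, batch)
```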
760
how did they ask if a tweet was racist?
if it includes negative utterances, negative generalizations and insults concerning ethnicity, nationality, religion and culture.
This research project aimed to overcome the challenge of analysing human language relationships, and to facilitate the grouping of languages and the formation of genealogical relationships between them by developing automated comparison techniques. Techniques were based on the phonetic representation of certain key words and concepts. Example word sets included numbers 1-10 (curated), a large database of numbers 1-10 and sheep counting numbers 1-10 (other sources), colours (curated), and basic words (curated). ::: To enable comparison within the sets, edit distance was calculated based on the Levenshtein distance metric. This metric is the minimum number of single-character edits (insertions, deletions, or substitutions) required to transform one string into the other. To explore which words exhibit more or less variation, which words are more preserved, and to examine how languages could be grouped based on linguistic distances within sets, several data analytics techniques were involved. Those included density evaluation, hierarchical clustering, silhouette, mean, standard deviation, and Bhattacharyya coefficient calculations. These techniques led to the development of a workflow which was later implemented by combining Unix shell scripts, a developed R package and SWI Prolog. This proved to be computationally efficient and permitted the fast exploration of large language sets and their analysis.
The need to uncover presumed underlying linguistic evolutionary principles and to analyse correlations between the world's languages motivated this research. For centuries people have been speculating about the origins of language, yet this subject is still obscure. Non-automated linguistic analysis of language relationships has been complicated and very time-consuming. Consequently, this research aims to apply a computational approach to compare human languages. It is based on the phonetic representation of certain key words and concepts. This comparison of word similarity aims to facilitate the grouping of languages and the analysis of the formation of genealogical relationships between languages. This report contains a thorough description of the proposed methods, developed techniques and discussion of the results. During this project several collections of words were gathered and examined, including colour words and numbers. The methods included edit distance, a phonetic substitution table, hierarchical clustering with a cut, and other analysis methods. They all aimed to provide insight through both technical data summaries and their visual representation.
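The Levenshtein metric described above admits a compact dynamic-programming implementation; the phonetic strings in the usage line are illustrative only.

```python
# Classic dynamic-programming Levenshtein distance: the minimum number of
# single-character insertions, deletions, or substitutions between strings.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# e.g. comparing two hypothetical phonetic renderings of a number word
print(levenshtein("wan", "uno"))   # -> 3
```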
763
How does the model compute the likelihood of executing to the correction semantic denotation?
By treating logical forms as a latent variable and training a discriminative log-linear model over logical form y given x.
Collaborative robotics requires effective communication between a robot and a human partner. This work proposes a set of interpretive principles for how a robotic arm can use pointing actions to communicate task information to people by extending existing models from the related literature. These principles are evaluated through studies where English-speaking human subjects view animations of simulated robots instructing pick-and-place tasks. The evaluation distinguishes two classes of pointing actions that arise in pick-and-place tasks: referential pointing (identifying objects) and locating pointing (identifying locations). The study indicates that human subjects show greater flexibility in interpreting the intent of referential pointing compared to locating pointing, which needs to be more deliberate. The results also demonstrate the effects of variation in the environment and task context on the interpretation of pointing. Our corpus, experiments and design principles advance models of context, common sense reasoning and communication in embodied communication.
Recent years have seen a rapid increase of robotic deployment, beyond traditional applications in cordoned-off workcells in factories, into new, more collaborative use-cases. For example, social robotics and service robotics have targeted scenarios like rehabilitation, where a robot operates in close proximity to a human. While industrial applications envision full autonomy, these collaborative scenarios involve interaction between robots and humans and require effective communication. For instance, a robot that is not able to reach an object may ask for a pick-and-place to be executed in the context of collaborative assembly. Or, in the context of a robotic assistant, a robot may ask for confirmation of a pick-and-place requested by a person. When the robot's form permits, researchers can design such interactions using principles informed by research on embodied face-to-face human–human communication. In particular, by realizing pointing gestures, an articulated robotic arm with a directional end-effector can exploit a fundamental ingredient of human communication BIBREF0. This has motivated roboticists to study simple pointing gestures that identify objects BIBREF1, BIBREF2, BIBREF3. This paper develops an empirically-grounded approach to robotic pointing that extends the range of physical settings, task contexts and communicative goals of robotic gestures. This is a step towards the richer and more diverse interpretations that human pointing exhibits BIBREF4. This work has two key contributions. First, we create a systematic dataset, involving over 7000 human judgments, where crowd workers describe their interpretation of animations of simulated robots instructing pick-and-place tasks. Planned comparisons allow us to compare pointing actions that identify objects (referential pointing) with those that identify locations (locating pointing). They also allow us to quantify the effect of accompanying speech, task constraints and scene complexity, as well as variation in the spatial content of the scene. This new resource documents important differences in the way pointing is interpreted in different cases. For example, referential pointing is typically robust to the exactness of the pointing gesture, whereas locating pointing is much more sensitive and requires more deliberate pointing to ensure a correct interpretation. The Experiment Design section explains the overall process of data collection, the power analysis for the preregistered protocol, and the content presented to subjects across conditions. The second contribution is a set of interpretive principles, inspired by the literature on vague communication, that summarize the findings about robot pointing. They suggest that pointing selects from a set of candidate interpretations determined by the type of information specified, the possibilities presented by the scene, and the options compatible with the current task. In particular, we propose that pointing picks out all candidates that are not significantly further from the pointing ray than the closest alternatives. Based on our empirical results, we present design principles that formalize the relevant notions of “available alternatives” and “significantly further away”, which can be used in future pointing robots. The Analysis and Design Principles sections explain and justify this approach.
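One way to operationalize the stated principle, under the assumption of a simple perpendicular-distance measure and a fixed tolerance factor (both are our illustrative choices, not the paper's formalization):

```python
# Select all candidates not significantly farther from the pointing ray
# than the closest one; `slack` is an assumed tolerance multiplier.
import numpy as np

def ray_distance(origin, direction, point):
    """Distance from `point` to the ray origin + t*direction, t >= 0."""
    d = direction / np.linalg.norm(direction)
    v = point - origin
    t = max(v @ d, 0.0)               # clamp: points behind the origin
    return np.linalg.norm(v - t * d)

def pointed_candidates(origin, direction, candidates, slack=1.3):
    dists = [ray_distance(origin, direction, np.asarray(c)) for c in candidates]
    best = min(dists)
    return [c for c, dist in zip(candidates, dists) if dist <= slack * best]

origin, direction = np.zeros(3), np.array([1.0, 0.0, 0.0])
print(pointed_candidates(origin, direction,
                         [np.array([2.0, 0.1, 0.0]), np.array([2.0, 1.5, 0.0])]))
```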
766
What are the state-of-the-art methods the authors compare their work with?
ISOT dataset: LLVM; Liar dataset: Hybrid CNN and LSTM with attention
Speech recognition and other natural language tasks have long benefited from voting-based algorithms as a method to aggregate outputs from several systems to achieve a higher accuracy than any of the individual systems. Diarization, the task of segmenting an audio stream into speaker-homogeneous and co-indexed regions, has so far not seen the benefit of this strategy because the structure of the task does not lend itself to a simple voting approach. This paper presents DOVER (diarization output voting error reduction), an algorithm for weighted voting among diarization hypotheses, in the spirit of the ROVER algorithm for combining speech recognition hypotheses. We evaluate the algorithm for diarization of meeting recordings with multiple microphones, and find that it consistently reduces diarization error rate over the average of results from individual channels, and often improves on the single best channel chosen by an oracle.
Speaker diarization is the task of segmenting an audio recording in time, indexing each segment by speaker identity. In the standard version of the task BIBREF0, the goal is not to identify known speakers, but to co-index segments that are attributed to the same speaker; in other words, the task implies finding speaker boundaries and grouping segments that belong to the same speaker (including determining the number of distinct speakers). Often diarization is run, in parallel or in sequence, with speech recognition with the goal of achieving speaker-attributed speech-to-text transcription BIBREF1. Ensemble classifiers BIBREF2 are a common way of boosting the performance of machine learning systems, by pooling the outputs of multiple classifiers. In speech processing, they have been used extensively whenever multiple, separately trained speech recognizers are available, and the goal is to achieve better performance with little additional integration or modeling overhead. The most well-known of these methods in speech processing is ROVER (recognition output voting for error reduction) BIBREF3. ROVER aligns the outputs of multiple recognizers word-by-word, and then decides on the most probable word at each position by simple majority or confidence-weighted vote. Confusion network combination (CNC) is a generalization of this idea that makes use of multiple word hypotheses (e.g., in lattice or n-best form) from each recognizer BIBREF4, BIBREF5. Given the pervasive use and effectiveness of ensemble methods, it is perhaps surprising that so far no ensemble algorithm has been used widely for diarization. In this paper we present such an algorithm and apply it to the problem of combining the diarization output obtained from parallel recording channels. This scenario arises naturally when processing speech captured by multiple microphones, even when the raw signals are combined using beamforming (because multiple beams can be formed and later combined for improved accuracy, as described in BIBREF6). In a nod to the ROVER algorithm, we call the algorithm DOVER (diarization output voting for error reduction). As discussed later, while DOVER is not a variant of ROVER, a duality can be observed between the two algorithms. Section SECREF2 presents the DOVER algorithm. Section SECREF3 describes the experiments we ran to test it on two different datasets involving multi-microphone speech capture. Section SECREF4 concludes and points out open problems and future directions.
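A heavily simplified illustration of the voting idea follows, operating on per-frame speaker labels rather than the weighted segment representation the real DOVER algorithm uses; the label-mapping heuristic is an assumption.

```python
# Toy DOVER-style combination: relabel each hypothesis to match an anchor
# by maximal overlap, then take a per-frame majority vote.
from collections import Counter

def map_labels(reference, hypothesis):
    """Greedily relabel `hypothesis` so its speakers align with `reference`."""
    overlap = Counter((h, r) for h, r in zip(hypothesis, reference))
    mapping = {}
    for (h, r), _ in overlap.most_common():
        if h not in mapping and r not in mapping.values():
            mapping[h] = r
    return [mapping.get(h, h) for h in hypothesis]

def dover_vote(hypotheses):
    anchor = hypotheses[0]
    mapped = [anchor] + [map_labels(anchor, h) for h in hypotheses[1:]]
    return [Counter(frame).most_common(1)[0][0] for frame in zip(*mapped)]

h1 = ["A", "A", "B", "B"]
h2 = ["x", "x", "y", "x"]   # same structure, different label names
h3 = ["A", "B", "B", "B"]
print(dover_vote([h1, h2, h3]))   # -> ['A', 'A', 'B', 'B']
```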
770
How much improvement do they get?
Their GTRS approach got an improvement of 3.89% compared to SVM and 27.91% compared to Pawlak.
We examine whether neural natural language processing (NLP) systems reflect historical biases in training data. We define a general benchmark to quantify gender bias in a variety of neural NLP tasks. Our empirical evaluation with state-of-the-art neural coreference resolution and textbook RNN-based language models trained on benchmark datasets finds significant gender bias in how models view occupations. We then mitigate bias with CDA: a generic methodology for corpus augmentation via causal interventions that breaks associations between gendered and gender-neutral words. We empirically show that CDA effectively decreases gender bias while preserving accuracy. We also explore the space of mitigation strategies with CDA, a prior approach to word embedding debiasing (WED), and their compositions. We show that CDA outperforms WED, drastically so when word embeddings are trained. For pre-trained embeddings, the two methods can be effectively composed. We also find that as training proceeds on the original data set with gradient descent the gender bias grows as the loss reduces, indicating that the optimization encourages bias; CDA mitigates this behavior.
Natural language processing (NLP) with neural networks has grown in importance over the last few years. They provide state-of-the-art models for tasks like coreference resolution, language modeling, and machine translation BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . However, since these models are trained on human language texts, a natural question is whether they exhibit bias based on gender or other characteristics, and, if so, how should this bias be mitigated. This is the question that we address in this paper. Prior work provides evidence of bias in autocomplete suggestions BIBREF5 and differences in accuracy of speech recognition based on gender and dialect BIBREF6 on popular online platforms. Word embeddings, initial pre-processors in many NLP tasks, embed words of a natural language into a vector space of limited dimension to use as their semantic representation. BIBREF7 and BIBREF8 observed that popular word embeddings including word2vec BIBREF9 exhibit gender bias mirroring stereotypical gender associations such as the eponymous BIBREF7 "Man is to computer programmer as Woman is to homemaker". Yet the question of how to measure bias in a general way for neural NLP tasks has not been studied. Our first contribution is a general benchmark to quantify gender bias in a variety of neural NLP tasks. Our definition of bias loosely follows the idea of causal testing: matched pairs of individuals (instances) that differ in only a targeted concept (like gender) are evaluated by a model and the difference in outcomes (or scores) is interpreted as the causal influence of the concept in the scrutinized model. The definition is parametric in the scoring function and the target concept. Natural scoring functions exist for a number of neural natural language processing tasks. We instantiate the definition for two important tasks—coreference resolution and language modeling. Coreference resolution is the task of finding words and expressions referring to the same entity in a natural language text. The goal of language modeling is to model the distribution of word sequences. For neural coreference resolution models, we measure the gender coreference score disparity between gender-neutral words and gendered words like the disparity between “doctor” and “he” relative to “doctor” and “she” pictured as edge weights in Figure FIGREF2 . For language models, we measure the disparities of emission log-likelihood of gender-neutral words conditioned on gendered sentence prefixes as is shown in Figure FIGREF2 . Our empirical evaluation with state-of-the-art neural coreference resolution and textbook RNN-based language models BIBREF2 , BIBREF1 , BIBREF10 trained on benchmark datasets finds gender bias in these models . Next we turn our attention to mitigating the bias. BIBREF7 introduced a technique for debiasing word embeddings which has been shown to mitigate unwanted associations in analogy tasks while preserving the embedding's semantic properties. Given their widespread use, a natural question is whether this technique is sufficient to eliminate bias from downstream tasks like coreference resolution and language modeling. As our second contribution, we explore this question empirically. We find that while the technique does reduce bias, the residual bias is considerable. We further discover that debiasing models that make use of embeddings that are co-trained with their other parameters BIBREF1 , BIBREF10 exhibit a significant drop in accuracy. 
Our third contribution is counterfactual data augmentation (CDA): a generic methodology to mitigate bias in neural NLP tasks. For each training instance, the method adds a copy with an intervention on its targeted words, replacing each with its partner, while maintaining the same, non-intervened, ground truth. The method results in a dataset of matched pairs with ground truth independent of the target distinction (see Figure FIGREF2 and Figure FIGREF2 for examples). This encourages learning algorithms to not pick up on the distinction. Our empirical evaluation shows that CDA effectively decreases gender bias while preserving accuracy. We also explore the space of mitigation strategies with CDA, a prior approach to word embedding debiasing (WED), and their compositions. We show that CDA outperforms WED, drastically so when word embeddings are co-trained. For pre-trained embeddings, the two methods can be effectively composed. We also find that as training proceeds on the original data set with gradient descent the gender bias grows as the loss reduces, indicating that the optimization encourages bias; CDA mitigates this behavior. In the body of this paper we present necessary background (Section SECREF2 ), our methods (Sections SECREF3 and SECREF4 ), their evaluation (Section SECREF5 ), and speculate on future research (Section SECREF6 ).
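A minimal sketch of the CDA intervention, assuming a small hand-written list of gendered word pairs; real interventions need part-of-speech disambiguation (e.g. for "her" as possessive versus object) and a fuller lexicon.

```python
# Counterfactual data augmentation: for each training example, add a copy
# with gendered words swapped for their partners, keeping the same label.
PAIRS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "man": "woman", "woman": "man"}   # small illustrative subset

def cda_augment(dataset):
    augmented = list(dataset)
    for tokens, label in dataset:
        flipped = [PAIRS.get(t.lower(), t) for t in tokens]
        if flipped != tokens:              # only add true counterfactuals
            augmented.append((flipped, label))
    return augmented

data = [(["the", "doctor", "said", "he", "was", "busy"], "O")]
print(cda_augment(data))
```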
773
Which languages do they test on?
Answer with content missing: (Applications section) We use Wikipedia articles in five languages (Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English) as well as the Na dataset of Adams et al. (2017). Select: Kinyarwanda, Lao, Pashto, Tok Pisin, and a subset of English
Natural Language Inference (NLI) datasets often contain hypothesis-only biases---artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets.
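A sketch of the premise-swapping negative sampling mentioned in the abstract; this shows only the data-side construction, not the training objective built on top of it, and the function names are assumptions.

```python
# Build "negative" NLI examples whose premise comes from a different pair,
# so a model rewarded for down-weighting them cannot ignore the premise.
import random

def premise_swaps(pairs, seed=0):
    """pairs: list of (premise, hypothesis, label); returns swapped negatives."""
    rng = random.Random(seed)
    negatives = []
    for i, (_, hypothesis, label) in enumerate(pairs):
        j = rng.randrange(len(pairs))
        if j != i:                  # keep only genuinely swapped premises
            negatives.append((pairs[j][0], hypothesis, label))
    return negatives

pairs = [("a woman is talking on the phone", "a woman is sleeping", "contradiction"),
         ("a dog runs in the park", "an animal is outside", "entailment")]
print(premise_swaps(pairs))
```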
Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts BIBREF0 , BIBREF1 . In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone). The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI. However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts BIBREF2 , BIBREF3 , BIBREF4 . For instance, in some datasets, negation words like “not” and “nobody” are often associated with a relationship of contradiction. As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases. Recent studies have tried to create new NLI datasets that do not contain such artifacts, but many approaches to dealing with this issue remain unsatisfactory: constructing new datasets BIBREF5 is costly and may still result in other artifacts; filtering “easy” examples and defining a harder subset is useful for evaluation purposes BIBREF2 , but difficult to do on a large scale that enables training; and compiling adversarial examples BIBREF6 is informative but again limited by scale or diversity. Instead, our goal is to develop methods that overcome these biases as datasets may still contain undesired artifacts despite annotation efforts. Typical NLI models learn to predict an entailment label discriminatively given a premise-hypothesis pair (Figure 1 ), enabling them to learn hypothesis-only biases. Instead, we predict the premise given the hypothesis and the entailment label, which by design cannot be solved using data artifacts. While this objective is intractable, it motivates two approximate training methods for standard NLI classifiers that are more resistant to biases. Our first method uses a hypothesis-only classifier (Figure 1 ) and the second uses negative sampling by swapping premises between premise-hypothesis pairs (Figure 1 ). We evaluate the ability of our methods to generalize better in synthetic and naturalistic settings. First, using a controlled, synthetic dataset, we demonstrate that, unlike the baseline, our methods enable a model to ignore the artifacts and learn to correctly identify the desired relationship between the two texts. Second, we train models on an NLI dataset that is known to be biased and evaluate on other datasets that may have different or no biases. We observe improved results compared to a fully discriminative baseline in 9 out of 12 target datasets, indicating that our methods generate models that are more robust to annotation artifacts. An extensive analysis reveals that our methods are most effective when the target datasets have different biases from the source dataset or no noticeable biases. We also observe that the more we encourage the model to ignore biases, the better it transfers, but this comes at the expense of performance on the source dataset. Finally, we show that our methods can better exploit small amounts of training data in a target dataset, especially when it has different biases from the source data. In this paper, we focus on the transferability of our methods from biased datasets to ones having different or no biases. 
Elsewhere BIBREF7 , we have analyzed the effect of these methods on the learned language representations, suggesting that they may indeed be less biased. However, we caution that complete removal of biases remains difficult and is dependent on the techniques used. The choice of whether to remove bias also depends on the goal; in an in-domain scenario certain biases may be helpful and should not necessarily be removed. In summary, in this paper we make the following contributions:
774
What limitations are mentioned?
deciding publisher partisanship, and the risk of annotator bias because of the short description text provided to annotators
We propose a simple data augmentation protocol aimed at providing a compositional inductive bias in conditional and unconditional sequence models. Under this protocol, synthetic training examples are constructed by taking real training examples and replacing (possibly discontinuous) fragments with other fragments that appear in at least one similar environment. The protocol is model-agnostic and useful for a variety of tasks. Applied to neural sequence-to-sequence models, it reduces relative error rate by up to 87% on problems from the diagnostic SCAN tasks and 16% on a semantic parsing task. Applied to n-gram language modeling, it reduces perplexity by roughly 1% on small datasets in several languages.
This paper proposes a data augmentation protocol for sequence modeling problems. Our approach aims to supply a simple and model-agnostic bias toward compositional reuse of previously observed sequence fragments in novel environments. Consider a language modeling task in which we wish to estimate a probability distribution over a family of sentences with the following finite sample as training data: In language processing problems, we often want models to analyze this dataset compositionally and infer that ( SECREF6 ) is also probable but ( UID7 ) is not: This generalization amounts to an inference about syntactic categories BIBREF0 : because cat and wug are interchangeable in the environment the...sang, they are also likely interchangeable elsewhere. Human learners make judgments like ( SECREF5 ) about novel lexical items BIBREF1 and fragments of novel languages BIBREF2 . But we do not expect such judgments from unstructured sequence models trained to maximize the likelihood of the training data in ( SECREF1 ). A large body of work in natural language processing provides generalization to data like ( SECREF6 ) by adding structure to the learned predictor BIBREF3 , BIBREF4 , BIBREF5 . But on real-world datasets, such models are typically worse than “black-box” function approximators like neural networks even when the black-box models fail to place probability mass on either example in ( SECREF5 ) BIBREF6 . To the extent that we believe ( SECREF6 ) to capture an important inductive bias, we would like to find a way of softly encouraging it without tampering with the structure of predictors that work well at scale. In this paper, we introduce a procedure for generating synthetic training examples by recombining real ones, such that ( SECREF6 ) is assigned nontrivial probability because it already appears in the training dataset. The basic operation underlying our proposal (which we call geca, for “good-enough compositional augmentation”) is depicted in fig:teaser: if two (possibly discontinuous) fragments of training examples appear in some common environment, then any additional environment where the first fragment appears is also a valid environment for the second. geca is crude: as a linguistic principle, it is both limited and imprecise. As discussed in Sections UID17 and SECREF5 , it captures a narrow slice of the many phenomena studied under the heading of “compositionality”, while also making a number of incorrect predictions about real languages. Nevertheless, geca appears to be quite effective across a range of learning problems. In semantic parsing, it gives improvements comparable to the data augmentation approach of BIBREF7 on INLINEFORM0 -calculus expressions, better performance than that approach on a different split of the data designed to test generalization more rigorously, and better performance on a different meaning representation language. Outside of semantic parsing, it solves two representative problems from the scan dataset of BIBREF8 that are synthetic but precise in the notion of compositionality they test. Finally, it helps with some (unconditional) low-resource language modeling problems in a typologically diverse set of languages.
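A toy, single-token version of the geca operation, under the assumption that an "environment" is a sentence with one position masked; the real method handles multi-token and discontinuous fragments.

```python
# If two words occur in the same environment (two training sentences that
# are identical except at one position), treat them as interchangeable and
# substitute one for the other everywhere it appears.
from itertools import combinations

def geca_augment(corpus):
    corpus = [tuple(s) for s in corpus]
    swaps = set()
    for s1, s2 in combinations(corpus, 2):
        if len(s1) == len(s2):
            diff = [k for k in range(len(s1)) if s1[k] != s2[k]]
            if len(diff) == 1:                     # one shared environment
                a, b = s1[diff[0]], s2[diff[0]]
                swaps.update({(a, b), (b, a)})
    new = set()
    for sent in corpus:
        for i, w in enumerate(sent):
            for a, b in swaps:
                if w == a:
                    new.add(sent[:i] + (b,) + sent[i + 1:])
    return [list(s) for s in new - set(corpus)]

corpus = [["the", "cat", "sang"], ["the", "wug", "sang"], ["the", "cat", "danced"]]
print(geca_augment(corpus))   # adds e.g. ["the", "wug", "danced"]
```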
776
What are the baselines?
CNN, LSTM, BERT
In this work, we analyze the performance of general deep reinforcement learning algorithms for a task-oriented language grounding problem, where the language input contains multiple sub-goals and their order of execution is non-linear. ::: We generate a simple instructional language for the GridWorld environment, built around three language elements (order connectors) defining the order of execution: one linear - "comma" - and two non-linear - "but first" and "but before". We apply one of the deep reinforcement learning baselines - Double DQN with frame stacking - and ablate several extensions such as Prioritized Experience Replay and the Gated-Attention architecture. ::: Our results show that the introduction of non-linear order connectors improves the success rate on instructions with a higher number of sub-goals by 2-3 times, but it still does not exceed 20%. Also, we observe that the usage of Gated-Attention provides no competitive advantage over concatenation in this setting. Source code and experiments' results are available at this https URL
Task-oriented language grounding refers to the process of extracting semantically meaningful representations of language by mapping it to visual elements and actions in the environment in order to perform the task specified by the instruction BIBREF0. Recent works in this paradigm focus on a wide spectrum of natural language instructions' semantics: different characteristics of referents (colors, relative positions, relative sizes), multiple tasks, and multiple sub-goals BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. In this work, we are interested in language input with the semantics of multiple sub-goals, focusing on the order of execution, as natural language contains elements that may lead to a non-linear order of execution (e.g. “Take the garbage out, but first wash the dishes“). We refer to this kind of elements as non-linear order connectors in the setting of task-oriented language grounding. In particular, we want to answer: what is the performance of general deep reinforcement learning algorithms with this kind of language input? Can they successfully learn all of the training instructions? Can they generalize to an unseen number of sub-goals? To answer these questions, we generate an instruction language for a modified GridWorld environment, where the agent needs to visit items specified in a given instruction. The language is built around three order connectors: one linear – “comma“ – and two non-linear – “but first“ and “but before“ – producing instructions like “Go to the red, go to the blue, but first go to the green“. In contrast to BIBREF6, where the sub-goals are separated in advance and the order is known, in this work we specifically aim to study whether the agent can learn to determine the order of execution based on the connectors present in the language input. We apply one of the offline deep reinforcement learning baselines – Dueling DQN – and examine the impact of several extensions such as the Gated-Attention architecture BIBREF0 and Prioritized Experience Replay BIBREF7. First, we discover that training on both non-linear and linear order connectors at the same time improves the performance on the latter compared to training using only linear ones. Second, we observe that generalization to an unseen number of sub-goals using general methods of deep reinforcement learning is possible but still very limited for both linear and non-linear connectors. Third, we find that there is no advantage in using gated-attention over simple concatenation in this setting. And fourth, we observe that the usage of prioritized experience replay in this setup may be enough to achieve near-perfect training performance.
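A minimal sketch of a Gated-Attention unit in the spirit of BIBREF0, where a sigmoid vector computed from the instruction gates the channels of the visual feature map; dimensions are illustrative assumptions.

```python
# Gated-Attention: instruction embedding -> sigmoid channel gates ->
# elementwise multiplication with the convolutional feature map.
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    def __init__(self, instr_dim: int, channels: int):
        super().__init__()
        self.to_gate = nn.Linear(instr_dim, channels)

    def forward(self, feat: torch.Tensor, instr: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) visual features; instr: (B, instr_dim)
        g = torch.sigmoid(self.to_gate(instr))        # (B, C) channel gates
        return feat * g.unsqueeze(-1).unsqueeze(-1)   # broadcast over H, W

ga = GatedAttention(instr_dim=32, channels=64)
out = ga(torch.randn(2, 64, 7, 7), torch.randn(2, 32))
```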
777
What semantic features help in detecting whether a piece of text is genuine or generated?
No specific feature is given; the paper only discusses that semantic features are used in practice, and that how to embed such knowledge into a statistical decision theory framework is yet to be discovered.
In this paper, we introduce CAIL2019-SCM, the Chinese AI and Law 2019 Similar Case Matching dataset. CAIL2019-SCM contains 8,964 triplets of cases published by the Supreme People's Court of China. CAIL2019-SCM focuses on detecting similar cases, and the participants are required to check which two cases are more similar within each triplet. There are 711 teams who participated in this year's competition, and the best team has reached a score of 71.88. We have also implemented several baselines to help researchers better understand this task. The dataset and more details can be found at this https URL.
Similar Case Matching (SCM) plays a major role in the legal system, especially in common law legal systems, where the most similar past cases determine the judgment results of new cases. As a result, legal professionals often spend much time finding and judging similar cases to prove fairness in judgment. As automatically finding similar cases can benefit the legal system, we selected SCM as one of the tasks of CAIL2019. The Chinese AI and Law Challenge (CAIL) is a competition on applying artificial intelligence technology to legal tasks. The goal of the competition is to use AI to help the legal system. CAIL was first held in 2018, and the main task of CAIL2018 BIBREF0, BIBREF1 was predicting judgment results from the fact description. The judgment results include the accusation, applicable articles, and the term of penalty. CAIL2019 contains three different tasks: Legal Question-Answering, Legal Case Element Prediction, and Similar Case Matching. We focus on SCM in this paper. More specifically, CAIL2019-SCM contains 8,964 triplets of legal documents. Every legal document is collected from China Judgments Online. In order to ensure the similarity of the cases in one triplet, all selected documents are related to Private Lending. Every document in the triplet contains the fact description. CAIL2019-SCM requires researchers to decide which two cases in a triplet are more similar. By detecting similar cases in triplets, we can apply the resulting algorithm to rank all documents and find the most similar document in a database. There are 247 teams who have participated in CAIL2019-SCM, and the best team has reached a score of $71.88$, which is about 20 points higher than the baseline. The results show that existing methods have made great progress on this task, but there is still much room for improvement. In other words, CAIL2019-SCM can benefit research on legal case matching. Furthermore, there are two main challenges in CAIL2019-SCM: (1) The difference between documents may be small, making it hard to decide which two documents are more similar. Moreover, similarity is defined by legal workers, so we must incorporate legal knowledge into this task rather than calculating similarity at the lexical level. (2) The documents are quite long. Most contain more than 512 characters, making it hard for existing methods to capture document-level information. In the following parts, we give more details about CAIL2019-SCM, including related work on SCM, the task definition, the construction of the dataset, and several experiments on the dataset.
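As a trivial lexical baseline for the triplet decision (far below what the task demands, per the challenges above), one can compare bag-of-words cosine similarities. Whitespace tokenization is an assumption that would need replacing (e.g. with character n-grams) for Chinese text.

```python
# Bag-of-words cosine baseline: which of B, C is more similar to A?
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def more_similar(doc_a: str, doc_b: str, doc_c: str) -> str:
    bow = lambda d: Counter(d.split())
    ab = cosine(bow(doc_a), bow(doc_b))
    ac = cosine(bow(doc_a), bow(doc_c))
    return "B" if ab >= ac else "C"

print(more_similar("loan repaid with interest",
                   "loan repaid late with interest",
                   "contract for house sale"))   # -> B
```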
778
Which models do they try out?
DocQA, SAN, QANet, ASReader, LM, Random Guess
Some consider large-scale language models that can generate long and coherent pieces of text as dangerous, since they may be used in misinformation campaigns. Here we formulate large-scale language model output detection as a hypothesis testing problem to classify text as genuine or generated. We show that error exponents for particular language models are bounded in terms of their perplexity, a standard measure of language generation performance. Under the assumption that human language is stationary and ergodic, the formulation is extended from considering specific language models to considering maximum likelihood language models, among the class of k-order Markov approximations; error probabilities are characterized. Some discussion of incorporating semantic side information is also given.
Building on a long history of language generation models that are based on statistical knowledge that people have BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, large-scale, neural network-based language models (LMs) that write paragraph-length text with the coherence of human writing have emerged BIBREF6, BIBREF7, BIBREF8. Such models have raised concerns about misuse in generating fake news, misleading reviews, and hate speech BIBREF9, BIBREF10, BIBREF8, BIBREF11, BIBREF12. The alarming consequences of such machine-generated misinformation present an urgent need to discern fake content from genuine, as it is becoming more and more difficult for people to do so without cognitive support tools BIBREF13. Several recent studies have used supervised learning to develop classifiers for this task BIBREF8, BIBREF14, BIBREF9, BIBREF15, BIBREF16 and interpreted their properties. Here we take inspiration from our recent work on information-theoretic limits for detecting audiovisual deepfakes generated by GANs BIBREF17 to develop information-theoretic limits for detecting the outputs of language models. In particular, we build on the information-theoretic study of authentication BIBREF18 to use a formal hypothesis testing framework for detecting the outputs of language models. In establishing fundamental limits of detection, we consider two settings. First, we characterize the error exponent for a particular language model in terms of standard performance metrics such as cross-entropy and perplexity. As far as we know, these informational performance metrics had not previously emerged from a formal operational theorem. Second, we consider not just a setting with a specific language model with given performance metrics, but rather consider a universal setting where we take a generic view of language models as empirical maximum likelihood $k$-order Markov approximations of stationary, ergodic random processes. Results on estimation of such random processes are revisited in the context of the error probability, using a conjectured extension of the reverse Pinsker inequality. In closing, we discuss how the semantics of generated text may be a form of side information in detection.
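A toy rendering of the detection problem as a threshold test on perplexity; the bigram model and the threshold are illustrative stand-ins for a large neural LM and a calibrated decision rule, not the paper's construction.

```python
# Score a text's perplexity under a (here: add-alpha smoothed bigram)
# language model and threshold it as a crude two-hypothesis test.
import math
from collections import Counter

def bigram_perplexity(text, train_text, alpha=1.0):
    train, toks = train_text.split(), text.split()
    vocab = set(train) | set(toks)
    uni, bi = Counter(train), Counter(zip(train, train[1:]))
    logp = 0.0
    for w1, w2 in zip(toks, toks[1:]):
        p = (bi[(w1, w2)] + alpha) / (uni[w1] + alpha * len(vocab))
        logp += math.log(p)
    return math.exp(-logp / max(len(toks) - 1, 1))

def looks_generated(text, train_text, threshold):
    # H0: genuine; H1: generated (low perplexity under the candidate LM).
    return bigram_perplexity(text, train_text) < threshold

train = "the cat sat on the mat the cat ran"
print(looks_generated("the cat sat", train, threshold=5.0))
```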
790
What are the competing models?
TEACHER FORCING (TF), SCHEDULED SAMPLING (SS), SEQGAN, RANKGAN, LEAKGAN.
Dependency parsing is one of the important natural language processing tasks that assign syntactic trees to texts. Due to the wider availability of dependency corpora and improved parsing and machine learning techniques, the accuracy of supervised parsing systems has improved significantly. However, due to the nature of supervised learning, those parsing systems rely heavily on manually annotated training corpora: they work reasonably well on in-domain data, but their performance drops significantly when tested on out-of-domain texts. To bridge the performance gap between in-domain and out-of-domain settings, this thesis investigates three semi-supervised techniques for out-of-domain dependency parsing, namely co-training, self-training and dependency language models. Our approaches use easily obtainable unlabelled data to improve out-of-domain parsing accuracy without the need for expensive corpus annotation. Evaluations on several English domains and on multi-lingual data show considerable improvements in parsing accuracy. Overall, this work surveys semi-supervised methods for out-of-domain dependency parsing, extending and comparing a number of important semi-supervised methods in a unified framework. The comparison between those techniques shows that self-training works as well as co-training on out-of-domain parsing, while dependency language models can improve both in- and out-of-domain accuracy.
Semi-Supervised Methods for Out-of-Domain Dependency Parsing. Juntao Yu, School of Computer Science.
791
How is the input triple translated to a slot-filling task?
The relation R(x,y) is mapped onto a question q whose answer is y
Exposure bias describes the phenomenon that a language model trained under the teacher-forcing schema may perform poorly at the inference stage, when its predictions are conditioned on its own previous predictions, unseen in the training corpus. Recently, several generative adversarial network (GAN) and reinforcement learning (RL) methods have been introduced to alleviate this problem. Nonetheless, a common issue in RL and GAN training is the sparsity of reward signals. In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling, to amplify and denoise the reward signal. Our model improves over competing models with regard to BLEU scores and road exam, a new metric we designed to measure robustness against exposure bias in language models.
Likelihood-based language models with deep neural networks have been widely adopted to tackle language tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. By far, one of the most popular training strategies is teacher forcing, which derives from the general maximum likelihood estimation (MLE) principle BIBREF4. Under the teacher-forcing schema, a model is trained to make predictions conditioned on ground-truth inputs. Although this strategy enables effective training of large neural networks, it is susceptible to exposure bias: a model may perform poorly at the inference stage once its self-generated prefix diverges from the previously learned ground-truth data BIBREF5. A common approach to mitigate this problem is to impose supervision upon the model's own exploration. To this end, the existing literature has introduced REINFORCE BIBREF6 and actor-critic (AC) methods BIBREF7 (including language GANs BIBREF8), which offer direct feedback on a model's self-generated sequences, so that the model can later, at the inference stage, deal with previously unseen exploratory paths. However, due to the well-known issue of reward sparseness and the potential noise in the critic's feedback, these methods are reported to risk compromising generation quality, specifically in terms of precision. In this paper, we adopt two simple strategies, multi-range reinforcing and multi-entropy sampling, to overcome reward sparseness during training. With these strategies applied, our model demonstrates a significant improvement over competing models. In addition, we propose road exam as a new metric to reveal a model's robustness against exposure bias.
794
How is module that analyzes behavioral state trained?
pre-trained to identify the presence of behavior from a sequence of words, using the Couples Therapy Corpus
We introduce a model for the linguistic hedges `very' and `quite' within the label semantics framework, and combined with the prototype and conceptual spaces theories of concepts. The proposed model emerges naturally from the representational framework we use and as such, has a clear semantic grounding. We give generalisations of these hedge models and show that they can be composed with themselves and with other functions, going on to examine their behaviour in the limit of composition.
The modelling of natural language relies on the idea that languages are compositional, i.e. that the meaning of a sentence is a function of the meanings of the words in the sentence, as proposed by BIBREF0 . Whether or not this principle tells the whole story, it is certainly important as we undoubtedly manage to create and understand novel combinations of words. Fuzzy set theory has long been considered a useful framework for the modelling of natural language expressions, as it provides a functional calculus for concept combination BIBREF1 , BIBREF2 . A simple example of compositionality is hedged concepts. Hedges are words such as `very', `quite', `more or less', `extremely'. They are usually modelled as transforming the membership function of a base concept to either narrow or broaden the extent of application of that concept. So, given a concept `short', the term `very short' applies to fewer objects than `short', and `quite short' to more. Modelling a hedge as a transformation of a concept allows us to determine membership of an object in the hedged concept as a function of its membership in the base concept, rather than building the hedged concept from scratch BIBREF3 . Linguistic hedges have been widely applied, including in fuzzy classifiers BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 and database queries BIBREF8 , BIBREF9 . Using linguistic hedges in these applications allows increased accuracy in rules or queries whilst maintaining human interpretability of results BIBREF10 , BIBREF11 . This motivates the need for a semantically grounded account of linguistic hedges: if hedged results are more interpretable then the hedges used must themselves be meaningful. In the following we provide an account of linguistic hedges that is both functional, and semantically grounded. In its most basic formulation, the operation requires no additional parameters, although we also show that the formulae can be generalised if necessary. Our account of linguistic hedges uses the label semantics framework to model concepts BIBREF12 . This is a random set approach which quantifies an agent's subjective uncertainty about the extent of application of a concept. We refer to this uncertainty as semantic uncertainty BIBREF13 to emphasise that it concerns the definition of concepts and categories, in contrast to stochastic uncertainty which concerns the state of the world. In BIBREF13 the label semantics approach is combined with conceptual spaces BIBREF14 and prototype theory BIBREF15 , to give a formalisation of concepts as based on a prototype and a threshold, located in a conceptual space. This approach is discussed in detail in section "Conceptual Spaces" . An outline of the paper is then as follows: section "Approaches to linguistic hedges" discusses different approaches to linguistic hedges from the literature, and compares these with our model. Subsequently, in section "Label semantics approach to linguistic hedges" , we give formulations of the hedges `very' and `quite'. These are formed by considering the dependence of the threshold of a hedged concept on the threshold of the original concept. We give a basic model and two generalisations, show that the models can be composed and investigate the behaviour in the limit of composition. Section "Discussion" compares our results to those in the literature and proposes further lines of research.
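For contrast with the label-semantics model developed in the paper, the classic fuzzy-set treatment mentioned above models hedges as simple transformations of a membership function; a minimal Python sketch of the Zadeh-style powering operators (very = squaring, quite = square root) follows. The toy membership function for `short' is an assumption for illustration.

```python
# The classic fuzzy-set treatment of hedges (Zadeh-style powering), shown for
# contrast; the label-semantics model in the paper derives different operators.
def very(mu):
    """'very' concentrates a membership function: fewer objects qualify."""
    return lambda x: mu(x) ** 2

def quite(mu):
    """'quite' (more or less) dilates it: more objects qualify."""
    return lambda x: mu(x) ** 0.5

# A toy membership function for 'short' over heights in metres.
def short(height_m: float) -> float:
    if height_m <= 1.5:
        return 1.0
    if height_m >= 1.9:
        return 0.0
    return (1.9 - height_m) / 0.4

h = 1.7
print(short(h), very(short)(h), quite(short)(h))  # 0.5, 0.25, ~0.707
```

Note that composing `very' with itself in this model simply raises the exponent (very(very(short)) yields the membership function to the fourth power), which is the kind of limit-of-composition behaviour the paper examines for its own operators.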
795
Can the model add new relations to the knowledge graph, or just new entities?
The model does not add new relations to the knowledge graph.
Most current language modeling techniques only exploit co-occurrence, semantic and syntactic information from the sequence of words. However, a range of information such as the state of the speaker and dynamics of the interaction might be useful. In this work we derive motivation from psycholinguistics and propose the addition of behavioral information into the context of language modeling. We propose the augmentation of language models with an additional module which analyzes the behavioral state of the current context. This behavioral information is used to gate the outputs of the language model before the final word prediction output. We show that the addition of behavioral context in language models achieves lower perplexities on behavior-rich datasets. We also confirm the validity of the proposed models on a variety of model architectures and improve on previous state-of-the-art models with generic domain Penn Treebank Corpus.
Recurrent neural network language models (RNNLMs) can theoretically model the word history over an arbitrarily long span of time and have thus been shown to perform better than traditional n-gram models BIBREF0. Recent prior work has continuously improved the performance of RNNLMs through hyper-parameter tuning, training optimization methods, and the development of new network architectures BIBREF1, BIBREF2, BIBREF3, BIBREF4. On the other hand, many works have proposed the use of domain knowledge and additional information, such as topics or parts of speech, to improve language models. While syntactic tendencies can be inferred from a few preceding words, semantic coherence may require longer context and a high-level understanding of natural language, both of which are difficult to learn through purely statistical methods. This problem can be overcome by exploiting external information to capture long-range semantic dependencies. One common way of achieving this is by incorporating part-of-speech (POS) tags into the RNNLM as an additional feature to predict the next word BIBREF5, BIBREF6. Other useful linguistic features include conversation type, which was shown to improve language modeling when combined with POS tags BIBREF7. Further improvements were achieved through the addition of socio-situational setting information and other linguistic features such as lemmas and topic BIBREF8. The use of topic information to provide semantic context to language models has also been studied extensively BIBREF9, BIBREF10, BIBREF11, BIBREF12. Topic models are useful for extracting high-level semantic structure via latent topics, which can aid in better modeling of longer documents. Recently, however, empirical studies involving the investigation of different network architectures, hyper-parameter tuning, and optimization techniques have yielded better performance than the addition of contextual information BIBREF13, BIBREF14. In contrast to the majority of work that focuses on improving the neural network aspects of the RNNLM, we introduce psycholinguistic signals along with linguistic units to improve the fundamental language model. In this work, we utilize behavioral information embedded in the language to aid the language model. We hypothesize that different psychological behavior states incite differences in the use of language BIBREF15, BIBREF16, and thus modeling these tendencies can provide useful information for statistical language modeling. Although not directly related, behavioral information may also correlate with conversation type and topic. We therefore propose the use of psycholinguistic behavior signals as a gating mechanism to augment typical language models. Effectively inferring behaviors from sources like spoken text and written articles can lead to personification of the language models in the speaker-writer arena.
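As a rough illustration of the proposed gating, the following PyTorch sketch runs a behavior module alongside the language model and gates the LM's hidden states with the inferred behavioral state before the word prediction layer. All module names, dimensions, and the exact gating form are assumptions based on the description above, not the authors' implementation.

```python
# A minimal sketch of gating a language model's output with an inferred
# behavioral state; module names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class BehaviorGatedLM(nn.Module):
    def __init__(self, vocab_size=10000, emb=256, hidden=512, n_behaviors=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lm_rnn = nn.LSTM(emb, hidden, batch_first=True)
        # Behavior module: maps the word sequence to a behavioral state.
        self.beh_rnn = nn.LSTM(emb, hidden, batch_first=True)
        self.beh_head = nn.Linear(hidden, n_behaviors)
        # Gate derived from the behavioral state, applied to LM features.
        self.gate = nn.Linear(n_behaviors, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):                               # tokens: (B, T)
        e = self.embed(tokens)
        h_lm, _ = self.lm_rnn(e)                             # (B, T, H)
        h_beh, _ = self.beh_rnn(e)
        behavior = torch.sigmoid(self.beh_head(h_beh))       # (B, T, K)
        gated = h_lm * torch.sigmoid(self.gate(behavior))    # element-wise gate
        return self.out(gated)                               # next-word logits

logits = BehaviorGatedLM()(torch.randint(0, 10000, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 10000])
```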
800
What is the training and test data used?
Tweets related to a Bank of America DDoS attack were used as training data. The test datasets contain tweets related to attacks on Bank of America, PNC and Wells Fargo.
Most existing works on dialog systems only consider conversation content while neglecting the personality of the user the bot is interacting with, which begets several unsolved issues. In this paper, we present a personalized end-to-end model in an attempt to leverage personalization in goal-oriented dialogs. We first introduce a Profile Model which encodes user profiles into distributed embeddings and refers to conversation history from other similar users. Then a Preference Model captures user preferences over knowledge base entities to handle the ambiguity in user requests. The two models are combined into the Personalized MemN2N. Experiments show that the proposed model achieves qualitative performance improvements over state-of-the-art methods. As for human evaluation, it also outperforms other approaches in terms of task completion rate and user satisfaction.
There has been growing research interest in training dialog systems with end-to-end models BIBREF0 , BIBREF1 , BIBREF2 in recent years. These models are directly trained on past dialogs, without assumptions on the domain or dialog state structure BIBREF3 . One of their limitations is that they select responses only according to the content of the conversation and are thus incapable of adapting to users with different personalities. Specifically, common issues with such content-based models include: (i) the inability to adjust language style flexibly BIBREF4 ; (ii) the lack of a dynamic conversation policy based on the interlocutor's profile BIBREF5 ; and (iii) the incapability of handling ambiguities in user requests. Figure FIGREF1 illustrates these problems with an example. The conversation happens in a restaurant reservation scenario. First, the responses from the content-based model are plain and boring, and unable to adjust appellations and language styles the way the personalized model does. Second, in the recommendation phase, the content-based model can only provide candidates in a random order, while a personalized model can change its recommendation policy dynamically and, in this case, match the user's dietary preferences. Third, the word “contact” can be interpreted as “phone” or “social media” contact information in the knowledge base. Instead of choosing one randomly, the personalized model handles this ambiguity based on the learned fact that young people prefer social media accounts while the elderly prefer phone numbers. Psychologists have shown that during a dialog humans tend to adapt to their interlocutor to facilitate understanding, which enhances conversational efficiency BIBREF6 , BIBREF7 , BIBREF8 . To improve agent intelligence, we may polish our model to learn such human behaviors in conversations. A big challenge in building personalized dialog systems is how to utilize the user profile and generate personalized responses correspondingly. To overcome this, existing works BIBREF9 , BIBREF4 often conduct extra procedures to incorporate personalization in training, such as intermediate supervision and pre-training of user profiles, which are complex and time-consuming. In contrast, our work is fully end-to-end. In this paper, we propose a Profile Model and a Preference Model to leverage user profiles and preferences. The Profile Model learns user personalities with a distributed profile representation, and uses a global memory to store conversation context from other users with similar profiles. In this way, it can choose a proper language style and change its recommendation policy based on the user profile. To address the problem of ambiguity, the Preference Model learns user preferences among ambiguous candidates by building a connection between the user profile and the knowledge base. Since these two models are both under the MemN2N framework and contribute to personalization in different aspects, we combine them into the Personalized MemN2N. Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems. The Personalized MemN2N outperforms current state-of-the-art methods with over 7% improvement in terms of per-response accuracy. A test with real human users also illustrates that the proposed model leads to better outcomes, including a higher task completion rate and user satisfaction.
802
What writing styles are present in the corpus?
current news, historical news, free time, sports, juridical news pieces, personal adverts, editorials.
In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.
Over the last couple of years, the MeToo movement has facilitated several discussions about sexual abuse. Social media, especially Twitter, was one of the leading platforms where people shared their experiences of sexual harassment, expressed their opinions, and also offered support to victims. A large portion of these tweets was tagged with the dedicated hashtag #MeToo, and it was one of the main trending topics in many countries. The movement was viral on social media, and the hashtag was used over 19 million times in a year. The MeToo movement has been described as an essential development against the culture of sexual misconduct by many feminists, activists, and politicians. It is one of the primary examples of successful digital activism facilitated by social media platforms. The movement generated many conversations on stigmatized issues like sexual abuse and violence, which were not often discussed before because of the associated fear of shame or retaliation. This creates an opportunity for researchers to study how people express their opinions on a sensitive topic in an informal setting like social media. However, this is only possible if there are annotated datasets that explore different linguistic facets of such social media narratives. Twitter served as a platform for many different types of narratives during the MeToo movement BIBREF0. It was used for sharing personal stories of abuse, offering support and resources to victims, and expressing support for or opposition to the movement BIBREF1. It was also used to accuse individuals of sexual misconduct, refute such claims, and sometimes voice hateful or sarcastic comments about the campaign or individuals. In some cases, people also misused the hashtag to share irrelevant or uninformative content. To capture all these complex narratives, we decided to curate a dataset of tweets related to the MeToo movement that is annotated for various linguistic aspects. In this paper, we present a new dataset (MeTooMA) that contains 9,973 tweets associated with the MeToo movement, annotated for relevance, stance, hate speech, sarcasm, and dialogue acts. We introduce and annotate three new dialogue acts that are specific to the movement: Allegation, Refutation, and Justification. The dataset also contains geographical information about the tweets: from which country each was posted. We expect this dataset to be of great interest and use to both computational and socio-linguists. For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests on social media across multiple countries.
806
What meta-information is being transferred?
high-order representation of a relation, loss gradient of relation meta
The ever-growing datasets published as Linked Open Data mainly contain encyclopedic information. However, there is a lack of quality structured and semantically annotated datasets extracted from unstructured real-time sources. In this paper, we present principles for developing a knowledge graph of interlinked events, using the case study of news headlines published on Twitter, which is a real-time and eventful source of fresh information. We present the essential pipeline of required tasks, ranging from choosing a background data model, through event annotation (i.e., event recognition and classification) and entity annotation, to eventually interlinking events. The state of the art is limited to domain-specific scenarios for recognizing and classifying events, whereas this paper serves as a domain-agnostic road-map for developing a knowledge graph of interlinked events.
Several successful efforts have led to publishing huge RDF (Resource Description Framework) datasets on Linked Open Data (LOD), such as DBpedia BIBREF0 and LinkedGeoData BIBREF1 . However, these sources are limited to either structured or semi-structured data. Meanwhile, a significant portion of Web content consists of textual data from social network feeds, blogs, news, logs, etc. Although the Natural Language Processing (NLP) community has developed approaches to extract essential information from plain text (e.g., BIBREF2 , BIBREF3 , BIBREF4 ), there is no convenient support for knowledge graph construction. Further, several lexical-analysis-based approaches extract only a limited form of metadata that is inadequate for supporting applications such as question answering systems. For example, the query “Give me the list of reported events by BBC and CNN about the number of killed people in Yemen in the last four days”, about a recent event (containing restrictions such as location and time), poses several challenges to the current state of Linked Data and relevant information extraction techniques. The query seeks “fresh” information (e.g., from the last four days), whereas the current version of Linked Data is encyclopedic and historical and does not contain the appropriate information present in a temporally annotated data stream. Further, the query specifies provenance (e.g., published by BBC and CNN) that might not always be available on Linked Data. Crucially, the example query asks about a specific type of event (i.e., reports of war causing people to be killed) with multiple arguments (e.g., in this case, the location argument occurred in Yemen). In spite of recent progress BIBREF5 , BIBREF6 , BIBREF7 , there is still no standardized mechanism for (i) selecting a background data model, (ii) recognizing and classifying specific event types, (iii) identifying and labeling associated arguments (i.e., entities as well as relations), (iv) interlinking events, and (v) representing events. In fact, most state-of-the-art solutions are ad hoc and limited. In this paper, we provide a systematic pipeline for developing a knowledge graph of interlinked events. As a proof of concept, we show a case study of headline news on Twitter. The main contributions of this paper include: The remainder of this paper is organized as follows. Section SECREF2 is dedicated to notation and the problem statement. Section SECREF3 outlines the required steps for developing a knowledge graph of interlinked events. Section SECREF4 frames our contribution in the context of related work. Section SECREF5 concludes the paper with suggestions for future work.
809
How much is performance hurt when using too small amount of layers in encoder?
Compared to the results from reducing the number of layers in the decoder, the BLEU score was 69.93: lower by less than 1% on test2016 and by 0.2% on test2017. In terms of TER, the score was higher by 0.7 on test2016 and by 0.1 on test2017.
While many methods for learning vector space embeddings have been proposed in the field of Natural Language Processing, these methods typically do not distinguish between categories and individuals. Intuitively, if individuals are represented as vectors, we can think of categories as (soft) regions in the embedding space. Unfortunately, meaningful regions can be difficult to estimate, especially since we often have few examples of individuals that belong to a given category. To address this issue, we rely on the fact that different categories are often highly interdependent. In particular, categories often have conceptual neighbors, which are disjoint from but closely related to the given category (e.g. fruit and vegetable). Our hypothesis is that more accurate category representations can be learned by relying on the assumption that the regions representing such conceptual neighbors should be adjacent in the embedding space. We propose a simple method for identifying conceptual neighbors and then show that incorporating these conceptual neighbors indeed leads to more accurate region based representations.
Vector space embeddings are commonly used to represent entities in fields such as machine learning (ML) BIBREF0, natural language processing (NLP) BIBREF1, information retrieval (IR) BIBREF2 and cognitive science BIBREF3. An important point, however, is that such representations usually represent both individuals and categories as vectors BIBREF4, BIBREF5, BIBREF6. Note that in this paper, we use the term category to denote natural groupings of individuals, as it is used in cognitive science, with individuals referring to the objects from the considered domain of discourse. For example, the individuals carrot and cucumber belong to the vegetable category. We use the term entities as an umbrella term covering both individuals and categories. Given that a category corresponds to a set of individuals (i.e. its instances), modelling them as (possibly imprecise) regions in the embedding space seems more natural than using vectors. In fact, it has been shown that the vector representations of individuals that belong to the same category are indeed often clustered together in learned vector space embeddings BIBREF7, BIBREF8. The view of categories being regions is also common in cognitive science BIBREF3. However, learning region representations of categories is a challenging problem, because we typically only have a handful of examples of individuals that belong to a given category. One common assumption is that natural categories can be modelled using convex regions BIBREF3, which simplifies the estimation problem. For instance, based on this assumption, BIBREF9 modelled categories using Gaussian distributions and showed that these distributions can be used for knowledge base completion. Unfortunately, this strategy still requires a relatively high number of training examples to be successful. However, when learning categories, humans do not only rely on examples. For instance, there is evidence that when learning the meaning of nouns, children rely on the default assumption that these nouns denote mutually exclusive categories BIBREF10. In this paper, we will in particular take advantage of the fact that many natural categories are organized into so-called contrast sets BIBREF11. These are sets of closely related categories which exhaustively cover some sub-domain, and which are assumed to be mutually exclusive; e.g. the set of all common color names, the set $\lbrace \text{fruit},\text{vegetable}\rbrace $ or the set $\lbrace \text{NLP}, \text{IR}, \text{ML}\rbrace $. Categories from the same contrast set often compete for coverage. For instance, we can think of the NLP domain as consisting of research topics that involve processing textual information which are not covered by the IR and ML domains. Categories which compete for coverage in this way are known as conceptual neighbors BIBREF12; e.g. NLP and IR, red and orange, fruit and vegetable. Note that the exact boundary between two conceptual neighbors may be vague (e.g. tomato can be classified as fruit or as vegetable). In this paper, we propose a method for learning region representations of categories which takes advantage of conceptual neighborhood, especially in scenarios where the number of available training examples is small. The main idea is illustrated in Figure FIGREF2, which depicts a situation where we are given some examples of a target category $C$ as well as some related categories $N_1,N_2,N_3,N_4$. 
If we have to estimate a region from the examples of $C$ alone, the small elliptical region shown in red would be a reasonable choice. More generally, a standard approach would be to estimate a Gaussian distribution from the given examples. However, vector space embeddings typically have hundreds of dimensions, while the number of known examples of the target category is often far lower (e.g. 2 or 3). In such settings we will almost inevitably underestimate the coverage of the category. However, in the example from Figure FIGREF2, if we take into account the knowledge that $N_1,N_2,N_3,N_4$ are conceptual neighbors of $C$, the much larger, shaded region becomes a more natural choice for representing $C$. Indeed, the fact that e.g. $C$ and $N_1$ are conceptual neighbors suggests that any point in between the examples of these categories needs to be contained either in the region representing $C$ or the region representing $N_1$. In the spirit of prototype approaches to categorization BIBREF13, without any further information it makes sense to assume that their boundary is more or less half-way in between the known examples. The contribution of this paper is two-fold. First, we propose a method for identifying conceptual neighbors from text corpora. We essentially treat this problem as a standard text classification problem, by relying on categories with large numbers of training examples to generate a suitable distant supervision signal. Second, we show that the predicted conceptual neighbors can effectively be used to learn better category representations.
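The geometric intuition of Figure FIGREF2 can be sketched in a few lines: with only the examples of $C$, a fitted Gaussian reaches barely beyond them, whereas a conceptual neighbor licenses extending the region roughly half-way towards the neighbor's examples. The estimator below is an illustrative assumption, not the paper's exact method.

```python
# A sketch of the geometric intuition: without neighbors, fit a Gaussian to
# the few examples of C; with a conceptual neighbor N, push the boundary out
# to roughly half-way between the two sets of examples.
import numpy as np

def gaussian_region(examples: np.ndarray):
    """Mean and (diagonal) covariance from the examples of a category."""
    mu = examples.mean(axis=0)
    var = examples.var(axis=0) + 1e-6       # small floor for very few examples
    return mu, var

def radius_with_neighbor(c_ex: np.ndarray, n_ex: np.ndarray) -> float:
    """Extend C's region to half the distance to the nearest neighbor
    example, rather than only as far as C's own examples reach."""
    mu_c, _ = gaussian_region(c_ex)
    own = max(np.linalg.norm(x - mu_c) for x in c_ex)
    to_neighbor = min(np.linalg.norm(x - mu_c) for x in n_ex)
    return max(own, to_neighbor / 2.0)

C = np.array([[0.0, 0.0], [0.2, 0.1]])      # two examples of C
N = np.array([[2.0, 0.0], [2.1, 0.3]])      # examples of a conceptual neighbor
print(radius_with_neighbor(C, N))           # ~0.95, far beyond the ~0.11
                                            # reach of C's own examples
```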
810
What neural machine translation models can learn in terms of transfer learning?
Multilingual Neural Machine Translation Models
In automatic post-editing (APE) it makes sense to condition post-editing (pe) decisions on both the source (src) and the machine-translated text (mt) as input. This has led to multi-source encoder based APE approaches. A research challenge now is the search for architectures that best support the capture, preparation and provision of src and mt information and its integration with pe decisions. In this paper we present a new multi-source APE model, called transference. Unlike previous approaches, it (i) uses a transformer encoder block for src, (ii) follows it with a decoder block, but without masking for self-attention on mt, which effectively acts as a second encoder combining src -> mt, and (iii) feeds this representation into a final decoder block generating pe. Our model outperforms the state-of-the-art by 1 BLEU point on the WMT 2016, 2017, and 2018 English--German APE shared tasks (PBSMT and NMT). We further investigate the importance of our newly introduced second encoder and find that too small a number of layers does hurt performance, while reducing the number of decoder layers does not matter much.
The performance of state-of-the-art MT systems is not perfect, thus, human interventions are still required to correct machine translated texts into publishable quality translations BIBREF0. Automatic post-editing (APE) is a method that aims to automatically correct errors made by MT systems before performing actual human post-editing (PE) BIBREF1, thereby reducing the translators' workload and increasing productivity BIBREF2. APE systems trained on human PE data serve as MT post-processing modules to improve the overall performance. APE can therefore be viewed as a 2nd-stage MT system, translating predictable error patterns in MT output to their corresponding corrections. APE training data minimally involves MT output ($mt$) and the human post-edited ($pe$) version of $mt$, but additionally using the source ($src$) has been shown to provide further benefits BIBREF3, BIBREF4, BIBREF5. To provide awareness of errors in $mt$ originating from $src$, attention mechanisms BIBREF6 allow modeling of non-local dependencies in the input or output sequences, and importantly also global dependencies between them (in our case $src$, $mt$ and $pe$). The transformer architecture BIBREF7 is built solely upon such attention mechanisms completely replacing recurrence and convolutions. The transformer uses positional encoding to encode the input and output sequences, and computes both self- and cross-attention through so-called multi-head attentions, which are facilitated by parallelization. Such multi-head attention allows to jointly attend to information at different positions from different representation subspaces, e.g. utilizing and combining information from $src$, $mt$, and $pe$. In this paper, we present a multi-source neural APE architecture called transference. Our model contains a source encoder which encodes $src$ information, a second encoder ($enc_{src \rightarrow mt}$) which takes the encoded representation from the source encoder ($enc_{src}$), combines this with the self-attention-based encoding of $mt$ ($enc_{mt}$), and prepares a representation for the decoder ($dec_{pe}$) via cross-attention. Our second encoder ($enc_{src \rightarrow mt}$) can also be viewed as a standard transformer decoding block, however, without masking, which acts as an encoder. We thus recombine the different blocks of the transformer architecture and repurpose them for the APE task in a simple yet effective way. The suggested architecture is inspired by the two-step approach professional translators tend to use during post-editing: first, the source segment is compared to the corresponding translation suggestion (similar to what our $enc_{src \rightarrow mt}$ is doing), then corrections to the MT output are applied based on the encountered errors (in the same way that our $dec_{pe}$ uses the encoded representation of $enc_{src \rightarrow mt}$ to produce the final translation). The paper makes the following contributions: (i) we propose a new multi-encoder model for APE that consists only of standard transformer encoding and decoding blocks, (ii) by using a mix of self- and cross-attention we provide a representation of both $src$ and $mt$ for the decoder, allowing it to better capture errors in $mt$ originating from $src$; this advances the state-of-the-art in APE in terms of BLEU and TER, and (iii), we analyze the effect of varying the number of encoder and decoder layers BIBREF8, indicating that the encoders contribute more than decoders in transformer-based neural APE.
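The wiring of the transference model can be sketched with stock PyTorch blocks: a standard encoder for $src$, a decoder block applied to $mt$ without a causal self-attention mask (so it acts as the second encoder $enc_{src \rightarrow mt}$), and a masked decoder producing $pe$. Hyper-parameters, and the omission of positional encoding, are simplifying assumptions; this shows the layout, not the authors' exact code.

```python
# A sketch of the transference layout using stock PyTorch transformer blocks.
import torch
import torch.nn as nn

class Transference(nn.Module):
    def __init__(self, vocab=32000, d=512, heads=8, layers=6):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)   # positional encoding omitted here
        enc_layer = nn.TransformerEncoderLayer(d, heads, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d, heads, batch_first=True)
        self.enc_src = nn.TransformerEncoder(enc_layer, layers)
        # Decoder block used WITHOUT a target mask: full self-attention on mt
        # plus cross-attention to src, i.e. the enc_{src->mt} second encoder.
        self.enc_src_mt = nn.TransformerDecoder(dec_layer, layers)
        self.dec_pe = nn.TransformerDecoder(dec_layer, layers)
        self.out = nn.Linear(d, vocab)

    def forward(self, src, mt, pe):
        h_src = self.enc_src(self.emb(src))
        h_joint = self.enc_src_mt(self.emb(mt), memory=h_src)   # no tgt_mask
        causal = nn.Transformer.generate_square_subsequent_mask(pe.size(1))
        h_pe = self.dec_pe(self.emb(pe), memory=h_joint, tgt_mask=causal)
        return self.out(h_pe)                                   # pe logits

m = Transference()
logits = m(torch.randint(0, 32000, (2, 9)),   # src
           torch.randint(0, 32000, (2, 9)),   # mt
           torch.randint(0, 32000, (2, 9)))   # pe (shifted targets)
print(logits.shape)  # torch.Size([2, 9, 32000])
```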
819
How does the semi-automatic construction process work?
Automatic transcription of 5000 tokens through sequential neural models trained on the annotated part of the corpus
The complex organization of syntax in hierarchical structures is one of the core design features of human language. Duality of patterning refers for instance to the organization of the meaningful elements in a language at two distinct levels: a combinatorial level where meaningless forms are combined into meaningful forms and a compositional level where meaningful forms are composed into larger lexical units. The question remains wide open regarding how such a structure could have emerged. Furthermore a clear mathematical framework to quantify this phenomenon is still lacking. The aim of this paper is that of addressing these two aspects in a self-consistent way. First, we introduce suitable measures to quantify the level of combinatoriality and compositionality in a language, and present a framework to estimate these observables in human natural languages. Second, we show that the theoretical predictions of a multi-agents modeling scheme, namely the Blending Game, are in surprisingly good agreement with empirical data. In the Blending Game a population of individuals plays language games aiming at success in communication. It is remarkable that the two sides of duality of patterning emerge simultaneously as a consequence of a pure cultural dynamics in a simulated environment that contains meaningful relations, provided a simple constraint on message transmission fidelity is also considered.
In a seminal paper, Charles Hockett BIBREF0 identified duality of patterning as one of the core design features of human language. A language exhibits duality of patterning when it is organized at two distinct levels. At a first level, meaningless forms (typically referred to as phonemes) are combined into meaningful units (henceforth this property will be referred to as combinatoriality). For example, the English forms /k/, /a/, and /t/ are combined in different ways to obtain the three words /kat/, /akt/, and /tak/ (respectively written 'cat', 'act' and 'tack'). Because the individual forms in them are meaningless, these words have no relation in meaning in spite of being made of the same forms. This is a very important property, thanks to which all of the many words of the English lexicon can be obtained by relatively simple combinations of about forty phonemes. If phonemes had individual meaning, this degree of compactness would not be possible. At a second level, meaningful units (typically referred to as morphemes) are composed into larger units, the meaning of which is related to the individual meaning of the composing units (henceforth this property will be referred to as compositionality). For example, the meaning of the word 'boyfriend' is related to the meaning of the words 'boy' and 'friend' which composed it. The compositional level includes syntax as well. For example, the meaning of the sentence 'cats eat fishes' is related to the meaning of the words 'cats', 'eat', and 'fishes'. In this paper, for the sake of simplicity, we focus exclusively on the lexicon level. This has to be considered as a first step towards the comprehension of the emergence of complex structures in languages.
823
What does "explicitly leverages their probabilistic correlation to guide the training process of both models" mean?
The framework jointly learns parametrized QA and QG models subject to the constraint in equation 2. In more detail, they minimize QA and QG loss functions, with a third dual loss for regularization.
This paper is concerned with open-domain question answering (i.e., OpenQA). Recently, some works have viewed this problem as a reading comprehension (RC) task and directly applied successful RC models to it. However, the performance of such models is not as good as on the RC task. In our opinion, the RC perspective ignores three characteristics of the OpenQA task: 1) many paragraphs without the answer span are included in the data collection; 2) multiple answer spans may exist within one given paragraph; 3) the end position of an answer span is dependent on the start position. In this paper, we first propose a new probabilistic formulation of OpenQA, based on a three-level hierarchical structure, i.e., the question level, the paragraph level and the answer span level. Then a Hierarchical Answer Spans Model (HAS-QA) is designed to capture each probability. HAS-QA has the ability to tackle the above three problems, and experiments on public OpenQA datasets show that it significantly outperforms traditional RC baselines and recent OpenQA baselines.
Open-domain question answering (OpenQA) aims to seek answers for a broad range of questions from large knowledge sources, e.g., structured knowledge bases BIBREF0 , BIBREF1 and unstructured documents from search engines BIBREF2 . In this paper we focus on the OpenQA task with unstructured knowledge sources retrieved by a search engine. Inspired by the reading comprehension (RC) task flourishing in the area of natural language processing BIBREF3 , BIBREF4 , BIBREF5 , some recent works have viewed OpenQA as an RC task and directly applied existing RC models to it BIBREF6 , BIBREF7 , BIBREF3 , BIBREF8 . However, these RC models do not fit the OpenQA task well. Firstly, they directly omit the paragraphs without the answer string. The RC task assumes that the given paragraph contains the answer string (Figure 1 top); however, this is not valid for the OpenQA task (Figure 1 bottom). This is because the paragraphs that provide answers for an OpenQA question are collected from a search engine, where each retrieved paragraph is merely relevant to the question; the collection therefore contains many paragraphs without the answer string, for instance Paragraph2 in Figure 1. When applying RC models to the OpenQA task, we have to omit these paragraphs in the training phase. However, during the inference phase, when the model meets a paragraph without the answer string, it will still pick out a text span as an answer span with high confidence, since the RC model has no evidence to justify whether a paragraph contains the answer string. Secondly, they only consider the first answer span in the paragraph, omitting the remaining rich multiple answer spans. In the RC task, the answer and its positions in the paragraph are provided by the annotator in the training data, so RC models only need to consider the unique answer span, e.g., in SQuAD BIBREF9 . However, the OpenQA task only provides the answer string as the ground truth; therefore multiple answer spans are detected in the given paragraph, which cannot be handled by traditional RC models. Take Figure 1 as an example: all text spans containing `fat' are treated as answer spans, so we detect two answer spans in Paragraph1. Thirdly, they assume that the start position and end position of an answer span are independent. However, the end position is evidently related to the start position, especially when there are multiple answer spans in a paragraph. Therefore, such an independence assumption may introduce problems: for example, the detected end position may correspond to another answer span, rather than the answer span located by the start position. In Figure 1 Paragraph1, `fat in their $\cdots $ insulating effect fat' has high confidence to be an answer span under the independence assumption. In this paper, we propose a Hierarchical Answer Span Model, named HAS-QA, based on a new three-level probabilistic formulation of the OpenQA task, as shown in Figure 2 . At the question level, the conditional probability of the answer string given a question and a collection of paragraphs, named the answer probability, is defined as the product of the paragraph probability and the conditional answer probability, based on the law of total probability. At the paragraph level, the paragraph probability is defined as the degree to which a paragraph can answer the question. This probability is used to measure the quality of a paragraph and is targeted at the first problem mentioned above, i.e., identifying useless paragraphs.
For its calculation, we first apply a bidirectional GRU and an attention mechanism on the question-aware context embedding to obtain a score, and then normalize the scores across the multiple paragraphs. In the training phase, we adopt a negative sampling strategy for optimization. The conditional answer probability is the conditional probability that a text string is the answer given the paragraph. Considering multiple answer spans in a paragraph, the conditional answer probability can be further represented as the aggregation of several span probabilities, defined later. In this paper, four types of aggregation functions, i.e., HEAD, RAND, MAX and SUM, are used. At the span level, the span probability represents the probability that a text span in a paragraph is the answer span. Similarly to previous work BIBREF3 , the span probability can be computed as the product of two location probabilities, i.e., the location start probability and the location end probability. A conditional pointer network is then proposed to model the probabilistic dependence between the start and end positions, by making the generation of the end position depend on the start position directly, rather than on an internal representation of the start position BIBREF10 . The contributions of this paper include: 1) a probabilistic formulation of the OpenQA task, based on a three-level hierarchical structure, i.e., the question level, the paragraph level and the answer span level; 2) the proposal of an end-to-end HAS-QA model to implement the three-level probabilistic formulation of the OpenQA task (Section "HAS-QA Model" ), which tackles the three problems of directly applying existing RC models to OpenQA; 3) extensive experiments on the QuasarT, TriviaQA and SearchQA datasets, which show that HAS-QA outperforms traditional RC baselines and recent OpenQA baselines.
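The three-level formulation can be written out as follows; the notation is a reconstruction from the prose above, not copied from the paper.

```latex
% Question level: answer probability via the law of total probability over
% the retrieved paragraph collection P.
\begin{align}
  \underbrace{P(a \mid q, \mathcal{P})}_{\text{answer prob.}}
    = \sum_{p \in \mathcal{P}}
      \underbrace{P(p \mid q)}_{\text{paragraph prob.}}\,
      \underbrace{P(a \mid q, p)}_{\text{cond.\ answer prob.}} .
\end{align}
% Paragraph level: aggregate over the multiple spans S(a, p) of paragraph p
% that match the answer string a.
\begin{align}
  P(a \mid q, p)
    = \operatorname*{agg}_{s \in \mathcal{S}(a, p)} P(s \mid q, p),
    \qquad \operatorname{agg} \in \{\mathrm{HEAD}, \mathrm{RAND},
                                    \mathrm{MAX}, \mathrm{SUM}\}.
\end{align}
% Span level: the end position is conditioned on the start position,
% realized by the conditional pointer network.
\begin{align}
  \underbrace{P(s \mid q, p)}_{\text{span prob.}}
    = P(s_{\mathrm{start}} \mid q, p)\,
      P(s_{\mathrm{end}} \mid s_{\mathrm{start}}, q, p).
\end{align}
```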
833
What is WNGT 2019 shared task?
efficiency task aimed at reducing the number of parameters while minimizing drop in performance
The success of neural summarization models stems from the meticulous encoding of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages templates discovered from training data to softly select key information from each source article to guide its summarization process. Extensive experiments on a standard summarization dataset were conducted, and the results show that the template-equipped BiSET model manages to improve summarization performance significantly, with a new state of the art.
Abstractive summarization aims to shorten a source article or paragraph by rewriting while preserving the main idea. Due to the difficulties in rewriting long documents, a large body of research on this topic has focused on paragraph-level article summarization. Among them, sequence-to-sequence models have become the mainstream, and some have achieved state-of-the-art performance BIBREF0 , BIBREF1 , BIBREF2 . In general, the only information available to these models during decoding is the source article representations from the encoder and the words generated at previous time steps BIBREF2 , BIBREF3 , BIBREF4 , while the previous words are themselves generated based on the article representations. Since natural language text is complicated and verbose in nature, and training data is insufficient in size to help the models distinguish important article information from noise, sequence-to-sequence models tend to deteriorate as word generation accumulates; e.g., they frequently generate irrelevant and repeated words BIBREF5 . Template-based summarization BIBREF6 is an effective approach to traditional abstractive summarization, in which a number of hard templates are manually created by domain experts, and key snippets are then extracted and populated into the templates to form the final summaries. The advantage of such an approach is that it can guarantee concise and coherent summaries without the need for any training data. However, it is unrealistic to create all the templates manually, since this work requires considerable domain knowledge and is labor-intensive. Fortunately, the summaries of some specific training articles can provide guidance for summarization similar to hard templates. Accordingly, these summaries are referred to as soft templates, or templates for simplicity, in this paper. Despite their potential in relieving the verbosity and insufficiency problems of natural language data, templates have not been exploited to full advantage. For example, Cao et al. (2018) simply concatenated the template encoding after the source article in their summarization work. To this end, we propose a Bi-directional Selective Encoding with Template (BiSET) model for abstractive sentence summarization. Our model involves a novel bi-directional selective layer with two gates to mutually select key information from an article and its template to assist with summary generation. Due to the limitations in obtaining handcrafted templates, we further propose a multi-stage process for the automatic retrieval of high-quality templates from the training corpus. Extensive experiments were conducted on the Gigaword dataset BIBREF0 , a public dataset widely used for abstractive sentence summarization, and the results appear quite promising. Merely using the templates selected by our approach as the final summaries, our model can already achieve performance superior to some baseline models, demonstrating the effect of our templates. This may also indicate the availability of many quality templates in the corpus. Secondly, the template-equipped summarization model, BiSET, significantly outperforms all the state-of-the-art models. To evaluate the importance of the bi-directional selective layer and the two gates, we conducted an ablation study by discarding them respectively, and the results show that, while both gates are necessary, the template-to-article (T2A) gate tends to be more important than the article-to-template (A2T) gate.
A human evaluation further validates the effectiveness of our model in generating informative, concise and readable summaries. The contributions of this work include:
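A minimal sketch of the bi-directional selective layer described above, under assumed parameterizations of the two gates (the paper's exact equations may differ): the T2A gate uses the template to softly select key article information, and the A2T gate controls how much template signal is kept.

```python
# A sketch of the bi-directional selective layer with T2A and A2T gates;
# the gate parameterization here is an assumption for illustration.
import torch
import torch.nn as nn

class BiSelective(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        self.w_t2a = nn.Linear(2 * d, d)   # gate on article, driven by template
        self.w_a2t = nn.Linear(2 * d, d)   # gate on template, driven by article

    def forward(self, h_art, h_tpl):
        # h_art: (B, T, d) article states; h_tpl: (B, d) pooled template.
        tpl = h_tpl.unsqueeze(1).expand(-1, h_art.size(1), -1)
        g_t2a = torch.sigmoid(self.w_t2a(torch.cat([h_art, tpl], dim=-1)))
        art_sel = h_art * g_t2a        # template softly selects article info
        g_a2t = torch.sigmoid(self.w_a2t(torch.cat([tpl, h_art], dim=-1)))
        tpl_sel = tpl * g_a2t          # article filters the template in turn
        return art_sel, tpl_sel

a, t = BiSelective()(torch.randn(2, 12, 256), torch.randn(2, 256))
print(a.shape, t.shape)  # torch.Size([2, 12, 256]) torch.Size([2, 12, 256])
```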
837
Was any variation in results observed based on language typology?
Some variability is observed, but it is not significant. BERT does not seem to gain much more syntax information than is available from type-level information.
The role of social media, in particular microblogging platforms such as Twitter, as a conduit for actionable and tactical information during disasters is increasingly acknowledged. However, time-critical analysis of big crisis data on social media streams brings challenges to machine learning techniques, especially the ones that use supervised learning. The scarcity of labeled data, particularly in the early hours of a crisis, delays the machine learning process. The current state-of-the-art classification methods require a significant amount of labeled data specific to a particular event for training, plus a lot of feature engineering, to achieve the best results. In this work, we introduce neural network based classification methods for binary and multi-class tweet classification tasks. We show that neural network based models do not require any feature engineering and perform better than state-of-the-art methods. In the early hours of a disaster when no labeled data is available, our proposed method makes the best use of the out-of-event data and achieves good results.
Time-critical analysis of social media data streams is important for many application areas. For instance, responders to humanitarian disasters (e.g., earthquake, flood) need information about the disasters to determine what help is needed and where. This information usually breaks out on social media before other sources. During the onset of a crisis situation, rapid analysis of messages posted on microblogging platforms such as Twitter can help humanitarian organizations like the United Nations gain situational awareness, learn about urgent needs of affected people at different locations, and decide on actions accordingly BIBREF0 , BIBREF1 . Artificial Intelligence for Disaster Response (AIDR) is an online platform to support this cause BIBREF2 . During a disaster, any person or organization can use it to collect tweets related to the event. The total volume of all tweets can be huge, about 350 thousand tweets per minute. Filtering them using keywords helps cut down this volume to some extent, but identifying the different kinds of useful tweets that responders can act upon cannot be achieved using keywords alone, because a large number of tweets may contain the keywords yet be of limited utility for the responders. The best-known solution to this problem is to use supervised classifiers to separate useful tweets from the rest. Classifying tweets to identify their usefulness is difficult because: tweets are short – only 140 characters – and therefore hard to understand without enough context; they often contain abbreviations and informal language and are ambiguous; and, finally, determining whether a tweet is useful in a disaster situation and identifying the required actions for relief operations is a hard task because of its subjectivity. Individuals differ in their judgement about whether a tweet is useful or not, and sometimes about which topical class it belongs to, especially when a tweet contains information that could be classified into multiple topical classes. Given this ambiguity, a computer cannot agree with annotators at a rate higher than the rate at which the annotators agree with each other. Despite advances in natural language processing (NLP), automatically interpreting the semantics of short informal texts remains a hard problem. To classify disaster-related tweets, traditional classification approaches use batch learning with discrete representations of words. This approach has three major limitations. First, at the beginning of a disaster situation, there is no event-specific labeled data available for training; the labeled data arrives later in small batches, depending on the availability of geographically dispersed volunteers. These learning algorithms depend on the labeled data of the event for training, and due to their discrete word representations, they perform poorly when trained on data from previous events (out-of-event data). The second limitation is the offline learning style, which inputs the complete labeled data and trains a model. This is computationally expensive in a disaster situation where labeled data arrives in batches: one would need to train a classifier from scratch every time a new batch of labeled data arrives. Thirdly, these approaches require manually engineered features like cue words and TF-IDF vectors BIBREF3 for learning. Deep neural networks (DNNs) are based on an online learning mechanism and have the flexibility to adaptively learn from new batches of labeled data without requiring retraining from scratch.
Due to their distributed word representations, they generalize well and make better use of previously labeled data from other events to speed up the classification process at the beginning of a disaster. DNNs automatically learn latent features as distributed dense vectors, which generalize well and have been shown to benefit various NLP tasks BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . In this paper, we propose a convolutional neural network (CNN) for the classification task. The CNN captures the most salient $n$ -gram information by means of its convolution and max-pooling operations. On top of the typical CNN, we propose an extension that combines a multilayer perceptron with a CNN. We present a series of experiments using different variations of the training data – event data only, out-of-event data only, and a concatenation of both. Experiments are conducted for binary and multi-class classification tasks. For the event-only binary classification task, the CNN model outperformed in four out of five tasks with an accuracy gain of up to 4.5 absolute points. In the scenario of no event data, the CNN model shows a substantial improvement of up to 18 absolute points over several non-neural models. This makes the neural network model an ideal choice for tweet classification in the early hours of a disaster. When the event data is combined with out-of-event data, we see similar results as in the case of event-only training. For multi-class classification, the CNN model outperformed in a similar fashion as in the case of binary classification. Our variation of the CNN model with a multilayer perceptron (MLP-CNN) performed better than its CNN counterpart. In some cases, adding out-of-event data drops the performance. To reduce the effect of the large out-of-event data and to make the most of it, we apply a simple event selection technique based on TF-IDF and select only those events that are most similar to the event under consideration. We then train the classifiers on the concatenation of the event data and the selected out-of-event data. The performance improves only for the event with small event data. To summarize, we show that neural network models can be used reliably with already available out-of-event data for binary and multi-class classification. Their automatic feature learning capabilities bring additional value on top of non-neural classification methods. The MLP-CNN results show that there is still room for improvement on top of the best accuracy achieved. The rest of the paper is organized as follows. We summarize related work in Section "Related Work" . Section "Convolutional Neural Network" presents the convolutional neural model. Section "Experimental Settings" describes the dataset and the training settings of the models. Section "Results" presents our results and analysis. We conclude and discuss future work in Section "Conclusion and Future Work" .
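A minimal PyTorch sketch of the CNN classifier described above, with an MLP head standing in for the MLP-CNN variant; vocabulary size, filter widths, and layer sizes are assumptions.

```python
# A sketch of the CNN tweet classifier: embeddings, 1-D convolutions over
# n-grams, max-pooling, and an MLP head (the MLP-CNN variant).
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab=20000, emb=100, n_filters=100,
                 widths=(3, 4, 5), n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb, n_filters, w) for w in widths)
        self.mlp = nn.Sequential(
            nn.Linear(n_filters * len(widths), 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, tokens):                    # tokens: (B, T)
        x = self.embed(tokens).transpose(1, 2)    # (B, emb, T)
        # Each convolution captures the most salient n-grams of one width;
        # max-pooling keeps the strongest activation per filter.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.mlp(torch.cat(pooled, dim=1))

print(TweetCNN()(torch.randint(0, 20000, (4, 30))).shape)  # (4, 2)
```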
843
Can the approach be generalized to other technical domains as well?
There is no reason to think that this approach would not also be successful for other technical domains. Technical terms are replaced with tokens, so as long as there is a corresponding process for identifying and replacing technical terms in the new domain, this approach could be viable.
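As a toy illustration of the term-replacement step the answer refers to (the term list, token scheme, and function name are assumptions made for illustration):

```python
import re

def mask_technical_terms(text, term_list, token="<TERM>"):
    """Replace each known technical term with a placeholder token so a
    downstream model sees domain-independent input. The term list would
    come from a domain-specific identification step."""
    # Longest terms first, so multiword terms match before their substrings.
    for term in sorted(term_list, key=len, reverse=True):
        text = re.sub(re.escape(term), token, text, flags=re.IGNORECASE)
    return text

print(mask_technical_terms(
    "The patient received acetaminophen for fever.",
    ["acetaminophen"]))
# -> "The patient received <TERM> for fever."
```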
Existing approaches to automatic summarization assume that a length limit for the summary is given, and view content selection as an optimization problem to maximize informativeness and minimize redundancy within this budget. This framework ignores the fact that human-written summaries have rich internal structure which can be exploited to train a summarization system. We present NEXTSUM, a novel approach to summarization based on a model that predicts the next sentence to include in the summary using not only the source article, but also the summary produced so far. We show that such a model successfully captures summary-specific discourse moves, and leads to better content selection performance, in addition to automatically predicting how long the target summary should be. We perform experiments on the New York Times Annotated Corpus of summaries, where NEXTSUM outperforms lead and content-model summarization baselines by significant margins. We also show that the lengths of summaries produced by our system correlate with the lengths of the human-written gold standards.
Writing a summary is a different task compared to producing a longer article. As a consequence, it is likely that the topic and discourse moves made in summaries differ from those in regular articles. In this work, we present a powerful extractive summarization system which exploits rich summary-internal structure to perform content selection, redundancy reduction, and even predict the target summary length, all in one joint model.

Text summarization has been addressed by numerous techniques in the community BIBREF0 . For extractive summarization, which is the focus of this paper, a popular task setup is to generate summaries that respect a fixed length limit. In the summarization shared tasks of the past Document Understanding Conferences (DUC), these limits are defined in terms of words or bytes. As a result, much work has framed summarization as a constrained optimization problem, in order to select a subset of sentences with desirable summary qualities such as informativeness, coherence, and non-redundancy within the length budget BIBREF1 , BIBREF2 , BIBREF3 .

One problem with this setup is that it does not match many real-world summarization settings. For example, writers can tailor the length of their summaries to the amount of noteworthy content in the source article. Summaries created by news editors for archives, such as the New York Times Annotated Corpus BIBREF4 , exhibit a variety of lengths. There is also evidence that in the context of web search, people prefer summaries of different lengths for the documents in search results depending on the type of the search query BIBREF5 .

More generally, current systems focus heavily on properties of the source document to learn to identify important sentences, and score the coherence of sentence transitions. They reason about the content of summaries primarily for purposes of avoiding redundancy and respecting the length budget. But they ignore the idea that it might actually be useful to learn content structure and discourse planning for summaries from large collections of multi-sentence summaries.

This work proposes an extractive summarization system that focuses on capturing rich summary-internal structure. Our key idea is that since summaries in a domain often follow some predictable structure, a partial summary or set of summary sentences should help predict other summary sentences. We formalize this intuition in a model called NextSum, which selects the next summary sentence based not only on properties of the source text, but also on the previously selected sentences in the summary. An example choice is shown in Table 1 . This setup allows our model to capture summary-specific discourse and topic transitions. For example, it can learn to expand on a topic that is already mentioned in the summary, or to introduce a new topic. It can learn to follow a script or discourse relations that are expected for that domain's summaries. It can even learn to predict the end of the summary, avoiding the need to explicitly define a length cutoff.

The core of our system is a next-sentence prediction component, which is a feed-forward neural network driven by features capturing the prevalence of domain subtopics in the source and the summary, sentence importance in the source, and coverage of the source document by the summary so far. A full summary can then be generated by repeatedly predicting the next sentence until the model predicts that the summary should end.
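To make this generation procedure concrete, a greedy decoding loop might look like the following sketch. The `featurize` and `score_next` functions are hypothetical stand-ins for the paper's feature extraction and feed-forward scorer, and the stopping scheme is an assumption about how an end-of-summary decision could be wired in:

```python
END = "<END>"  # pseudo-sentence the model can pick to terminate the summary

def generate_summary(source_sentences, featurize, score_next, max_len=50):
    """Greedily build a summary: at each step, score every remaining source
    sentence (plus END) given the source and the summary so far, and take
    the best. Picking END stops generation, so no external length cutoff
    is needed."""
    summary, remaining = [], list(source_sentences)
    while remaining and len(summary) < max_len:
        candidates = remaining + [END]
        best = max(candidates,
                   key=lambda c: score_next(featurize(source_sentences,
                                                      summary, c)))
        if best == END:
            break
        summary.append(best)
        remaining.remove(best)  # avoids selecting the same sentence twice
    return summary

# Toy check with stand-in functions: candidates score by length, with a
# penalty that grows as the summary grows, so END eventually wins.
def dummy_featurize(source, summary, candidate):
    n_words = 0 if candidate == END else len(candidate.split())
    return {"cand_len": n_words, "summary_len": len(summary)}

def dummy_score(feats):
    if feats["cand_len"] == 0:  # the END pseudo-sentence
        return 0.0
    return feats["cand_len"] - 4.0 * feats["summary_len"]

print(generate_summary(["a b c", "d e", "f"], dummy_featurize, dummy_score))
# -> ['a b c']
```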
Since summary-specific moves may depend on the domain, we first explore domain-specific summarization on event-oriented news topics (War Crimes, Assassinations, Bombs) from the New York Times Annotated Corpus BIBREF4 . We also train a domain-general model across multiple types of events. NextSum predicts the next summary sentence with remarkably high accuracy, reaching 67% compared to a chance accuracy of 9%. The generated summaries outperform the lead baseline as well as domain-specific summarization baselines, without requiring an explicit redundancy check or a length constraint. Moreover, the system produces summaries of variable lengths, which correlate with the lengths of the human-written summaries for the same texts.