Unnamed: 0 (int64, 0–886) | question (string, 19–151 chars) | answer (string, 1–1.08k chars) | abstract (string, 279–2.02k chars) | introduction (string, 52–9.04k chars, may be null) |
---|---|---|---|---|
0 | How does their model learn using mostly raw data? | by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity | Recognizing affective events that trigger positive or negative sentiment has a wide range of natural language processing applications but remains a challenging problem mainly because the polarity of an event is not necessarily predictable from its constituent words. In this paper, we propose to propagate affective polarity using discourse relations. Our method is simple and only requires a very small seed lexicon and a large raw corpus. Our experiments using Japanese data show that our method learns affective events effectively without manually labeled data. It also improves supervised learning results when labeled data are small. | Affective events BIBREF0 are events that typically affect people in positive or negative ways. For example, getting money and playing sports are usually positive to the experiencers; catching cold and losing one's wallet are negative. Understanding affective events is important to various natural language processing (NLP) applications such as dialogue systems BIBREF1, question-answering systems BIBREF2, and humor recognition BIBREF3. In this paper, we work on recognizing the polarity of an affective event that is represented by a score ranging from $-1$ (negative) to 1 (positive). Learning affective events is challenging because, as the examples above suggest, the polarity of an event is not necessarily predictable from its constituent words. Combined with the unbounded combinatorial nature of language, the non-compositionality of affective polarity entails the need for large amounts of world knowledge, which can hardly be learned from small annotated data. In this paper, we propose a simple and effective method for learning affective events that only requires a very small seed lexicon and a large raw corpus. As illustrated in Figure FIGREF1, our key idea is that we can exploit discourse relations BIBREF4 to efficiently propagate polarity from seed predicates that directly report one's emotions (e.g., “to be glad” is positive). Suppose that events $x_1$ are $x_2$ are in the discourse relation of Cause (i.e., $x_1$ causes $x_2$). If the seed lexicon suggests $x_2$ is positive, $x_1$ is also likely to be positive because it triggers the positive emotion. The fact that $x_2$ is known to be negative indicates the negative polarity of $x_1$. Similarly, if $x_1$ and $x_2$ are in the discourse relation of Concession (i.e., $x_2$ in spite of $x_1$), the reverse of $x_2$'s polarity can be propagated to $x_1$. Even if $x_2$'s polarity is not known in advance, we can exploit the tendency of $x_1$ and $x_2$ to be of the same polarity (for Cause) or of the reverse polarity (for Concession) although the heuristic is not exempt from counterexamples. We transform this idea into objective functions and train neural network models that predict the polarity of a given event. We trained the models using a Japanese web corpus. Given the minimum amount of supervision, they performed well. In addition, the combination of annotated and unannotated data yielded a gain over a purely supervised baseline when labeled data were small. |
2 | How did the select the 300 Reddit communities for comparison? | They selected all the subreddits from January 2013 to December 2014 with at least 500 words in the vocabulary and at least 4 months of the subreddit's history. They also removed communities with the bulk of the contributions are in foreign language. | A community's identity defines and shapes its internal dynamics. Our current understanding of this interplay is mostly limited to glimpses gathered from isolated studies of individual communities. In this work we provide a systematic exploration of the nature of this relation across a wide variety of online communities. To this end we introduce a quantitative, language-based typology reflecting two key aspects of a community's identity: how distinctive, and how temporally dynamic it is. By mapping almost 300 Reddit communities into the landscape induced by this typology, we reveal regularities in how patterns of user engagement vary with the characteristics of a community. Our results suggest that the way new and existing users engage with a community depends strongly and systematically on the nature of the collective identity it fosters, in ways that are highly consequential to community maintainers. For example, communities with distinctive and highly dynamic identities are more likely to retain their users. However, such niche communities also exhibit much larger acculturation gaps between existing users and newcomers, which potentially hinder the integration of the latter. More generally, our methodology reveals differences in how various social phenomena manifest across communities, and shows that structuring the multi-community landscape can lead to a better understanding of the systematic nature of this diversity. | “If each city is like a game of chess, the day when I have learned the rules, I shall finally possess my empire, even if I shall never succeed in knowing all the cities it contains.” — Italo Calvino, Invisible Cities A community's identity—defined through the common interests and shared experiences of its users—shapes various facets of the social dynamics within it BIBREF0 , BIBREF1 , BIBREF2 . Numerous instances of this interplay between a community's identity and social dynamics have been extensively studied in the context of individual online communities BIBREF3 , BIBREF4 , BIBREF5 . However, the sheer variety of online platforms complicates the task of generalizing insights beyond these isolated, single-community glimpses. A new way to reason about the variation across multiple communities is needed in order to systematically characterize the relationship between properties of a community and the dynamics taking place within. One especially important component of community dynamics is user engagement. We can aim to understand why users join certain communities BIBREF6 , what factors influence user retention BIBREF7 , and how users react to innovation BIBREF5 . While striking patterns of user engagement have been uncovered in prior case studies of individual communities BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , we do not know whether these observations hold beyond these cases, or when we can draw analogies between different communities. Are there certain types of communities where we can expect similar or contrasting engagement patterns? To address such questions quantitatively we need to provide structure to the diverse and complex space of online communities. 
Organizing the multi-community landscape would allow us to both characterize individual points within this space, and reason about systematic variations in patterns of user engagement across the space. Present work: Structuring the multi-community space. In order to systematically understand the relationship between community identityand user engagement we introduce a quantitative typology of online communities. Our typology is based on two key aspects of community identity: how distinctive—or niche—a community's interests are relative to other communities, and how dynamic—or volatile—these interests are over time. These axes aim to capture the salience of a community's identity and dynamics of its temporal evolution. Our main insight in implementing this typology automatically and at scale is that the language used within a community can simultaneously capture how distinctive and dynamic its interests are. This language-based approach draws on a wealth of literature characterizing linguistic variation in online communities and its relationship to community and user identity BIBREF16 , BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 . Basing our typology on language is also convenient since it renders our framework immediately applicable to a wide variety of online communities, where communication is primarily recorded in a textual format. Using our framework, we map almost 300 Reddit communities onto the landscape defined by the two axes of our typology (Section SECREF2 ). We find that this mapping induces conceptually sound categorizations that effectively capture key aspects of community-level social dynamics. In particular, we quantitatively validate the effectiveness of our mapping by showing that our two-dimensional typology encodes signals that are predictive of community-level rates of user retention, complementing strong activity-based features. Engagement and community identity. We apply our framework to understand how two important aspects of user engagement in a community—the community's propensity to retain its users (Section SECREF3 ), and its permeability to new members (Section SECREF4 )—vary according to the type of collective identity it fosters. We find that communities that are characterized by specialized, constantly-updating content have higher user retention rates, but also exhibit larger linguistic gaps that separate newcomers from established members. More closely examining factors that could contribute to this linguistic gap, we find that especially within distinctive communities, established users have an increased propensity to engage with the community's specialized content, compared to newcomers (Section SECREF5 ). Interestingly, while established members of distinctive communities more avidly respond to temporal updates than newcomers, in more generic communities it is the outsiders who engage more with volatile content, perhaps suggesting that such content may serve as an entry-point to the community (but not necessarily a reason to stay). Such insights into the relation between collective identity and user engagement can be informative to community maintainers seeking to better understand growth patterns within their online communities. More generally, our methodology stands as an example of how sociological questions can be addressed in a multi-community setting. In performing our analyses across a rich variety of communities, we reveal both the diversity of phenomena that can occur, as well as the systematic nature of this diversity. |
3 | Is all text in this dataset a question, or are there unrelated sentences in between questions? | the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins so it does include additional sentences | Clinical text structuring is a critical and fundamental task for clinical research. Traditional methods such as taskspecific end-to-end models and pipeline models usually suffer from the lack of dataset and error propagation. In this paper, we present a question answering based clinical text structuring (QA-CTS) task to unify different specific tasks and make dataset shareable. A novel model that aims to introduce domain-specific features (e.g., clinical named entity information) into pre-trained language model is also proposed for QA-CTS task. Experimental results on Chinese pathology reports collected from Ruijing Hospital demonstrate our presented QA-CTS task is very effective to improve the performance on specific tasks. Our proposed model also competes favorably with strong baseline models in specific tasks. | Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches. However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost. Traditionally, CTS tasks can be addressed by rule and dictionary based methods BIBREF0, BIBREF1, BIBREF2, task-specific end-to-end methods BIBREF3, BIBREF4, BIBREF5, BIBREF6 and pipeline methods BIBREF7, BIBREF8, BIBREF9. Rule and dictionary based methods suffer from costly human-designed extraction rules, while task-specific end-to-end methods have non-uniform output formats and require task-specific training dataset. Pipeline methods break down the entire process into several pieces which improves the performance and generality. However, when the pipeline depth grows, error propagation will have a greater impact on the performance. To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). 
While for other cases, it needs several steps to obtain the final answer, such as entity name conversion and negative word recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and makes the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows. We first present a question answering based clinical text structuring (QA-CTS) task, which unifies different specific tasks and makes the dataset shareable. We also propose an effective model to integrate clinical named entity information into a pre-trained language model. Experimental results show that the QA-CTS task leads to significant improvement due to the shared dataset. Our proposed model also achieves significantly better performance than the strong baseline methods. In addition, we also show that the two-stage training mechanism yields a great improvement on the QA-CTS task. The rest of the paper is organized as follows. We briefly review the related work on clinical text structuring in Section SECREF2. Then, we present the question answering based clinical text structuring task in Section SECREF3. In Section SECREF4, we present an effective model for this task. Section SECREF5 is devoted to computational studies and several investigations on the key issues of our proposed model. Finally, conclusions are given in Section SECREF6. |
4 | What aspects have been compared between various language models? | Quality measures using perplexity and recall, and performance measured using latency and energy usage. | In recent years, we have witnessed a dramatic shift towards techniques driven by neural networks for a variety of NLP tasks. Undoubtedly, neural language models (NLMs) have reduced perplexity by impressive amounts. This progress, however, comes at a substantial cost in performance, in terms of inference latency and energy consumption, which is particularly of concern in deployments on mobile devices. This paper, which examines the quality-performance tradeoff of various language modeling techniques, represents to our knowledge the first to make this observation. We compare state-of-the-art NLMs with"classic"Kneser-Ney (KN) LMs in terms of energy usage, latency, perplexity, and prediction accuracy using two standard benchmarks. On a Raspberry Pi, we find that orders of increase in latency and energy usage correspond to less change in perplexity, while the difference is much less pronounced on a desktop. | Deep learning has unquestionably advanced the state of the art in many natural language processing tasks, from syntactic dependency parsing BIBREF0 to named-entity recognition BIBREF1 to machine translation BIBREF2 . The same certainly applies to language modeling, where recent advances in neural language models (NLMs) have led to dramatically better approaches as measured using standard metrics such as perplexity BIBREF3 , BIBREF4 . Specifically focused on language modeling, this paper examines an issue that to our knowledge has not been explored: advances in neural language models have come at a significant cost in terms of increased computational complexity. Computing the probability of a token sequence using non-neural techniques requires a number of phrase lookups and perhaps a few arithmetic operations, whereas model inference with NLMs require large matrix multiplications consuming perhaps millions of floating point operations (FLOPs). These performance tradeoffs are worth discussing. In truth, language models exist in a quality–performance tradeoff space. As model quality increases (e.g., lower perplexity), performance as measured in terms of energy consumption, query latency, etc. tends to decrease. For applications primarily running in the cloud—say, machine translation—practitioners often solely optimize for the lowest perplexity. This is because such applications are embarrassingly parallel and hence trivial to scale in a data center environment. There are, however, applications of NLMs that require less one-sided optimizations. On mobile devices such as smartphones and tablets, for example, NLMs may be integrated into software keyboards for next-word prediction, allowing much faster text entry. Popular Android apps that enthusiastically tout this technology include SwiftKey and Swype. The greater computational costs of NLMs lead to higher energy usage in model inference, translating into shorter battery life. In this paper, we examine the quality–performance tradeoff in the shift from non-neural to neural language models. In particular, we compare Kneser–Ney smoothing, widely accepted as the state of the art prior to NLMs, to the best NLMs today. The decrease in perplexity on standard datasets has been well documented BIBREF3 , but to our knowledge no one has examined the performances tradeoffs. 
With deployment on a mobile device in mind, we evaluate energy usage and inference latency on a Raspberry Pi (which shares the same ARM architecture as nearly all smartphones today). We find that a 2.5 $\times $ reduction in perplexity on PTB comes at a staggering cost in terms of performance: inference with NLMs takes 49 $\times $ longer and requires 32 $\times $ more energy. Furthermore, we find that impressive reductions in perplexity translate into at best modest improvements in next-word prediction, which is arguably a better metric for evaluating software keyboards on a smartphone. The contribution of this paper is the first known elucidation of this quality–performance tradeoff. Note that we refrain from prescriptive recommendations: whether or not a tradeoff is worthwhile depends on the application. Nevertheless, NLP engineers should arguably keep these tradeoffs in mind when selecting a particular operating point. |
6 | How many attention layers are there in their model? | one | Saliency map generation techniques are at the forefront of explainable AI literature for a broad range of machine learning applications. Our goal is to question the limits of these approaches on more complex tasks. In this paper we apply Layer-Wise Relevance Propagation (LRP) to a sequence-to-sequence attention model trained on a text summarization dataset. We obtain unexpected saliency maps and discuss the rightfulness of these"explanations". We argue that we need a quantitative way of testing the counterfactual case to judge the truthfulness of the saliency maps. We suggest a protocol to check the validity of the importance attributed to the input and show that the saliency maps obtained sometimes capture the real use of the input features by the network, and sometimes do not. We use this example to discuss how careful we need to be when accepting them as explanation. | Ever since the LIME algorithm BIBREF0 , "explanation" techniques focusing on finding the importance of input features in regard of a specific prediction have soared and we now have many ways of finding saliency maps (also called heat-maps because of the way we like to visualize them). We are interested in this paper by the use of such a technique in an extreme task that highlights questions about the validity and evaluation of the approach. We would like to first set the vocabulary we will use. We agree that saliency maps are not explanations in themselves and that they are more similar to attribution, which is only one part of the human explanation process BIBREF1 . We will prefer to call this importance mapping of the input an attribution rather than an explanation. We will talk about the importance of the input relevance score in regard to the model's computation and not make allusion to any human understanding of the model as a result. There exist multiple ways to generate saliency maps over the input for non-linear classifiers BIBREF2 , BIBREF3 , BIBREF4 . We refer the reader to BIBREF5 for a survey of explainable AI in general. We use in this paper Layer-Wise Relevance Propagation (LRP) BIBREF2 which aims at redistributing the value of the classifying function on the input to obtain the importance attribution. It was first created to “explain" the classification of neural networks on image recognition tasks. It was later successfully applied to text using convolutional neural networks (CNN) BIBREF6 and then Long-Short Term Memory (LSTM) networks for sentiment analysis BIBREF7 . Our goal in this paper is to test the limits of the use of such a technique for more complex tasks, where the notion of input importance might not be as simple as in topic classification or sentiment analysis. We changed from a classification task to a generative task and chose a more complex one than text translation (in which we can easily find a word to word correspondence/importance between input and output). We chose text summarization. We consider abstractive and informative text summarization, meaning that we write a summary “in our own words" and retain the important information of the original text. We refer the reader to BIBREF8 for more details on the task and the different variants that exist. Since the success of deep sequence-to-sequence models for text translation BIBREF9 , the same approaches have been applied to text summarization tasks BIBREF10 , BIBREF11 , BIBREF12 which use architectures on which we can apply LRP. 
We obtain one saliency map for each word in the generated summaries, supposed to represent the use of the input features for each element of the output sequence. We observe that all the saliency maps for a text are nearly identical and decorrelated with the attention distribution. We propose a way to check their validity by creating what could be seen as a counterfactual experiment from a synthesis of the saliency maps, using the same technique as in Arras et al. Arras2017. We show that in some but not all cases they help identify the important input features and that we need to rigorously check importance attributions before trusting them, regardless of whether or not the mapping “makes sense" to us. We finally argue that in the process of identifying the important input features, verifying the saliency maps is as important as the generation step, if not more. |
7 | What are the three measures of bias which are reduced in experiments? | RIPA, Neighborhood Metric, WEAT | It has been shown that word embeddings derived from large corpora tend to incorporate biases present in their training data. Various methods for mitigating these biases have been proposed, but recent work has demonstrated that these methods hide but fail to truly remove the biases, which can still be observed in word nearest-neighbor statistics. In this work we propose a probabilistic view of word embedding bias. We leverage this framework to present a novel method for mitigating bias which relies on probabilistic observations to yield a more robust bias mitigation algorithm. We demonstrate that this method effectively reduces bias according to three separate measures of bias while maintaining embedding quality across various popular benchmark semantic tasks | Word embeddings, or vector representations of words, are an important component of Natural Language Processing (NLP) models and necessary for many downstream tasks. However, word embeddings, including embeddings commonly deployed for public use, have been shown to exhibit unwanted societal stereotypes and biases, raising concerns about disparate impact on axes of gender, race, ethnicity, and religion BIBREF0, BIBREF1. The impact of this bias has manifested in a range of downstream tasks, ranging from autocomplete suggestions BIBREF2 to advertisement delivery BIBREF3, increasing the likelihood of amplifying harmful biases through the use of these models. The most well-established method thus far for mitigating bias relies on projecting target words onto a bias subspace (such as a gender subspace) and subtracting out the difference between the resulting distances BIBREF0. On the other hand, the most popular metric for measuring bias is the WEAT statistic BIBREF1, which compares the cosine similarities between groups of words. However, WEAT has been recently shown to overestimate bias as a result of implicitly relying on similar frequencies for the target words BIBREF4, and BIBREF5 demonstrated that evidence of bias can still be recovered after geometric bias mitigation by examining the neighborhood of a target word among socially-biased words. In response to this, we propose an alternative framework for bias mitigation in word embeddings that approaches this problem from a probabilistic perspective. The motivation for this approach is two-fold. First, most popular word embedding algorithms are probabilistic at their core – i.e., they are trained (explicitly or implicitly BIBREF6) to minimize some form of word co-occurrence probabilities. Thus, we argue that a framework for measuring and treating bias in these embeddings should take into account, in addition to their geometric aspect, their probabilistic nature too. On the other hand, the issue of bias has also been approached (albeit in different contexts) in the fairness literature, where various intuitive notions of equity such as equalized odds have been formalized through probabilistic criteria. By considering analogous criteria for the word embedding setting, we seek to draw connections between these two bodies of work. We present experiments on various bias mitigation benchmarks and show that our framework is comparable to state-of-the-art alternatives according to measures of geometric bias mitigation and that it performs far better according to measures of neighborhood bias. 
For fair comparison, we focus on mitigating a binary gender bias in pre-trained word embeddings using SGNS (skip-gram with negative-sampling), though we note that this framework and methods could be extended to other types of bias and word embedding algorithms. |
10 | How big is the dataset? | 903019 references | In this paper, we introduce the citation data of the Czech apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). This dataset was automatically extracted from the corpus of texts of Czech court decisions - CzCDC 1.0. We obtained the citation data by building the natural language processing pipeline for extraction of the court decision identifiers. The pipeline included the (i) document segmentation model and the (ii) reference recognition model. Furthermore, the dataset was manually processed to achieve high-quality citation data as a base for subsequent qualitative and quantitative analyses. The dataset will be made available to the general public. | Analysis of the way court decisions refer to each other provides us with important insights into the decision-making process at courts. This is true both for the common law courts and for their counterparts in the countries belonging to the continental legal system. Citation data can be used for both qualitative and quantitative studies, casting light in the behavior of specific judges through document analysis or allowing complex studies into changing the nature of courts in transforming countries. That being said, it is still difficult to create sufficiently large citation datasets to allow a complex research. In the case of the Czech Republic, it was difficult to obtain a relevant dataset of the court decisions of the apex courts (Supreme Court, Supreme Administrative Court and Constitutional Court). Due to its size, it is nearly impossible to extract the references manually. One has to reach out for an automation of such task. However, study of court decisions displayed many different ways that courts use to cite even decisions of their own, not to mention the decisions of other courts.The great diversity in citations led us to the use of means of the natural language processing for the recognition and the extraction of the citation data from court decisions of the Czech apex courts. In this paper, we describe the tool ultimately used for the extraction of the references from the court decisions, together with a subsequent way of manual processing of the raw data to achieve a higher-quality dataset. Section SECREF2 maps the related work in the area of legal citation analysis (SectionSECREF1), reference recognition (Section SECREF2), text segmentation (Section SECREF4), and data availability (Section SECREF3). Section SECREF3 describes the method we used for the citation extraction, listing the individual models and the way we have combined these models into the NLP pipeline. Section SECREF4 presents results in the terms of evaluation of the performance of our pipeline, the statistics of the raw data, further manual processing and statistics of the final citation dataset. Section SECREF5 discusses limitations of our work and outlines the possible future development. Section SECREF6 concludes this paper. |
11 | How is the intensity of the PTSD established? | Given we have four intensity, No PTSD, Low Risk PTSD, Moderate Risk PTSD and High Risk PTSD with a score of 0, 1, 2 and 3 respectively, the estimated intensity is established as mean squared error. | Veteran mental health is a significant national problem as large number of veterans are returning from the recent war in Iraq and continued military presence in Afghanistan. While significant existing works have investigated twitter posts-based Post Traumatic Stress Disorder (PTSD) assessment using blackbox machine learning techniques, these frameworks cannot be trusted by the clinicians due to the lack of clinical explainability. To obtain the trust of clinicians, we explore the big question, can twitter posts provide enough information to fill up clinical PTSD assessment surveys that have been traditionally trusted by clinicians? To answer the above question, we propose, LAXARY (Linguistic Analysis-based Exaplainable Inquiry) model, a novel Explainable Artificial Intelligent (XAI) model to detect and represent PTSD assessment of twitter users using a modified Linguistic Inquiry and Word Count (LIWC) analysis. First, we employ clinically validated survey tools for collecting clinical PTSD assessment data from real twitter users and develop a PTSD Linguistic Dictionary using the PTSD assessment survey results. Then, we use the PTSD Linguistic Dictionary along with machine learning model to fill up the survey tools towards detecting PTSD status and its intensity of corresponding twitter users. Our experimental evaluation on 210 clinically validated veteran twitter users provides promising accuracies of both PTSD classification and its intensity estimation. We also evaluate our developed PTSD Linguistic Dictionary's reliability and validity. | Combat veterans diagnosed with PTSD are substantially more likely to engage in a number of high risk activities including engaging in interpersonal violence, attempting suicide, committing suicide, binge drinking, and drug abuse BIBREF0. Despite improved diagnostic screening, outpatient mental health and inpatient treatment for PTSD, the syndrome remains treatment resistant, is typically chronic, and is associated with numerous negative health effects and higher treatment costs BIBREF1. As a result, the Veteran Administration's National Center for PTSD (NCPTSD) suggests to reconceptualize PTSD not just in terms of a psychiatric symptom cluster, but focusing instead on the specific high risk behaviors associated with it, as these may be directly addressed though behavioral change efforts BIBREF0. Consensus prevalence estimates suggest that PTSD impacts between 15-20% of the veteran population which is typically chronic and treatment resistant BIBREF0. The PTSD patients support programs organized by different veterans peer support organization use a set of surveys for local weekly assessment to detect the intensity of PTSD among the returning veterans. However, recent advanced evidence-based care for PTSD sufferers surveys have showed that veterans, suffered with chronic PTSD are reluctant in participating assessments to the professionals which is another significant symptom of war returning veterans with PTSD. Several existing researches showed that, twitter posts of war veterans could be a significant indicator of their mental health and could be utilized to predict PTSD sufferers in time before going out of control BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8. 
However, all of the proposed methods relied on either blackbox machine learning methods or language models based sentiments extraction of posted texts which failed to obtain acceptability and trust of clinicians due to the lack of their explainability. In the context of the above research problem, we aim to answer the following research questions Given clinicians have trust on clinically validated PTSD assessment surveys, can we fill out PTSD assessment surveys using twitter posts analysis of war-veterans? If possible, what sort of analysis and approach are needed to develop such XAI model to detect the prevalence and intensity of PTSD among war-veterans only using the social media (twitter) analysis where users are free to share their everyday mental and social conditions? How much quantitative improvement do we observe in our model's ability to explain both detection and intensity estimation of PTSD? In this paper, we propose LAXARY, an explainable and trustworthy representation of PTSD classification and its intensity for clinicians. The key contributions of our work are summarized below, The novelty of LAXARY lies on the proposed clinical surveys-based PTSD Linguistic dictionary creation with words/aspects which represents the instantaneous perturbation of twitter-based sentiments as a specific pattern and help calculate the possible scores of each survey question. LAXARY includes a modified LIWC model to calculate the possible scores of each survey question using PTSD Linguistic Dictionary to fill out the PTSD assessment surveys which provides a practical way not only to determine fine-grained discrimination of physiological and psychological health markers of PTSD without incurring the expensive and laborious in-situ laboratory testing or surveys, but also obtain trusts of clinicians who are expected to see traditional survey results of the PTSD assessment. Finally, we evaluate the accuracy of LAXARY model performance and reliability-validity of generated PTSD Linguistic Dictionary using real twitter users' posts. Our results show that, given normal weekly messages posted in twitter, LAXARY can provide very high accuracy in filling up surveys towards identifying PTSD ($\approx 96\%$) and its intensity ($\approx 1.2$ mean squared error). |
13 | How is quality measured? | Accuracy and the macro-F1 (averaged F1 over positive and negative classes) are used as a measure of quality. | In this paper, we introduce UniSent, universal sentiment lexica for 1000 languages created using an English sentiment lexicon and a massively parallel corpus in the Bible domain. To the best of our knowledge, UniSent is the largest sentiment resource to date in terms of number of covered languages, including many low resource languages. To create UniSent, we propose Adapted Sentiment Pivot, a novel method that combines annotation projection, vocabulary expansion, and unsupervised domain adaptation. We evaluate the quality of UniSent for Macedonian, Czech, German, Spanish, and French and show that its quality is comparable to manually or semi-manually created sentiment resources. With the publication of this paper, we release the UniSent lexica as well as the code for the Adapted Sentiment Pivot method. | Sentiment classification is an important task which requires either word level or document level sentiment annotations. Such resources are available for at most 136 languages BIBREF0 , preventing accurate sentiment classification in a low resource setup. Recent research efforts on cross-lingual transfer learning enable training models in high resource languages and transferring this information into other, low resource languages using minimal bilingual supervision BIBREF1 , BIBREF2 , BIBREF3 . Besides that, little effort has been spent on the creation of sentiment lexica for low resource languages (e.g., BIBREF0 , BIBREF4 , BIBREF5 ). We create and release UniSent, the first massively cross-lingual sentiment lexicon in more than 1000 languages. An extensive evaluation across several languages shows that the quality of UniSent is close to manually created resources. Our method is inspired by BIBREF6 with a novel combination of vocabulary expansion and domain adaptation using embedding spaces. Similar to our work, BIBREF7 also use massively parallel corpora to project POS tags and dependency relations across languages. However, their approach is based on assignment of the most probable label according to the alignment model from the source to the target language and does not include any vocabulary expansion or domain adaptation, nor does it use the embedding graphs. |
15 | What is the accuracy reported by state-of-the-art methods? | Answer with content missing: (Table 1)
Previous state-of-the art on same dataset: ResNet50 89% (6 languages), SVM-HMM 70% (4 languages) | Language Identification (LI) is an important first step in several speech processing systems. With a growing number of voice-based assistants, speech LI has emerged as a widely researched field. To approach the problem of identifying languages, we can either adopt an implicit approach where only the speech for a language is present or an explicit one where text is available with its corresponding transcript. This paper focuses on an implicit approach due to the absence of transcriptive data. This paper benchmarks existing models and proposes a new attention based model for language identification which uses log-Mel spectrogram images as input. We also present the effectiveness of raw waveforms as features to neural network models for LI tasks. For training and evaluation of models, we classified six languages (English, French, German, Spanish, Russian and Italian) with an accuracy of 95.4% and four languages (English, French, German, Spanish) with an accuracy of 96.3% obtained from the VoxForge dataset. This approach can further be scaled to incorporate more languages. | Language Identification (LI) is a problem which involves classifying the language being spoken by a speaker. LI systems can be used in call centers to route international calls to an operator who is fluent in that identified language BIBREF0. In speech-based assistants, LI acts as the first step which chooses the corresponding grammar from a list of available languages for its further semantic analysis BIBREF1. It can also be used in multi-lingual voice-controlled information retrieval systems, for example, Apple Siri and Amazon Alexa. Over the years, studies have utilized many prosodic and acoustic features to construct machine learning models for LI systems BIBREF2. Every language is composed of phonemes, which are distinct unit of sounds in that language, such as b of black and g of green. Several prosodic and acoustic features are based on phonemes, which become the underlying features on whom the performance of the statistical model depends BIBREF3, BIBREF4. If two languages have many overlapping phonemes, then identifying them becomes a challenging task for a classifier. For example, the word cat in English, kat in Dutch, katze in German have different consonants but when used in a speech they all would sound quite similar. Due to such drawbacks several studies have switched over to using Deep Neural Networks (DNNs) to harness their novel auto-extraction techniques BIBREF1, BIBREF5. This work follows an implicit approach for identifying six languages with overlapping phonemes on the VoxForge BIBREF6 dataset and achieves 95.4% overall accuracy. In previous studies BIBREF1, BIBREF7, BIBREF5, authors use log-Mel spectrum of a raw audio as inputs to their models. One of our contributions is to enhance the performance of this approach by utilising recent techniques like Mixup augmentation of inputs and exploring the effectiveness of Attention mechanism in enhancing performance of neural network. As log-Mel spectrum needs to be computed for each raw audio input and processing time for generating log-Mel spectrum increases linearly with length of audio, this acts as a bottleneck for these models. Hence, we propose the use of raw audio waveforms as inputs to deep neural network which boosts performance by avoiding additional overhead of computing log-Mel spectrum for each audio. 
Our 1D-ConvNet architecture auto-extracts and classifies features from this raw audio input. The structure of the work is as follows. In Section 2 we discuss the previous related studies in this field. The model architecture for both the raw waveforms and log-Mel spectrogram images is discussed in Section 3 along with a discussion on hyperparameter space exploration. In Section 4 we present the experimental results. Finally, in Section 5 we discuss the conclusions drawn from the experiment and future work. |
19 | How do the authors define or exemplify 'incorrect words'? | typos in spellings or ungrammatical words | In this paper, we propose Stacked DeBERT, short for Stacked Denoising Bidirectional Encoder Representations from Transformers. This novel model improves robustness in incomplete data, when compared to existing systems, by designing a novel encoding scheme in BERT, a powerful language representation model solely based on attention mechanisms. Incomplete data in natural language processing refer to text with missing or incorrect words, and its presence can hinder the performance of current models that were not implemented to withstand such noises, but must still perform well even under duress. This is due to the fact that current approaches are built for and trained with clean and complete data, and thus are not able to extract features that can adequately represent incomplete data. Our proposed approach consists of obtaining intermediate input representations by applying an embedding layer to the input tokens followed by vanilla transformers. These intermediate features are given as input to novel denoising transformers which are responsible for obtaining richer input representations. The proposed approach takes advantage of stacks of multilayer perceptrons for the reconstruction of missing words' embeddings by extracting more abstract and meaningful hidden feature vectors, and bidirectional transformers for improved embedding representation. We consider two datasets for training and evaluation: the Chatbot Natural Language Understanding Evaluation Corpus and Kaggle's Twitter Sentiment Corpus. Our model shows improved F1-scores and better robustness in informal/incorrect texts present in tweets and in texts with Speech-to-Text error in the sentiment and intent classification tasks. | Understanding a user's intent and sentiment is of utmost importance for current intelligent chatbots to respond appropriately to human requests. However, current systems are not able to perform to their best capacity when presented with incomplete data, meaning sentences with missing or incorrect words. This scenario is likely to happen when one considers human error done in writing. In fact, it is rather naive to assume that users will always type fully grammatically correct sentences. Panko BIBREF0 goes as far as claiming that human accuracy regarding research paper writing is none when considering the entire document. This has been aggravated with the advent of internet and social networks, which allowed language and modern communication to be been rapidly transformed BIBREF1, BIBREF2. Take Twitter for instance, where information is expected to be readily communicated in short and concise sentences with little to no regard to correct sentence grammar or word spelling BIBREF3. Further motivation can be found in Automatic Speech Recognition (ASR) applications, where high error rates prevail and pose an enormous hurdle in the broad adoption of speech technology by users worldwide BIBREF4. This is an important issue to tackle because, in addition to more widespread user adoption, improving Speech-to-Text (STT) accuracy diminishes error propagation to modules using the recognized text. With that in mind, in order for current systems to improve the quality of their services, there is a need for development of robust intelligent systems that are able to understand a user even when faced with incomplete representation in language. 
The advancement of deep neural networks have immensely aided in the development of the Natural Language Processing (NLP) domain. Tasks such as text generation, sentence correction, image captioning and text classification, have been possible via models such as Convolutional Neural Networks and Recurrent Neural Networks BIBREF5, BIBREF6, BIBREF7. More recently, state-of-the-art results have been achieved with attention models, more specifically Transformers BIBREF8. Surprisingly, however, there is currently no research on incomplete text classification in the NLP community. Realizing the need of research in that area, we make it the focus of this paper. In this novel task, the model aims to identify the user's intent or sentiment by analyzing a sentence with missing and/or incorrect words. In the sentiment classification task, the model aims to identify the user's sentiment given a tweet, written in informal language and without regards for sentence correctness. Current approaches for Text Classification tasks focus on efficient embedding representations. Kim et al. BIBREF9 use semantically enriched word embeddings to make synonym and antonym word vectors respectively more and less similar in order to improve intent classification performance. Devlin et al. BIBREF10 propose Bidirectional Encoder Representations from Transformers (BERT), a powerful bidirectional language representation model based on Transformers, achieving state-of-the-art results on eleven NLP tasks BIBREF11, including sentiment text classification. Concurrently, Shridhar et al. BIBREF12 also reach state of the art in the intent recognition task using Semantic Hashing for feature representation followed by a neural classifier. All aforementioned approaches are, however, applied to datasets based solely on complete data. The incomplete data problem is usually approached as a reconstruction or imputation task and is most often related to missing numbers imputation BIBREF13. Vincent et al. BIBREF14, BIBREF15 propose to reconstruct clean data from its noisy version by mapping the input to meaningful representations. This approach has also been shown to outperform other models, such as predictive mean matching, random forest, Support Vector Machine (SVM) and Multiple imputation by Chained Equations (MICE), at missing data imputation tasks BIBREF16, BIBREF17. Researchers in those two areas have shown that meaningful feature representation of data is of utter importance for high performance achieving methods. We propose a model that combines the power of BERT in the NLP domain and the strength of denoising strategies in incomplete data reconstruction to tackle the tasks of incomplete intent and sentiment classification. This enables the implementation of a novel encoding scheme, more robust to incomplete data, called Stacked Denoising BERT or Stacked DeBERT. Our approach consists of obtaining richer input representations from input tokens by stacking denoising transformers on an embedding layer with vanilla transformers. The embedding layer and vanilla transformers extract intermediate input features from the input tokens, and the denoising transformers are responsible for obtaining richer input representations from them. By improving BERT with stronger denoising abilities, we are able to reconstruct missing and incorrect words' embeddings and improve classification accuracy. To summarize, our contribution is two-fold: Novel model architecture that is more robust to incomplete data, including missing or incorrect words in text. 
Proposal of the novel tasks of incomplete intent and sentiment classification from incorrect sentences, and release of corpora related to these tasks. The remainder of this paper is organized in four sections, with Section SECREF2 explaining the proposed model. This is followed by Section SECREF3, which includes a detailed description of the dataset used for training and evaluation purposes and how it was obtained. Section SECREF4 covers the baseline models used for comparison, training specifications and experimental results. Finally, Section SECREF5 wraps up this paper with conclusions and future work. |
21 | Which experiments are perfomed? | They used BERT-based models to detect subjective language in the WNC corpus | Subjective bias detection is critical for applications like propaganda detection, content recommendation, sentiment analysis, and bias neutralization. This bias is introduced in natural language via inflammatory words and phrases, casting doubt over facts, and presupposing the truth. In this work, we perform comprehensive experiments for detecting subjective bias using BERT-based models on the Wiki Neutrality Corpus(WNC). The dataset consists of $360k$ labeled instances, from Wikipedia edits that remove various instances of the bias. We further propose BERT-based ensembles that outperform state-of-the-art methods like $BERT_{large}$ by a margin of $5.6$ F1 score. | In natural language, subjectivity refers to the aspects of communication used to express opinions, evaluations, and speculationsBIBREF0, often influenced by one's emotional state and viewpoints. Writers and editors of texts like news and textbooks try to avoid the use of biased language, yet subjective bias is pervasive in these texts. More than $56\%$ of Americans believe that news sources do not report the news objectively , thus implying the prevalence of the bias. Therefore, when presenting factual information, it becomes necessary to differentiate subjective language from objective language. There has been considerable work on capturing subjectivity using text-classification models ranging from linguistic-feature-based modelsBIBREF1 to finetuned pre-trained word embeddings like BERTBIBREF2. The detection of bias-inducing words in a Wikipedia statement was explored in BIBREF1. The authors propose the "Neutral Point of View" (NPOV) corpus made using Wikipedia revision history, containing Wikipedia edits that are specifically designed to remove subjective bias. They use logistic regression with linguistic features, including factive verbs, hedges, and subjective intensifiers to detect bias-inducing words. In BIBREF2, the authors extend this work by mitigating subjective bias after detecting bias-inducing words using a BERT-based model. However, they primarily focused on detecting and mitigating subjective bias for single-word edits. We extend their work by incorporating multi-word edits by detecting bias at the sentence level. We further use their version of the NPOV corpus called Wiki Neutrality Corpus(WNC) for this work. The task of detecting sentences containing subjective bias rather than individual words inducing the bias has been explored in BIBREF3. However, they conduct majority of their experiments in controlled settings, limiting the type of articles from which the revisions were extracted. Their attempt to test their models in a general setting is dwarfed by the fact that they used revisions from a single Wikipedia article resulting in just 100 instances to evaluate their proposed models robustly. Consequently, we perform our experiments in the complete WNC corpus, which consists of $423,823$ revisions in Wikipedia marked by its editors over a period of 15 years, to simulate a more general setting for the bias. In this work, we investigate the application of BERT-based models for the task of subjective language detection. We explore various BERT-based models, including BERT, RoBERTa, ALBERT, with their base and large specifications along with their native classifiers. We propose an ensemble model exploiting predictions from these models using multiple ensembling techniques. 
We show that our model outperforms the baselines by a margin of $5.6$ F1 score and $5.95\%$ accuracy. |
22 | Is ROUGE their only baseline? | No, other baseline metrics they use besides ROUGE-L are n-gram overlap, negative cross-entropy, perplexity, and BLEU. | Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level. We further introduce WPSLOR, a novel WordPiece-based version, which harnesses a more compact language model. Even though word-overlap metrics like ROUGE are computed with the help of hand-written references, our referenceless methods obtain a significantly higher correlation with human fluency scores on a benchmark dataset of compressed sentences. Finally, we present ROUGE-LM, a reference-based metric which is a natural extension of WPSLOR to the case of available references. We show that ROUGE-LM yields a significantly higher correlation with human judgments than all baseline metrics, including WPSLOR on its own. | Producing sentences which are perceived as natural by a human addressee—a property which we will denote as fluency throughout this paper —is a crucial goal of all natural language generation (NLG) systems: it makes interactions more natural, avoids misunderstandings and, overall, leads to higher user satisfaction and user trust BIBREF0 . Thus, fluency evaluation is important, e.g., during system development, or for filtering unacceptable generations at application time. However, fluency evaluation of NLG systems constitutes a hard challenge: systems are often not limited to reusing words from the input, but can generate in an abstractive way. Hence, it is not guaranteed that a correct output will match any of a finite number of given references. This results in difficulties for current reference-based evaluation, especially of fluency, causing word-overlap metrics like ROUGE BIBREF1 to correlate only weakly with human judgments BIBREF2 . As a result, fluency evaluation of NLG is often done manually, which is costly and time-consuming. Evaluating sentences on their fluency, on the other hand, is a linguistic ability of humans which has been the subject of a decade-long debate in cognitive science. In particular, the question has been raised whether the grammatical knowledge that underlies this ability is probabilistic or categorical in nature BIBREF3 , BIBREF4 , BIBREF5 . Within this context, lau2017grammaticality have recently shown that neural language models (LMs) can be used for modeling human ratings of acceptability. Namely, they found SLOR BIBREF6 —sentence log-probability which is normalized by unigram log-probability and sentence length—to correlate well with acceptability judgments at the sentence level. However, to the best of our knowledge, these insights have so far gone disregarded by the natural language processing (NLP) community. In this paper, we investigate the practical implications of lau2017grammaticality's findings for fluency evaluation of NLG, using the task of automatic compression BIBREF7 , BIBREF8 as an example (cf. Table 1 ). Specifically, we test our hypothesis that SLOR should be a suitable metric for evaluation of compression fluency which (i) does not rely on references; (ii) can naturally be applied at the sentence level (in contrast to the system level); and (iii) does not need human fluency annotations of any kind. 
In particular the first aspect, i.e., SLOR not needing references, makes it a promising candidate for automatic evaluation. Getting rid of human references has practical importance in a variety of settings, e.g., if references are unavailable due to a lack of resources for annotation, or if obtaining references is impracticable. The latter would be the case, for instance, when filtering system outputs at application time. We further introduce WPSLOR, a novel, WordPiece BIBREF9 -based version of SLOR, which drastically reduces model size and training time. Our experiments show that both approaches correlate better with human judgments than traditional word-overlap metrics, even though the latter do rely on reference compressions. Finally, investigating the case of available references and how to incorporate them, we combine WPSLOR and ROUGE to ROUGE-LM, a novel reference-based metric, and increase the correlation with human fluency ratings even further. |
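To make the SLOR definition in this row concrete, here is a minimal Python sketch (an illustration under stated assumptions, not code from the paper): the language-model log-probability and the unigram log-probabilities are placeholder inputs that would come from the trained LM and from corpus counts.

```python
import math

def slor(sentence_tokens, lm_logprob, unigram_logprob):
    """SLOR: sentence log-probability under a language model, normalized by
    unigram log-probability and sentence length (natural log assumed for both)."""
    uni = sum(unigram_logprob[tok] for tok in sentence_tokens)
    return (lm_logprob - uni) / len(sentence_tokens)

# Hypothetical usage: higher SLOR ~ more fluent, controlling for word rarity and length.
tokens = ["the", "cat", "sat", "on", "the", "mat"]
score = slor(tokens, lm_logprob=-14.2,
             unigram_logprob={t: math.log(1e-3) for t in set(tokens)})
```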
24 | By how much does their system outperform the lexicon-based models? | Under the retrieval evaluation setting, their proposed model + IR2 had better MRR than NVDM by 0.3769, better MR by 4.6, and better Recall@10 by 20.
Under the generative evaluation setting the proposed model + IR2 had better BLEU by 0.044 , better CIDEr by 0.033, better ROUGE by 0.032, and better METEOR by 0.029 | Article comments can provide supplementary opinions and facts for readers, thereby increase the attraction and engagement of articles. Therefore, automatically commenting is helpful in improving the activeness of the community, such as online forums and news websites. Previous work shows that training an automatic commenting system requires large parallel corpora. Although part of articles are naturally paired with the comments on some websites, most articles and comments are unpaired on the Internet. To fully exploit the unpaired data, we completely remove the need for parallel data and propose a novel unsupervised approach to train an automatic article commenting model, relying on nothing but unpaired articles and comments. Our model is based on a retrieval-based commenting framework, which uses news to retrieve comments based on the similarity of their topics. The topic representation is obtained from a neural variational topic model, which is trained in an unsupervised manner. We evaluate our model on a news comment dataset. Experiments show that our proposed topic-based approach significantly outperforms previous lexicon-based models. The model also profits from paired corpora and achieves state-of-the-art performance under semi-supervised scenarios. | Making article comments is a fundamental ability for an intelligent machine to understand the article and interact with humans. It provides more challenges because commenting requires the abilities of comprehending the article, summarizing the main ideas, mining the opinions, and generating the natural language. Therefore, machine commenting is an important problem faced in building an intelligent and interactive agent. Machine commenting is also useful in improving the activeness of communities, including online forums and news websites. Article comments can provide extended information and external opinions for the readers to have a more comprehensive understanding of the article. Therefore, an article with more informative and interesting comments will attract more attention from readers. Moreover, machine commenting can kick off the discussion about an article or a topic, which helps increase user engagement and interaction between the readers and authors. Because of the advantage and importance described above, more recent studies have focused on building a machine commenting system with neural models BIBREF0 . One bottleneck of neural machine commenting models is the requirement of a large parallel dataset. However, the naturally paired commenting dataset is loosely paired. Qin et al. QinEA2018 were the first to propose the article commenting task and an article-comment dataset. The dataset is crawled from a news website, and they sample 1,610 article-comment pairs to annotate the relevance score between articles and comments. The relevance score ranges from 1 to 5, and we find that only 6.8% of the pairs have an average score greater than 4. It indicates that the naturally paired article-comment dataset contains a lot of loose pairs, which is a potential harm to the supervised models. Besides, most articles and comments are unpaired on the Internet. For example, a lot of articles do not have the corresponding comments on the news websites, and the comments regarding the news are more likely to appear on social media like Twitter. 
Since comments on social media are more various and recent, it is important to exploit these unpaired data. Another issue is that there is a semantic gap between articles and comments. In machine translation and text summarization, the target output mainly shares the same points with the source input. However, in article commenting, the comment does not always tell the same thing as the corresponding article. Table TABREF1 shows an example of an article and several corresponding comments. The comments do not directly tell what happened in the news, but talk about the underlying topics (e.g. NBA Christmas Day games, LeBron James). However, existing methods for machine commenting do not model the topics of articles, which is a potential harm to the generated comments. To this end, we propose an unsupervised neural topic model to address both problems. For the first problem, we completely remove the need of parallel data and propose a novel unsupervised approach to train a machine commenting system, relying on nothing but unpaired articles and comments. For the second issue, we bridge the articles and comments with their topics. Our model is based on a retrieval-based commenting framework, which uses the news as the query to retrieve the comments by the similarity of their topics. The topic is represented with a variational topic, which is trained in an unsupervised manner. The contributions of this work are as follows: |
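As a rough sketch of the retrieval-based commenting step described in this row, assuming the article and candidate comments have already been mapped to topic vectors by the unsupervised topic model; the cosine-similarity ranking is an illustrative choice, not necessarily the paper's exact scoring function.

```python
import numpy as np

def retrieve_comments(article_topic, comment_topics, k=5):
    """Rank candidate comments by cosine similarity between topic vectors."""
    a = article_topic / (np.linalg.norm(article_topic) + 1e-8)
    c = comment_topics / (np.linalg.norm(comment_topics, axis=1, keepdims=True) + 1e-8)
    return np.argsort(-(c @ a))[:k]          # indices of the k most similar comments

# Hypothetical usage with random vectors standing in for trained topic representations.
rng = np.random.default_rng(0)
top = retrieve_comments(rng.random(50), rng.random((1000, 50)), k=10)
```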
29 | How are the main international development topics that states raise identified? | They focus on exclusivity and semantic coherence measures: Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence. | There is surprisingly little known about agenda setting for international development in the United Nations (UN) despite it having a significant influence on the process and outcomes of development efforts. This paper addresses this shortcoming using a novel approach that applies natural language processing techniques to countries' annual statements in the UN General Debate. Every year UN member states deliver statements during the General Debate on their governments' perspective on major issues in world politics. These speeches provide invaluable information on state preferences on a wide range of issues, including international development, but have largely been overlooked in the study of global politics. This paper identifies the main international development topics that states raise in these speeches between 1970 and 2016, and examines the country-specific drivers of international development rhetoric. | Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes, are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts BIBREF0 . Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development BIBREF1 . More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda. The lack of knowledge about the agenda setting process in the global governance of development is in large part due to the absence of obvious data sources on states' preferences about international development issues. To address this gap, we employ a novel approach based on the application of natural language processing (NLP) to countries' speeches in the UN. Every September, the heads of state and other high-level country representatives gather in New York at the start of a new session of the United Nations General Assembly (UNGA) and address the Assembly in the General Debate. The General Debate (GD) provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics – including international development.
As such, the statements made during GD are an invaluable and, largely untapped, source of information on governments' policy preferences on international development over time. An important feature of these annual country statements is that they are not institutionally connected to decision-making in the UN. This means that governments face few external constraints when delivering these speeches, enabling them to raise the issues that they consider the most important. Therefore, the General Debate acts “as a barometer of international opinion on important issues, even those not on the agenda for that particular session” BIBREF2 . In fact, the GD is usually the first item for each new session of the UNGA, and as such it provides a forum for governments to identify like-minded members, and to put on the record the issues they feel the UNGA should address. Therefore, the GD can be viewed as a key forum for governments to put different policy issues on international agenda. We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements. |
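The answer for this row describes the topic-count selection as picking the model with the largest positive residual when exclusivity is regressed on semantic coherence; a generic sketch of that heuristic follows (the diagnostic values are hypothetical, and the original analysis would have used the STM tooling rather than this code).

```python
import numpy as np

def pick_topic_model(coherence, exclusivity, n_topics):
    """Return the topic count whose model has the largest positive residual in a
    linear regression of exclusivity on semantic coherence, i.e. the model that
    offers the most exclusivity for its level of coherence."""
    coherence, exclusivity = np.asarray(coherence, float), np.asarray(exclusivity, float)
    slope, intercept = np.polyfit(coherence, exclusivity, deg=1)
    residuals = exclusivity - (slope * coherence + intercept)
    return n_topics[int(np.argmax(residuals))]

# Hypothetical diagnostics for candidate models with 10, 16, 22 and 30 topics.
best_k = pick_topic_model([-95.0, -100.0, -108.0, -115.0],
                          [9.1, 9.6, 9.5, 9.4],
                          [10, 16, 22, 30])
```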
33 | Is the model evaluated? | the English version is evaluated. The German version evaluation is in progress | We introduce DisSim, a discourse-aware sentence splitting framework for English and German whose goal is to transform syntactically complex sentences into an intermediate representation that presents a simple and more regular structure which is easier to process for downstream semantic applications. For this purpose, we turn input sentences into a two-layered semantic hierarchy in the form of core facts and accompanying contexts, while identifying the rhetorical relations that hold between them. In that way, we preserve the coherence structure of the input and, hence, its interpretability for downstream tasks. | We developed a syntactic text simplification (TS) approach that can be used as a preprocessing step to facilitate and improve the performance of a wide range of artificial intelligence (AI) tasks, such as Machine Translation, Information Extraction (IE) or Text Summarization. Since shorter sentences are generally better processed by natural language processing (NLP) systems BIBREF0, the goal of our approach is to break down a complex source sentence into a set of minimal propositions, i.e. a sequence of sound, self-contained utterances, with each of them presenting a minimal semantic unit that cannot be further decomposed into meaningful propositions BIBREF1. However, any sound and coherent text is not simply a loose arrangement of self-contained units, but rather a logical structure of utterances that are semantically connected BIBREF2. Consequently, when carrying out syntactic simplification operations without considering discourse implications, the rewriting may easily result in a disconnected sequence of simplified sentences that lack important contextual information, making the text harder to interpret. Thus, in order to preserve the coherence structure and, hence, the interpretability of the input, we developed a discourse-aware TS approach based on Rhetorical Structure Theory (RST) BIBREF3. It establishes a contextual hierarchy between the split components, and identifies and classifies the semantic relationship that holds between them. In that way, a complex source sentence is turned into a so-called discourse tree, consisting of a set of hierarchically ordered and semantically interconnected sentences that present a simplified syntax which is easier to process for downstream semantic applications and may support a faster generalization in machine learning tasks. |
35 | How much better is the accuracy of the new model compared to previously reported models? | Average accuracy of the proposed model vs the best previous result:
Single-task Training: 57.57 vs 55.06
Multi-task Training: 50.17 vs 50.59 | This paper addresses the problem of comprehending procedural commonsense knowledge. This is a challenging task as it requires identifying key entities, keeping track of their state changes, and understanding temporal and causal relations. Contrary to most of the previous work, in this study, we do not rely on strong inductive bias and explore the question of how multimodality can be exploited to provide a complementary semantic signal. Towards this end, we introduce a new entity-aware neural comprehension model augmented with external relational memory units. Our model learns to dynamically update entity states in relation to each other while reading the text instructions. Our experimental analysis on the visual reasoning tasks in the recently proposed RecipeQA dataset reveals that our approach improves the accuracy of the previously reported models by a large margin. Moreover, we find that our model learns effective dynamic representations of entities even though we do not use any supervision at the level of entity states. | A great deal of commonsense knowledge about the world we live is procedural in nature and involves steps that show ways to achieve specific goals. Understanding and reasoning about procedural texts (e.g. cooking recipes, how-to guides, scientific processes) are very hard for machines as it demands modeling the intrinsic dynamics of the procedures BIBREF0, BIBREF1, BIBREF2. That is, one must be aware of the entities present in the text, infer relations among them and even anticipate changes in the states of the entities after each action. For example, consider the cheeseburger recipe presented in Fig. FIGREF2. The instruction “salt and pepper each patty and cook for 2 to 3 minutes on the first side” in Step 5 entails mixing three basic ingredients, the ground beef, salt and pepper, together and then applying heat to the mix, which in turn causes chemical changes that alter both the appearance and the taste. From a natural language understanding perspective, the main difficulty arises when a model sees the word patty again at a later stage of the recipe. It still corresponds to the same entity, but its form is totally different. Over the past few years, many new datasets and approaches have been proposed that address this inherently hard problem BIBREF0, BIBREF1, BIBREF3, BIBREF4. To mitigate the aforementioned challenges, the existing works rely mostly on heavy supervision and focus on predicting the individual state changes of entities at each step. Although these models can accurately learn to make local predictions, they may lack global consistency BIBREF3, BIBREF4, not to mention that building such annotated corpora is very labor-intensive. In this work, we take a different direction and explore the problem from a multimodal standpoint. Our basic motivation, as illustrated in Fig. FIGREF2, is that accompanying images provide complementary cues about causal effects and state changes. For instance, it is quite easy to distinguish raw meat from cooked one in visual domain. In particular, we take advantage of recently proposed RecipeQA dataset BIBREF2, a dataset for multimodal comprehension of cooking recipes, and ask whether it is possible to have a model which employs dynamic representations of entities in answering questions that require multimodal understanding of procedures. 
To this end, inspired by BIBREF5, we propose Procedural Reasoning Networks (PRN), which incorporate entities into the comprehension process and make it possible to keep track of entities, understand their interactions, and accordingly update their states across time. We report that our proposed approach significantly improves upon previously published results on visual reasoning tasks in RecipeQA, which test the understanding of causal and temporal relations from images and text. We further show that the dynamic entity representations can capture the semantics of the state information in the corresponding steps. |
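The dynamic entity representations described in this row can be illustrated with an EntNet-style memory update (a generic sketch under that assumption, not the authors' exact relational-memory equations); the weights and step encoding below are random placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_entity_states(states, keys, step_vec, W, U, V):
    """One gated update of per-entity memories after reading a recipe step.
    states/keys: (n_entities, d) current states and fixed entity keys; step_vec: (d,)."""
    updated = []
    for s, k in zip(states, keys):
        gate = sigmoid(step_vec @ s + step_vec @ k)    # is the entity involved in this step?
        cand = np.tanh(W @ s + U @ k + V @ step_vec)   # candidate new state
        s_new = s + gate * cand
        updated.append(s_new / (np.linalg.norm(s_new) + 1e-8))
    return np.stack(updated)

d, n = 16, 4
rng = np.random.default_rng(1)
new_states = update_entity_states(rng.standard_normal((n, d)), rng.standard_normal((n, d)),
                                  rng.standard_normal(d), *rng.standard_normal((3, d, d)))
```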
36 | How does the active learning model work? | Active learning methods have a learning engine (mainly used for training classification models) and a selection engine (which chooses samples from unlabeled data that need to be relabeled by annotators). Relabeled samples are then added to the training set for the classifier to retrain, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as the learning engine and the selection engine, respectively. | Electronic health records (EHRs) stored in hospital information systems completely reflect the patients' diagnosis and treatment processes, which are essential to clinical data mining. Chinese word segmentation (CWS) is a fundamental and important task for Chinese natural language processing. Currently, most state-of-the-art CWS methods greatly depend on large-scale manually-annotated data, which is very time-consuming and expensive to produce, especially for annotation in the medical field. In this paper, we present an active learning method for CWS in medical text. To effectively utilize the complete segmentation history, a new scoring model in the sampling strategy is proposed, which combines information entropy with a neural network. Besides, to capture interactions between adjacent characters, K-means clustering features are additionally added to the word segmenter. We experimentally evaluate our proposed CWS method in medical text; experimental results based on EHRs collected from the Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine show that our proposed method outperforms other reference methods and can effectively save the cost of manual annotation. | Electronic health records (EHRs) systematically collect patients' clinical information, such as health profiles, histories of present illness, past medical histories, examination results and treatment plans BIBREF0 . By analyzing EHRs, much useful information closely related to patients can be discovered BIBREF1 . Since Chinese EHRs are recorded without explicit word delimiters (e.g., “糖尿病酮症酸中毒” (diabetic ketoacidosis)), Chinese word segmentation (CWS) is a prerequisite for processing EHRs. Currently, state-of-the-art CWS methods usually require large amounts of manually-labeled data to reach their full potential. However, there are many challenges inherent in labeling EHRs. First, EHRs contain many medical terminologies, such as “高血压性心脏病” (hypertensive heart disease) and “罗氏芬” (Rocephin), so only annotators with medical backgrounds are qualified to label EHRs. Second, EHRs may involve the personal privacy of patients. Therefore, they cannot be openly published on a large scale for labeling. The above two problems lead to the high annotation cost and insufficient training corpora in the research of CWS in medical text. CWS is usually formulated as a sequence labeling task BIBREF2 , which can be solved by supervised learning approaches, such as the hidden Markov model (HMM) BIBREF3 and the conditional random field (CRF) BIBREF4 . However, these methods rely heavily on handcrafted features. To relieve the effort of feature engineering, neural network-based methods are beginning to thrive BIBREF5 , BIBREF6 , BIBREF7 . However, due to insufficient annotated training data, conventional models for CWS trained on open corpora often suffer from significant performance degradation when transferred to domain-specific text.
Moreover, this task has rarely been explored in the medical domain, and only one related work on transfer learning is found in the recent literature BIBREF8 . However, research on transfer learning mostly remains in general domains, with the major drawback that a considerable amount of manually annotated data is still required when introducing the models into specific domains. One solution to this obstacle is to use active learning, where only a small set of samples is selected and labeled in an active manner. Active learning methods are favored by researchers in many natural language processing (NLP) tasks, such as text classification BIBREF9 and named entity recognition (NER) BIBREF10 . However, only a handful of works address CWS BIBREF2 , and few focus on medical-domain tasks. Given the aforementioned challenges and the current state of research, we propose a word segmentation method based on active learning. To model the segmentation history, we incorporate a sampling strategy consisting of a word score, a link score and a sequence score, which effectively evaluates the segmentation decisions. Specifically, we combine an information branch and a gated neural network to determine whether a segment is a legal word, i.e., the word score. Meanwhile, we use the hidden layer output of the long short-term memory (LSTM) BIBREF11 to find out how the word is linked to its surroundings, i.e., the link score. The final decision on the selection of labeling samples is made by calculating the average of the word and link scores over the whole segmented sentence, i.e., the sequence score. Besides, to capture coherence over characters, we additionally add K-means clustering features to the input of the CRF-based word segmenter. To sum up, the main contributions of our work are summarized as follows: The rest of this paper is organized as follows. Section SECREF2 briefly reviews the related work on CWS and active learning. Section SECREF3 presents an active learning method for CWS. We experimentally evaluate our proposed method in Section SECREF4 . Finally, Section SECREF5 concludes the paper and envisions future work. |
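One round of the active-learning selection described in this row might look as follows; `segmenter` and `scorer` are hypothetical interfaces standing in for the CRF-based segmenter and the word/link scoring model, and treating the lowest-scoring sentences as the most informative is an assumption about the sampling criterion.

```python
def sequence_score(word_scores, link_scores):
    """Average of the word scores and link scores over one segmented sentence."""
    scores = list(word_scores) + list(link_scores)
    return sum(scores) / len(scores)

def select_for_annotation(unlabeled, segmenter, scorer, batch_size=50):
    """Score the current segmenter's output on each unlabeled sentence and return
    the sentences it is least confident about for manual annotation."""
    scored = []
    for sent in unlabeled:
        segments = segmenter.segment(sent)                       # current CRF segmenter
        score = sequence_score(scorer.word_scores(segments), scorer.link_scores(segments))
        scored.append((score, sent))
    scored.sort(key=lambda pair: pair[0])
    return [sent for _, sent in scored[:batch_size]]

# The returned sentences are labeled by annotators, added to the training set,
# and the segmenter is retrained; the loop repeats until the annotation budget is spent.
```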
37 | Did the annotators agree, and how much? | For event types and participant types, there was a moderate to substantial level of agreement using Fleiss' Kappa. For coreference chain annotation, there was an average agreement of 90.5%. | This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing. | A script is “a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor” BIBREF0 . Script events describe an action/activity along with the involved participants. For example, in the script describing a visit to a restaurant, typical events are entering the restaurant, ordering food or eating. Participants in this scenario can include animate objects like the waiter and the customer, as well as inanimate objects such as cutlery or food. Script knowledge has been shown to play an important role in text understanding (cullingford1978script, miikkulainen1995script, mueller2004understanding, Chambers2008, Chambers2009, modi2014inducing, rudinger2015learning). It guides the expectation of the reader, supports coreference resolution as well as common-sense knowledge inference and enables the appropriate embedding of the current sentence into the larger context. Figure 1 shows the first few sentences of a story describing the scenario taking a bath. Once the taking a bath scenario is evoked by the noun phrase (NP) “a bath”, the reader can effortlessly interpret the definite NP “the faucet” as an implicitly present standard participant of the taking a bath script. Although in this story, “entering the bath room”, “turning on the water” and “filling the tub” are explicitly mentioned, a reader could nevertheless have inferred the “turning on the water” event, even if it was not explicitly mentioned in the text. Table 1 gives an example of typical events and participants for the script describing the scenario taking a bath. A systematic study of the influence of script knowledge in texts is far from trivial. Typically, text documents (e.g. narrative texts) describing various scenarios evoke many different scripts, making it difficult to study the effect of a single script. Efforts have been made to collect scenario-specific script knowledge via crowdsourcing, for example the OMICS and SMILE corpora (singh2002open, Regneri:2010, Regneri2013), but these corpora describe script events in a pointwise telegram style rather than in full texts. This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). It is a corpus of simple narrative texts in the form of stories, wherein each story is centered around a specific scenario. The stories have been collected via Amazon Mechanical Turk (M-Turk). In this experiment, turkers were asked to write down a concrete experience about a bus ride, a grocery shopping event etc. We concentrated on 10 scenarios and collected 100 stories per scenario, giving a total of 1,000 stories with about 200,000 words. Relevant verbs and noun phrases in all stories are annotated with event types and participant types respectively.
Additionally, the texts have been annotated with coreference information in order to facilitate the study of the interdependence between script structure and coreference. The InScript corpus is a unique resource that provides a basis for studying various aspects of the role of script knowledge in language processing by humans. The acquisition of this corpus is part of a larger research effort that aims at using script knowledge to model the surprisal and information density in written text. Besides InScript, this project also released a corpus of generic descriptions of script activities called DeScript (for Describing Script Structure, Wanzare2016). DeScript contains a range of short and textually simple phrases that describe script events in the style of OMICS or SMILE (singh2002open, Regneri:2010). These generic telegram-style descriptions are called Event Descriptions (EDs); a sequence of such descriptions that cover a complete script is called an Event Sequence Description (ESD). Figure 2 shows an excerpt of a script in the baking a cake scenario. The figure shows event descriptions for 3 different events in the DeScript corpus (left) and fragments of a story in the InScript corpus (right) that instantiate the same event type. |
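For reference, the Fleiss' Kappa agreement figure quoted in this row's answer is computed from per-item category counts; a generic implementation (not tied to the paper's annotation tables, with hypothetical toy counts):

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for an (n_items, n_categories) matrix of counts,
    where each row sums to the number of annotators per item."""
    ratings = np.asarray(ratings, dtype=float)
    n_items = ratings.shape[0]
    n_raters = ratings[0].sum()
    p_j = ratings.sum(axis=0) / (n_items * n_raters)            # category proportions
    P_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 4 items, 3 annotators, 2 label categories.
kappa = fleiss_kappa([[3, 0], [2, 1], [0, 3], [3, 0]])
```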
38 | What datasets are used to evaluate this approach? | The Kinship and Nations knowledge graphs, and the YAGO3-10 and WN18 knowledge graphs | Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on improving accuracy and overlook other aspects such as robustness and interpretability. In this paper, we propose adversarial modifications for link prediction models: identifying the fact to add into or remove from the knowledge graph that changes the prediction for a target fact after the model is retrained. Using these single modifications of the graph, we identify the most influential fact for a predicted link and evaluate the sensitivity of the model to the addition of fake facts. We introduce an efficient approach to estimate the effect of such modifications by approximating the change in the embeddings when the knowledge graph changes. To avoid the combinatorial search over all possible facts, we train a network to decode embeddings to their corresponding graph components, allowing the use of gradient-based optimization to identify the adversarial modification. We use these techniques to evaluate the robustness of link prediction models (by measuring sensitivity to additional facts), study interpretability through the facts most responsible for predictions (by identifying the most influential neighbors), and detect incorrect facts in the knowledge base. | Knowledge graphs (KG) play a critical role in many real-world applications such as search, structured data management, recommendations, and question answering. Since KGs often suffer from incompleteness and noise in their facts (links), a number of recent techniques have proposed models that embed each entity and relation into a vector space, and use these embeddings to predict facts. These dense representation models for link prediction include tensor factorization BIBREF0 , BIBREF1 , BIBREF2 , algebraic operations BIBREF3 , BIBREF4 , BIBREF5 , multiple embeddings BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , and complex neural models BIBREF10 , BIBREF11 . However, there are only a few studies BIBREF12 , BIBREF13 that investigate the quality of the different KG models. There is a need to go beyond just the accuracy on link prediction, and instead focus on whether these representations are robust and stable, and what facts they make use of for their predictions. In this paper, our goal is to design approaches that minimally change the graph structure such that the prediction of a target fact changes the most after the embeddings are relearned, which we collectively call Completion Robustness and Interpretability via Adversarial Graph Edits. First, we consider perturbations that remove a neighboring link for the target fact, thus identifying the most influential related fact and providing an explanation for the model's prediction. As an example, consider the excerpt from a KG in Figure 1 with two observed facts, and a target predicted fact that Princes Henriette is the parent of Violante Bavaria. Our proposed graph perturbation, shown in Figure 1 , identifies the existing fact that Ferdinal Maria is the father of Violante Bavaria as the one that, when removed and the model retrained, will change the prediction of Princes Henriette's child. We also study attacks that add a new, fake fact into the KG to evaluate the robustness and sensitivity of link prediction models to small additions to the graph.
An example attack on the original graph in Figure 1 is depicted in Figure 1 . Such perturbations to the training data are from a family of adversarial modifications that have been applied to other machine learning tasks, known as poisoning BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . Since the setting is quite different from traditional adversarial attacks, the search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on the predicted score of the target fact. Unfortunately, computing this change in the score is expensive since it involves retraining the model to recompute the embeddings. We propose an efficient estimate of this score change by approximating the change in the embeddings using a Taylor expansion. The other challenge in identifying adversarial modifications for link prediction, especially when considering the addition of fake facts, is the combinatorial search space over possible facts, which is intractable to enumerate. We introduce an inverter of the original embedding model to decode the embeddings to their corresponding graph components, making the search over facts tractable by performing efficient gradient-based continuous optimization. We evaluate our proposed methods through the following experiments. First, on relatively small KGs, we show that our approximations are accurate compared to the true change in the score. Second, we show that our additive attacks can effectively reduce the performance of state-of-the-art models BIBREF2 , BIBREF10 by up to $27.3\%$ and $50.7\%$ in Hits@1 for two large KGs: WN18 and YAGO3-10. We also explore the utility of adversarial modifications in explaining the model predictions by presenting rule-like descriptions of the most influential neighbors. Finally, we use adversaries to detect errors in the KG, obtaining up to $55\%$ accuracy in detecting errors. |
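A crude, retraining-free illustration of ranking influential neighboring facts, using a DistMult scorer and a gradient-alignment proxy; this is only a sketch in the spirit of the Taylor-expansion estimate described above, not the paper's actual estimator, and the embeddings are random placeholders.

```python
import numpy as np

def distmult_score(e_s, w_r, e_o):
    """DistMult triple score; its gradient w.r.t. e_s is w_r * e_o."""
    return float(np.sum(e_s * w_r * e_o))

def rank_influential_neighbors(target_grad, neighbors, rel_emb, ent_emb):
    """Rank neighboring facts (r, o) sharing the target's subject entity by how
    strongly their gradient on that entity aligns with the target fact's gradient."""
    scored = [(float(target_grad @ (rel_emb[r] * ent_emb[o])), (r, o)) for r, o in neighbors]
    return sorted(scored, reverse=True)

# Hypothetical toy setup.
rng = np.random.default_rng(2)
rel_emb, ent_emb = rng.standard_normal((5, 8)), rng.standard_normal((10, 8))
target_grad = rel_emb[0] * ent_emb[3]            # gradient of the target fact's score
ranked = rank_influential_neighbors(target_grad, [(1, 4), (2, 7), (3, 9)], rel_emb, ent_emb)
```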
40 | How was the dataset collected? | They crawled travel information from the Web to build a database, created a multi-domain goal generator from the database, collected dialogues between workers, and automatically annotated dialogue acts. | To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts at both user and system sides. About 60% of the dialogues have cross-domain user goals that favor inter-domain dependency and encourage natural transition across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will help researchers compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable to investigate a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, user simulation, etc. | Recently, there have been a variety of task-oriented dialogue models thanks to the prosperity of neural architectures BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. However, the research is still largely limited by the availability of large-scale high-quality dialogue data. Many corpora have advanced the research of task-oriented dialogue systems, most of which are single-domain conversations, including ATIS BIBREF6, DSTC 2 BIBREF7, Frames BIBREF8, KVRET BIBREF9, WOZ 2.0 BIBREF10 and M2M BIBREF11. Despite the significant contributions to the community, these datasets are still limited in size, language variation, or task complexity. Furthermore, there is a gap between existing dialogue corpora and real-life human dialogue data. In real-life conversations, it is natural for humans to transition between different domains or scenarios while still maintaining coherent contexts. Thus, real-life dialogues are much more complicated than those dialogues that are only simulated within a single domain. To address this issue, some multi-domain corpora have been proposed BIBREF12, BIBREF13. The most notable corpus is MultiWOZ BIBREF12, a large-scale multi-domain dataset which consists of crowdsourced human-to-human dialogues. It contains 10K dialogue sessions and 143K utterances for 7 domains, with annotation of system-side dialogue states and dialogue acts. However, the state annotations are noisy BIBREF14, and user-side dialogue acts are missing. The dependency across domains is simply embodied in imposing the same pre-specified constraints on different domains, such as requiring both a hotel and an attraction to be located in the center of the town. In comparison to the abundance of English dialogue data, surprisingly, there is still no widely recognized Chinese task-oriented dialogue corpus. In this paper, we propose CrossWOZ, a large-scale Chinese multi-domain (cross-domain) task-oriented dialogue dataset. A dialogue example is shown in Figure FIGREF1. We compare CrossWOZ to other corpora in Table TABREF5 and TABREF6. Our dataset has the following features compared to other corpora (particularly MultiWOZ BIBREF12): The dependency between domains is more challenging because the choice in one domain will affect the choices in related domains in CrossWOZ.
As shown in Figure FIGREF1 and Table TABREF6, the hotel must be near the attraction chosen by the user in previous turns, which requires more accurate context understanding. It is the first Chinese corpus that contains large-scale multi-domain task-oriented dialogues, consisting of 6K sessions and 102K utterances for 5 domains (attraction, restaurant, hotel, metro, and taxi). Annotation of dialogue states and dialogue acts is provided for both the system side and user side. The annotation of user states enables us to track the conversation from the user's perspective and can empower the development of more elaborate user simulators. In this paper, we present the process of dialogue collection and provide detailed data analysis of the corpus. Statistics show that our cross-domain dialogues are complicated. To facilitate model comparison, benchmark models are provided for different modules in pipelined task-oriented dialogue systems, including natural language understanding, dialogue state tracking, dialogue policy learning, and natural language generation. We also provide a user simulator, which will facilitate the development and evaluation of dialogue models on this corpus. The corpus and the benchmark models are publicly available at https://github.com/thu-coai/CrossWOZ. |
41 | What models other than standalone BERT is the new model compared to? | Only BERT-base and BERT-large are compared to the proposed approach. | Pretraining deep contextualized representations using an unsupervised language modeling objective has led to large performance gains for a variety of NLP tasks. Despite this success, recent work by Schick and Schutze (2019) suggests that these architectures struggle to understand rare words. For context-independent word embeddings, this problem can be addressed by separately learning representations for infrequent words. In this work, we show that the same idea can also be applied to contextualized models and clearly improves their downstream task performance. Most approaches for inducing word embeddings into existing embedding spaces are based on simple bag-of-words models; hence they are not a suitable counterpart for deep neural network language models. To overcome this problem, we introduce BERTRAM, a powerful architecture based on a pretrained BERT language model and capable of inferring high-quality representations for rare words. In BERTRAM, surface form and contexts of a word directly interact with each other in a deep architecture. Both on a rare word probing task and on three downstream task datasets, BERTRAM considerably improves representations for rare and medium frequency words compared to both a standalone BERT model and previous work. | As traditional word embedding algorithms BIBREF1 are known to struggle with rare words, several techniques for improving their representations have been proposed over the last few years. These approaches exploit either the contexts in which rare words occur BIBREF2, BIBREF3, BIBREF4, BIBREF5, their surface-form BIBREF6, BIBREF7, BIBREF8, or both BIBREF9, BIBREF10. However, all of these approaches are designed for and evaluated on uncontextualized word embeddings. With the recent shift towards contextualized representations obtained from pretrained deep language models BIBREF11, BIBREF12, BIBREF13, BIBREF14, the question naturally arises whether these approaches are facing the same problem. As all of them already handle rare words implicitly – using methods such as byte-pair encoding BIBREF15 and WordPiece embeddings BIBREF16, or even character-level CNNs BIBREF17 –, it is unclear whether these models even require special treatment of rare words. However, the listed methods only make use of surface-form information, whereas BIBREF9 found that for covering a wide range of rare words, it is crucial to consider both surface-form and contexts. Consistently, BIBREF0 recently showed that for BERT BIBREF13, a popular pretrained language model based on a Transformer architecture BIBREF18, performance on a rare word probing task can be significantly improved by relearning representations of rare words using Attentive Mimicking BIBREF19. However, their proposed model is limited in two important respects: For processing contexts, it uses a simple bag-of-words model, throwing away much of the available information. It combines form and context only in a shallow fashion, thus preventing both input signals from sharing information in any sophisticated manner. Importantly, this limitation applies not only to their model, but to all previous work on obtaining representations for rare words by leveraging form and context.
While using bag-of-words models is a reasonable choice for uncontextualized embeddings, which are often themselves based on such models BIBREF1, BIBREF7, it stands to reason that they are suboptimal for contextualized embeddings based on position-aware deep neural architectures. To overcome these limitations, we introduce Bertram (BERT for Attentive Mimicking), a novel architecture for understanding rare words that combines a pretrained BERT language model with Attentive Mimicking BIBREF19. Unlike previous approaches making use of language models BIBREF5, our approach integrates BERT in an end-to-end fashion and directly makes use of its hidden states. By giving Bertram access to both surface form and context information already at its very lowest layer, we allow for a deep connection and exchange of information between both input signals. For various reasons, assessing the effectiveness of methods like Bertram in a contextualized setting poses a huge difficulty: While most previous work on rare words was evaluated on datasets explicitly focusing on such words BIBREF6, BIBREF3, BIBREF4, BIBREF5, BIBREF10, all of these datasets are tailored towards context-independent embeddings and thus not suitable for evaluating our proposed model. Furthermore, understanding rare words is of negligible importance for most commonly used downstream task datasets. To evaluate our proposed model, we therefore introduce a novel procedure that allows us to automatically turn arbitrary text classification datasets into ones where rare words are guaranteed to be important. This is achieved by replacing classification-relevant frequent words with rare synonyms obtained using semantic resources such as WordNet BIBREF20. Using this procedure, we extract rare word datasets from three commonly used text (or text pair) classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. On both the WNLaMPro dataset of BIBREF0 and all three so-obtained datasets, our proposed Bertram model outperforms previous work by a large margin. In summary, our contributions are as follows: We show that a pretrained BERT instance can be integrated into Attentive Mimicking, resulting in much better context representations and a deeper connection of form and context. We design a procedure that allows us to automatically transform text classification datasets into datasets for which rare words are guaranteed to be important. We show that Bertram achieves a new state-of-the-art on the WNLaMPro probing task BIBREF0 and beats all baselines on rare word instances of AG's News, MNLI and DBPedia, resulting in an absolute improvement of up to 24% over a BERT baseline. |
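The dataset-construction step described above (replacing classification-relevant frequent words with rare WordNet synonyms) might look roughly like this; it assumes NLTK with the WordNet data installed, and the frequency dictionary and rarity threshold are hypothetical.

```python
from nltk.corpus import wordnet as wn

def rarest_synonym(word, freq, min_ratio=10):
    """Return a WordNet synonym at least `min_ratio` times rarer than `word`
    according to the corpus frequency dict `freq`, or None if there is none."""
    candidates = {lemma.name().replace("_", " ")
                  for syn in wn.synsets(word) for lemma in syn.lemmas()}
    candidates.discard(word)
    rare = [c for c in candidates
            if freq.get(c, 0) > 0 and freq[word] / freq[c] >= min_ratio]
    return min(rare, key=lambda c: freq[c]) if rare else None

# Hypothetical frequencies; in the described setup only classification-relevant
# frequent words of MNLI / AG's News / DBPedia instances would be replaced.
freq = {"movie": 120000, "flick": 800}
print(rarest_synonym("movie", freq))   # -> 'flick' if WordNet lists it as a synonym
```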
42 | How big is the performance difference between this method and the baseline? | Compared with the highest-performing baseline: 1.3 points on the ACE2004 dataset, 0.6 points on the CWEB dataset, and 0.86 points on the average of all scores. | Entity linking is the task of aligning mentions to corresponding entities in a given knowledge base. Previous studies have highlighted the necessity for entity linking systems to capture the global coherence. However, there are two common weaknesses in previous global models. First, most of them calculate the pairwise scores between all candidate entities and select the most relevant group of entities as the final result. In this process, the consistency among wrong entities as well as that among right ones are involved, which may introduce noisy data and increase the model complexity. Second, the cues of previously disambiguated entities, which could contribute to the disambiguation of the subsequent mentions, are usually ignored by previous models. To address these problems, we convert the global linking into a sequence decision problem and propose a reinforcement learning model which makes decisions from a global perspective. Our model makes full use of the previously referred entities and explores the long-term influence of the current selection on subsequent decisions. We conduct experiments on different types of datasets; the results show that our model outperforms state-of-the-art systems and has better generalization performance. | Entity Linking (EL), which is also called Entity Disambiguation (ED), is the task of mapping mentions in text to corresponding entities in a given knowledge base (KB). This task is an important and challenging stage in text understanding because mentions are usually ambiguous, i.e., different named entities may share the same surface form and the same entity may have multiple aliases. EL is key for information extraction (IE) and has many applications, such as knowledge base population (KBP), question answering (QA), etc. Existing EL methods can be divided into two categories: local models and global models. Local models mainly concern the contextual words surrounding the mentions, where mentions are disambiguated independently. These methods do not work well when the context information is not rich enough. Global models take into account the topical coherence among the referred entities within the same document, where mentions are disambiguated jointly. Most previous global models BIBREF0 , BIBREF1 , BIBREF2 calculate the pairwise scores between all candidate entities and select the most relevant group of entities. However, the consistency among wrong entities as well as that among right ones are involved, which not only increases the model complexity but also introduces some noise. For example, in Figure 1, there are three mentions "France", "Croatia" and "2018 World Cup", and each mention has three candidate entities. Here, "France" may refer to French Republic, France national basketball team or France national football team in the KB. It is difficult to disambiguate using local models, due to the scarce common information in the contextual words of "France" and the descriptions of its candidate entities. Besides, the topical coherence among the wrong entities related to the basketball team (linked by an orange dashed line) may make the global models mistakenly refer "France" to France national basketball team. So, how can we solve these problems?
We note that, mentions in text usually have different disambiguation difficulty according to the quality of contextual information and the topical coherence. Intuitively, if we start with mentions that are easier to disambiguate and gain correct results, it will be effective to utilize information provided by previously referred entities to disambiguate subsequent mentions. In the above example, it is much easier to map "2018 World Cup" to 2018 FIFA World Cup based on their common contextual words "France", "Croatia", "4-2". Then, it is obvious that "France" and "Croatia" should be referred to the national football team because football-related terms are mentioned many times in the description of 2018 FIFA World Cup. Inspired by this intuition, we design the solution with three principles: (i) utilizing local features to rank the mentions in text and deal with them in a sequence manner; (ii) utilizing the information of previously referred entities for the subsequent entity disambiguation; (iii) making decisions from a global perspective to avoid the error propagation if the previous decision is wrong. In order to achieve these aims, we consider global EL as a sequence decision problem and proposed a deep reinforcement learning (RL) based model, RLEL for short, which consists of three modules: Local Encoder, Global Encoder and Entity Selector. For each mention and its candidate entities, Local Encoder encodes the local features to obtain their latent vector representations. Then, the mentions are ranked according to their disambiguation difficulty, which is measured by the learned vector representations. In order to enforce global coherence between mentions, Global Encoder encodes the local representations of mention-entity pairs in a sequential manner via a LSTM network, which maintains a long-term memory on features of entities which has been selected in previous states. Entity Selector uses a policy network to choose the target entities from the candidate set. For a single disambiguation decision, the policy network not only considers the pairs of current mention-entity representations, but also concerns the features of referred entities in the previous states which is pursued by the Global Encoder. In this way, Entity Selector is able to take actions based on the current state and previous ones. When eliminating the ambiguity of all mentions in the sequence, delayed rewards are used to adjust its policy in order to gain an optimized global decision. Deep RL model, which learns to directly optimize the overall evaluation metrics, works much better than models which learn with loss functions that just evaluate a particular single decision. By this property, RL has been successfully used in many NLP tasks, such as information retrieval BIBREF3 , dialogue system BIBREF4 and relation classification BIBREF5 , etc. To the best of our knowledge, we are the first to design a RL model for global entity linking. And in this paper, our RL model is able to produce more accurate results by exploring the long-term influence of independent decisions and encoding the entities disambiguated in previous states. In summary, the main contributions of our paper mainly include following aspects: |
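A generic REINFORCE skeleton for the kind of sequential linking decisions described in this row; the `policy` network and the mention/candidate features are placeholders, and this is only the policy-gradient training pattern such a model builds on, not the RLEL architecture itself.

```python
import torch
from torch.distributions import Categorical

def link_document(policy, mention_states, candidate_feats):
    """Sample one linking decision per mention, in order of estimated difficulty;
    each state is assumed to already encode features of previously chosen entities."""
    log_probs, chosen = [], []
    for state, cands in zip(mention_states, candidate_feats):
        dist = Categorical(logits=policy(state, cands))   # logits over this mention's candidates
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        chosen.append(int(action))
    return log_probs, chosen

def reinforce_loss(log_probs, reward, gamma=1.0):
    """Delayed, document-level reward (e.g. linking accuracy) propagated to every decision."""
    T = len(log_probs)
    returns = [reward * (gamma ** (T - 1 - t)) for t in range(T)]
    return -torch.stack([lp * R for lp, R in zip(log_probs, returns)]).sum()
```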
43 | What approaches without reinforcement learning have been tried? | classification, regression, neural methods | Task B Phase B of the 2019 BioASQ challenge focuses on biomedical question answering. Macquarie University's participation applies query-based multi-document extractive summarisation techniques to generate a multi-sentence answer given the question and the set of relevant snippets. In past participation we explored the use of regression approaches using deep learning architectures and a simple policy gradient architecture. For the 2019 challenge we experiment with the use of classification approaches with and without reinforcement learning. In addition, we conduct a correlation analysis between various ROUGE metrics and the BioASQ human evaluation scores. | The BioASQ Challenge includes a question answering task (Phase B, part B) where the aim is to find the “ideal answer” — that is, an answer that would normally be given by a person BIBREF0. This is in contrast with most other question answering challenges where the aim is normally to give an exact answer, usually a fact-based answer or a list. Given that the answer is based on an input that consists of a biomedical question and several relevant PubMed abstracts, the task can be seen as an instance of query-based multi-document summarisation. As in past participation BIBREF1, BIBREF2, we wanted to test the use of deep learning and reinforcement learning approaches for extractive summarisation. In contrast with past years where the training procedure was based on a regression set up, this year we experiment with various classification set ups. The main contributions of this paper are: We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels. We conduct correlation analysis between various ROUGE evaluation metrics and the human evaluations conducted at BioASQ and show that Precision and F1 correlate better than Recall. Section SECREF2 briefly introduces some related work for context. Section SECREF3 describes our classification and regression experiments. Section SECREF4 details our experiments using deep learning architectures. Section SECREF5 explains the reinforcement learning approaches. Section SECREF6 shows the results of our correlation analysis between ROUGE scores and human annotations. Section SECREF7 lists the specific runs submitted at BioASQ 7b. Finally, Section SECREF8 concludes the paper. |
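The correlation analysis mentioned in this row reduces to something like the following; Pearson correlation and the per-answer scores shown are illustrative assumptions, not the paper's actual numbers.

```python
from scipy.stats import pearsonr

def correlate_with_humans(metric_scores, human_scores):
    """Correlation between each automatic metric variant and the human judgments."""
    return {name: pearsonr(scores, human_scores)[0]
            for name, scores in metric_scores.items()}

# Hypothetical per-answer scores for two ROUGE variants against human ratings.
corr = correlate_with_humans(
    {"rouge_su4_f1": [0.42, 0.55, 0.31, 0.48], "rouge_su4_recall": [0.60, 0.58, 0.62, 0.59]},
    human_scores=[3.5, 4.0, 2.5, 3.8],
)
```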
44 | Which languages do they validate on? | Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur | The Universal Dependencies (UD) and Universal Morphology (UniMorph) projects each present schemata for annotating the morphosyntactic details of language. Each project also provides corpora of annotated text in many languages - UD at the token level and UniMorph at the type level. As each corpus is built by different annotators, language-specific decisions hinder the goal of universal schemata. With compatibility of tags, each project's annotations could be used to validate the other's. Additionally, the availability of both type- and token-level resources would be a boon to tasks such as parsing and homograph disambiguation. To ease this interoperability, we present a deterministic mapping from Universal Dependencies v2 features into the UniMorph schema. We validate our approach by lookup in the UniMorph corpora and find a macro-average of 64.13% recall. We also note incompatibilities due to paucity of data on either side. Finally, we present a critical evaluation of the foundations, strengths, and weaknesses of the two annotation projects. | The two largest standardized, cross-lingual datasets for morphological annotation are provided by the Universal Dependencies BIBREF1 and Universal Morphology BIBREF2 , BIBREF3 projects. Each project's data are annotated according to its own cross-lingual schema, prescribing how features like gender or case should be marked. The schemata capture largely similar information, so one may want to leverage both UD's token-level treebanks and UniMorph's type-level lookup tables and unify the two resources. This would permit a leveraging of both the token-level UD treebanks and the type-level UniMorph tables of paradigms. Unfortunately, neither resource perfectly realizes its schema. On a dataset-by-dataset basis, they incorporate annotator errors, omissions, and human decisions when the schemata are underspecified; one such example is in fig:disagreement. A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edits that both correct infelicities and also increase harmony between the datasets themselves (rather than the schemata). We apply this conversion to the 31 languages with both UD and UniMorph data, and we report our method's recall, showing an improvement over the strategy which just maps corresponding schematic features to each other. Further, we show similar downstream performance for each annotation scheme in the task of morphological tagging. This tool enables a synergistic use of UniMorph and Universal Dependencies, as well as teasing out the annotation discrepancies within and across projects. When one dataset disobeys its schema or disagrees with a related language, the flaws may not be noticed except by such a methodological dive into the resources. When the maintainers of the resources ameliorate these flaws, the resources move closer to the goal of a universal, cross-lingual inventory of features for morphological annotation. The contributions of this work are: |
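A toy version of the deterministic UD-to-UniMorph conversion and the lookup-based recall evaluation; the mapping entries below are illustrative examples only and do not reproduce the project's full tables or its language-specific post-edits.

```python
# A few example entries; the real mapping covers both schemata in full.
UD2UNIMORPH = {
    "Number=Sing": "SG", "Number=Plur": "PL",
    "Case=Nom": "NOM", "Case=Acc": "ACC",
    "Tense=Past": "PST", "Tense=Pres": "PRS",
}

def convert(ud_feats):
    """Map a UD feature string such as 'Case=Nom|Number=Plur' to a set of UniMorph tags."""
    return {UD2UNIMORPH[f] for f in ud_feats.split("|") if f in UD2UNIMORPH}

def recall_by_lookup(converted, gold):
    """Fraction of converted tag sets contained in the UniMorph gold tag sets."""
    hits = sum(1 for tags, gold_tags in zip(converted, gold) if tags <= gold_tags)
    return hits / len(converted)

assert convert("Case=Nom|Number=Plur") == {"NOM", "PL"}
```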
45 | What is the baseline method for the task? | For emotion recognition from text, they use the described neural network as the baseline.
For audio and face there is no baseline. | The recognition of emotions by humans is a complex process which considers multiple interacting signals such as facial expressions and both prosody and semantic content of utterances. Commonly, research on automatic recognition of emotions is, with few exceptions, limited to one modality. We describe an in-car experiment for emotion recognition from speech interactions for three modalities: the audio signal of a spoken interaction, the visual signal of the driver's face, and the manually transcribed content of utterances of the driver. We use off-the-shelf tools for emotion detection in audio and face and compare that to a neural transfer learning approach for emotion recognition from text which utilizes existing resources from other domains. We see that transfer learning enables models based on out-of-domain corpora to perform well. This method contributes up to 10 percentage points in F1, with up to 76 micro-average F1 across the emotions joy, annoyance and insecurity. Our findings also indicate that off-the-shelf-tools analyzing face and audio are not ready yet for emotion detection in in-car speech interactions without further adjustments. | Automatic emotion recognition is commonly understood as the task of assigning an emotion to a predefined instance, for example an utterance (as audio signal), an image (for instance with a depicted face), or a textual unit (e.g., a transcribed utterance, a sentence, or a Tweet). The set of emotions is often following the original definition by Ekman Ekman1992, which includes anger, fear, disgust, sadness, joy, and surprise, or the extension by Plutchik Plutchik1980 who adds trust and anticipation. Most work in emotion detection is limited to one modality. Exceptions include Busso2004 and Sebe2005, who investigate multimodal approaches combining speech with facial information. Emotion recognition in speech can utilize semantic features as well BIBREF0. Note that the term “multimodal” is also used beyond the combination of vision, audio, and text. For example, Soleymani2012 use it to refer to the combination of electroencephalogram, pupillary response and gaze distance. In this paper, we deal with the specific situation of car environments as a testbed for multimodal emotion recognition. This is an interesting environment since it is, to some degree, a controlled environment: Dialogue partners are limited in movement, the degrees of freedom for occurring events are limited, and several sensors which are useful for emotion recognition are already integrated in this setting. More specifically, we focus on emotion recognition from speech events in a dialogue with a human partner and with an intelligent agent. Also from the application point of view, the domain is a relevant choice: Past research has shown that emotional intelligence is beneficial for human computer interaction. Properly processing emotions in interactions increases the engagement of users and can improve performance when a specific task is to be fulfilled BIBREF1, BIBREF2, BIBREF3, BIBREF4. This is mostly based on the aspect that machines communicating with humans appear to be more trustworthy when they show empathy and are perceived as being natural BIBREF3, BIBREF5, BIBREF4. Virtual agents play an increasingly important role in the automotive context and the speech modality is increasingly being used in cars due to its potential to limit distraction. 
It has been shown that adapting the in-car speech interaction system according to the drivers' emotional state can help to enhance security, performance as well as the overall driving experience BIBREF6, BIBREF7. With this paper, we investigate how each of the three considered modalitites, namely facial expressions, utterances of a driver as an audio signal, and transcribed text contributes to the task of emotion recognition in in-car speech interactions. We focus on the five emotions of joy, insecurity, annoyance, relaxation, and boredom since terms corresponding to so-called fundamental emotions like fear have been shown to be associated to too strong emotional states than being appropriate for the in-car context BIBREF8. Our first contribution is the description of the experimental setup for our data collection. Aiming to provoke specific emotions with situations which can occur in real-world driving scenarios and to induce speech interactions, the study was conducted in a driving simulator. Based on the collected data, we provide baseline predictions with off-the-shelf tools for face and speech emotion recognition and compare them to a neural network-based approach for emotion recognition from text. Our second contribution is the introduction of transfer learning to adapt models trained on established out-of-domain corpora to our use case. We work on German language, therefore the transfer consists of a domain and a language transfer. |
46 | what amounts of size were used on german-english? | Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development | It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, and recent techniques that have shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German--English with different amounts of IWSLT14 training data, we show that, without the use of any auxiliary monolingual or multilingual data, an optimized NMT system can outperform PBSMT with far less data than previously claimed. We also apply these techniques to a low-resource Korean-English dataset, surpassing previously reported results by 4 BLEU. | While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions BIBREF3 , BIBREF4 . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows: |
48 | How big is the dataset? | Resulting dataset was 7934 messages for train and 700 messages for test. | With a sharp rise in fluency and users of "Hinglish" in linguistically diverse country, India, it has increasingly become important to analyze social content written in this language in platforms such as Twitter, Reddit, Facebook. This project focuses on using deep learning techniques to tackle a classification problem in categorizing social content written in Hindi-English into Abusive, Hate-Inducing and Not offensive categories. We utilize bi-directional sequence models with easy text augmentation techniques such as synonym replacement, random insertion, random swap, and random deletion to produce a state of the art classifier that outperforms the previous work done on analyzing this dataset. | Hinglish is a linguistic blend of Hindi (very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English words blending with Punjabi, Gujarati, Marathi and Hindi. Sometimes, though rarely, Hinglish is used to refer to Hindi written in English script and mixing with English words or phrases. This makes analyzing the language very interesting. Its rampant usage in social media like Twitter, Facebook, Online blogs and reviews has also led to its usage in delivering hate and abuses in similar platforms. We aim to find such content in the social media focusing on the tweets. Hypothetically, if we can classify such tweets, we might be able to detect them and isolate them for further analysis before it reaches public. This will a great application of AI to the social cause and thus is motivating. An example of a simple, non offensive message written in Hinglish could be: "Why do you waste your time with <redacted content>. Aapna ghar sambhalta nahi(<redacted content>). Chale dusro ko basane..!!" The second part of the above sentence is written in Hindi while the first part is in English. Second part calls for an action to a person to bring order to his/her home before trying to settle others. |
53 | What MC abbreviate for? | machine comprehension | The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline. | Enabling computers to understand given documents and answer questions about their content has recently attracted intensive interest, including but not limited to the efforts as in BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Many specific problems such as machine comprehension and question answering often involve modeling such question-document pairs. The recent availability of relatively large training datasets (see Section "Related Work" for more details) has made it more feasible to train and estimate rather complex models in an end-to-end fashion for these problems, in which a whole model is fit directly with given question-answer tuples and the resulting model has shown to be rather effective. In this paper, we take a closer look at modeling questions in such an end-to-end neural network framework, since we regard question understanding is of importance for such problems. We first introduced syntactic information to help encode questions. We then viewed and modelled different types of questions and the information shared among them as an adaptation problem and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results on our competitive baselines. |
54 | What are their correlation results? | High correlation results range from 0.472 to 0.936 | We propose SumQE, a novel Quality Estimation model for summarization based on BERT. The model addresses linguistic quality aspects that are only indirectly captured by content-based approaches to summary evaluation, without involving comparison with human references. SumQE achieves very high correlations with human ratings, outperforming simpler models addressing these linguistic aspects. Predictions of the SumQE model can be used for system development, and to inform users of the quality of automatically produced summaries and other types of generated text. | Quality Estimation (QE) is a term used in machine translation (MT) to refer to methods that measure the quality of automatically translated text without relying on human references BIBREF0, BIBREF1. In this study, we address QE for summarization. Our proposed model, Sum-QE, successfully predicts linguistic qualities of summaries that traditional evaluation metrics fail to capture BIBREF2, BIBREF3, BIBREF4, BIBREF5. Sum-QE predictions can be used for system development, to inform users of the quality of automatically produced summaries and other types of generated text, and to select the best among summaries output by multiple systems. Sum-QE relies on the BERT language representation model BIBREF6. We use a pre-trained BERT model adding just a task-specific layer, and fine-tune the entire model on the task of predicting linguistic quality scores manually assigned to summaries. The five criteria addressed are given in Figure FIGREF2. We provide a thorough evaluation on three publicly available summarization datasets from NIST shared tasks, and compare the performance of our model to a wide variety of baseline methods capturing different aspects of linguistic quality. Sum-QE achieves very high correlations with human ratings, showing the ability of BERT to model linguistic qualities that relate to both text content and form. |
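For the Sum-QE record above, the abstract describes adding a single task-specific layer on top of a pre-trained BERT model and fine-tuning the whole network to predict manually assigned quality scores. The following is a minimal sketch of that setup; the model name, the use of the [CLS] vector, the single-score head, and the MSE objective are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch (assumptions: model name, single quality score, MSE loss);
# Sum-QE's exact heads, hyper-parameters, and multi-task setup may differ.
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class SummaryQualityRegressor(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        # Task-specific layer on top of the [CLS] representation.
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]        # [CLS] token embedding
        return self.head(cls).squeeze(-1)        # one quality score per summary

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = SummaryQualityRegressor()
batch = tokenizer(["An example system summary to be rated."],
                  padding=True, truncation=True, return_tensors="pt")
score = model(batch["input_ids"], batch["attention_mask"])
loss = nn.MSELoss()(score, torch.tensor([0.8]))  # human rating as the target
loss.backward()                                  # fine-tunes the entire model
```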
56 | What dataset do they use? | A parallel corpus where the source is an English expression of code and the target is Python code. | Making computer programming language more understandable and easy for the human is a longstanding problem. From assembly language to present day’s object-oriented programming, concepts came to make programming easier so that a programmer can focus on the logic and the architecture rather than the code and language itself. To go a step further in this journey of removing human-computer language barrier, this paper proposes machine learning approach using Recurrent Neural Network (RNN) and Long-Short Term Memory (LSTM) to convert human language into programming language code. The programmer will write expressions for codes in layman’s language, and the machine learning model will translate it to the targeted programming language. The proposed approach yields result with 74.40% accuracy. This can be further improved by incorporating additional techniques, which are also discussed in this paper. | Removing computer-human language barrier is an inevitable advancement researchers are thriving to achieve for decades. One of the stages of this advancement will be coding through natural human language instead of traditional programming language. On naturalness of computer programming D. Knuth said, “Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.”BIBREF0. Unfortunately, learning programming language is still necessary to instruct it. Researchers and developers are working to overcome this human-machine language barrier. Multiple branches exists to solve this challenge (i.e. inter-conversion of different programming language to have universally connected programming languages). Automatic code generation through natural language is not a new concept in computer science studies. However, it is difficult to create such tool due to these following three reasons– Programming languages are diverse An individual person expresses logical statements differently than other Natural Language Processing (NLP) of programming statements is challenging since both human and programming language evolve over time In this paper, a neural approach to translate pseudo-code or algorithm like human language expression into programming language code is proposed. |
57 | What is typical GAN architecture for each text-to-image synhesis group? | Semantic Enhancement GANs: DC-GANs, MC-GAN
Resolution Enhancement GANs: StackGANs, AttnGAN, HDGAN
Diversity Enhancement GANs: AC-GAN, TAC-GAN etc.
Motion Enhancement GAGs: T2S, T2V, StoryGAN | Text-to-image synthesis refers to computational methods which translate human written textual descriptions, in the form of keywords or sentences, into images with similar semantic meaning to the text. In earlier research, image synthesis relied mainly on word to image correlation analysis combined with supervised methods to find best alignment of the visual content matching to the text. Recent progress in deep learning (DL) has brought a new set of unsupervised deep learning methods, particularly deep generative models which are able to generate realistic visual images using suitably trained neural network models. In this paper, we review the most recent development in the text-to-image synthesis research domain. Our survey first introduces image synthesis and its challenges, and then reviews key concepts such as generative adversarial networks (GANs) and deep convolutional encoder-decoder neural networks (DCNN). After that, we propose a taxonomy to summarize GAN based text-to-image synthesis into four major categories: Semantic Enhancement GANs, Resolution Enhancement GANs, Diversity Enhancement GANS, and Motion Enhancement GANs. We elaborate the main objective of each group, and further review typical GAN architectures in each group. The taxonomy and the review outline the techniques and the evolution of different approaches, and eventually provide a clear roadmap to summarize the list of contemporaneous solutions that utilize GANs and DCNNs to generate enthralling results in categories such as human faces, birds, flowers, room interiors, object reconstruction from edge maps (games) etc. The survey will conclude with a comparison of the proposed solutions, challenges that remain unresolved, and future developments in the text-to-image synthesis domain. | “ (GANs), and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion.” (2016) – Yann LeCun A picture is worth a thousand words! While written text provide efficient, effective, and concise ways for communication, visual content, such as images, is a more comprehensive, accurate, and intelligible method of information sharing and understanding. Generation of images from text descriptions, i.e. text-to-image synthesis, is a complex computer vision and machine learning problem that has seen great progress over recent years. Automatic image generation from natural language may allow users to describe visual elements through visually-rich text descriptions. The ability to do so effectively is highly desirable as it could be used in artificial intelligence applications such as computer-aided design, image editing BIBREF0, BIBREF1, game engines for the development of the next generation of video gamesBIBREF2, and pictorial art generation BIBREF3. |
62 | What language do the agents talk in? | English | We introduce "Talk The Walk", the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a "guide" and a "tourist") that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task. | We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task. |
66 | How much better is performance of proposed method than state-of-the-art methods in experiments? | Accuracy of best proposed method KANE (LSTM+Concatenation) are 0.8011, 0.8592, 0.8605 compared to best state-of-the art method R-GCN + LR 0.7721, 0.8193, 0.8229 on three datasets respectively. | The goal of representation learning of knowledge graph is to encode both entities and relations into a low-dimensional embedding spaces. Many recent works have demonstrated the benefits of knowledge graph embedding on knowledge graph completion task, such as relation extraction. However, we observe that: 1) existing method just take direct relations between entities into consideration and fails to express high-order structural relationship between entities; 2) these methods just leverage relation triples of KGs while ignoring a large number of attribute triples that encoding rich semantic information. To overcome these limitations, this paper propose a novel knowledge graph embedding method, named KANE, which is inspired by the recent developments of graph convolutional networks (GCN). KANE can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework. Empirical results on three datasets show that KANE significantly outperforms seven state-of-arts methods. Further analysis verify the efficiency of our method and the benefits brought by the attention mechanism. | In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2 have been built to represent human complex knowledge about the real-world in the machine-readable format. The facts in KGs are usually encoded in the form of triples $(\textit {head entity}, relation, \textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g.,$(\textit {Donald Trump}, Born In, \textit {New York City})$. Figure FIGREF2 shows the subgraph of knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\textit {Born}$ and $\textit {Abstract}$ in Figure FIGREF2, and others indicates the relations between entities (the head entity and tail entity are real world entity). Hence, the relationship in KG can be divided into relations and attributes, and correspondingly two types of triples, namely relation triples and attribute triples BIBREF3. A relation triples in KGs represents relationship between entities, e.g.,$(\textit {Donald Trump},Father of, \textit {Ivanka Trump})$, while attribute triples denote a literal attribute value of an entity, e.g.,$(\textit {Donald Trump},Born, \textit {"June 14, 1946"})$. Knowledge graphs have became important basis for many artificial intelligence applications, such as recommendation system BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, which is attracting growing interests in both academia and industry communities. A common approach to apply KGs in these artificial intelligence applications is through embedding, which provide a simple method to encode both entities and relations into a continuous low-dimensional embedding spaces. Hence, learning distributional representation of knowledge graph has attracted many research attentions in recent years. TransE BIBREF7 is a seminal work in representation learning low-dimensional vectors for both entities and relations. 
The basic idea behind TransE is that the embedding $\textbf {t}$ of the tail entity should be close to the embedding $\textbf {h}$ of the head entity plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which indicates $\textbf {h}+\textbf {r}\approx \textbf {t}$. This model provides a flexible way to improve the ability to complete KGs, such as predicting the missing items in a knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16. While these methods have achieved promising results in KG completion and link prediction, existing knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. We argue that the high-order structural relationships between entities also contain rich semantic information, and incorporating this information can improve model performance. For example, the fact $\textit {Donald Trump}\stackrel{Father of}{\longrightarrow }\textit {Ivanka Trump}\stackrel{Spouse}{\longrightarrow }\textit {Jared Kushner} $ indicates a relationship between the entity Donald Trump and the entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning the high-order structural information of KGs BIBREF17, BIBREF18. But note that the huge number of paths poses a critical complexity challenge for these methods. In order to enable efficient path modeling, these methods have to make approximations by sampling or by applying a path selection algorithm. We argue that making such approximations has a large impact on the final performance. Second, to the best of our knowledge, most existing knowledge graph embedding methods only leverage the relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from the sparseness and incompleteness of the knowledge graph. Even worse, structure information usually cannot distinguish the different meanings of relations and entities in different triples. We believe that the rich information encoded in attribute triples can help explore rich semantic information and further improve the performance of knowledge graph embedding. For example, we can learn the date of birth and the abstract of Donald Trump from the values of Born and Abstract in Figure FIGREF2. There are a huge number of attribute triples in real KGs; for example, the statistics in BIBREF3 show that attribute triples are three times as many as relation triples in English DBpedia (2016-04). Recently, a few attempts have tried to incorporate attribute triples BIBREF11, BIBREF12. However, there are two limitations in these methods. One is that only a part of the attribute triples is used, such as only the entity descriptions in BIBREF12. The other is that some attempts try to jointly model the attribute triples and relation triples in one unified optimization problem, where the losses of the two kinds of triples have to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the losses of the two kinds of triples in their models. 
Considering the limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential of achieving this goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key idea of KANE is to aggregate all attribute triples with bias and to perform embedding propagation based on relation triples when calculating the representation of a given entity. Specifically, two careful designs are equipped in KANE to address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates an entity embedding; by performing such recursive embedding propagation, the high-order structural information of KGs can be captured with linear time complexity; and 2) multi-head attention-based aggregation, where the weight of each attribute triple is learned through a neural attention mechanism BIBREF20. In experiments, we evaluate our model on two KG tasks, knowledge graph completion and entity classification. Experimental results on three datasets show that our method significantly outperforms state-of-the-art methods. The main contributions of this study are as follows: 1) We highlight the importance of explicitly modeling the high-order structural and attribute information of KGs to provide better knowledge graph embedding. 2) We propose a new method, KANE, which can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework. 3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations. |
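The TransE principle cited in the record above, $\textbf {h}+\textbf {r}\approx \textbf {t}$, can be made concrete with a short sketch of the translation-based score and a margin ranking loss over corrupted triples. This illustrates TransE only, not KANE; the entity and relation counts, dimensionality, and margin are arbitrary assumptions.

```python
# Sketch of the TransE scoring/training idea only (not the KANE model itself).
# Numbers of entities/relations, dimension and margin are illustrative.
import torch
from torch import nn

n_entities, n_relations, dim, margin = 1000, 50, 100, 1.0
ent = nn.Embedding(n_entities, dim)
rel = nn.Embedding(n_relations, dim)

def score(h, r, t):
    # Smaller distance ||h + r - t|| means a more plausible triple.
    return (ent(h) + rel(r) - ent(t)).norm(p=2, dim=-1)

def transe_loss(pos, neg):
    # pos/neg: tuples of (head_idx, rel_idx, tail_idx) LongTensors.
    pos_d = score(*pos)
    neg_d = score(*neg)
    # Margin ranking: push corrupted triples at least `margin` further away.
    return torch.relu(margin + pos_d - neg_d).mean()

h = torch.tensor([3]); r = torch.tensor([7]); t = torch.tensor([42])
t_neg = torch.randint(0, n_entities, (1,))   # corrupt the tail entity
loss = transe_loss((h, r, t), (h, r, t_neg))
loss.backward()
```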
67 | What stylistic features are used to detect drunk texts? | LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalization, Length, Emoticon (Presence/Count), Sentiment Ratio. | Alcohol abuse may lead to unsociable behavior such as crime, drunk driving, or privacy leaks. We introduce automatic drunk-texting prediction as the task of identifying whether a text was written when under the influence of alcohol. We experiment with tweets labeled using hashtags as distant supervision. Our classifiers use a set of N-gram and stylistic features to detect drunk tweets. Our observations present the first quantitative evidence that text contains signals that can be exploited to detect drunk-texting. | The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'. A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction. |
68 | What is the accuracy of the proposed technique? | 51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge. 47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge | While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge. | Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 . Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge. Consider the following question from an Alaska state 4th grade science test: Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet). A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. 
However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples. To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions. |
70 | Which retrieval system was used for baselines? | The dataset comes with a ranked set of relevant documents. Hence the baselines do not use a retrieval system. | We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. We also describe a retrieval system for extracting relevant sentences and documents from the corpus given a query, and include these in the release for researchers wishing to only focus on (2). We evaluate several baselines on both datasets, ranging from simple heuristics to powerful neural models, and show that these lag behind human performance by 16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at https://github.com/bdhingra/quasar . | Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source, to information seeking questions posed in natural language. Depending on the knowledge source available there are two main approaches for factoid QA. Structured sources, including Knowledge Bases (KBs) such as Freebase BIBREF1 , are easier to process automatically since the information is organized according to a fixed schema. In this case the question is parsed into a logical form in order to query against the KB. However, even the largest KBs are often incomplete BIBREF2 , BIBREF3 , and hence can only answer a limited subset of all possible factoid questions. For this reason the focus is now shifting towards unstructured sources, such as Wikipedia articles, which hold a vast quantity of information in textual form and, in principle, can be used to answer a much larger collection of questions. Extracting the correct answer from unstructured text is, however, challenging, and typical QA pipelines consist of the following two components: (1) searching for the passages relevant to the given question, and (2) reading the retrieved text in order to select a span of text which best answers the question BIBREF4 , BIBREF5 . Like most other language technologies, the current research focus for both these steps is firmly on machine learning based approaches for which performance improves with the amount of data available. Machine reading performance, in particular, has been significantly boosted in the last few years with the introduction of large-scale reading comprehension datasets such as CNN / DailyMail BIBREF6 and Squad BIBREF7 . State-of-the-art systems for these datasets BIBREF8 , BIBREF9 focus solely on step (2) above, in effect assuming the relevant passage of text is already known. In this paper, we introduce two new datasets for QUestion Answering by Search And Reading – Quasar. 
The datasets each consist of factoid question-answer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehension. Quasar-S consists of 37,362 cloze-style questions constructed from definitions of software entities available on the popular website Stack Overflow. The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities. Quasar-T consists of 43,013 trivia questions collected from various internet sources by a trivia enthusiast. The answers to these questions are free-form spans of text, though most are noun phrases. While production quality QA systems may have access to the entire world wide web as a knowledge source, for Quasar we restrict our search to specific background corpora. This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed. For Quasar-S we construct the knowledge source by collecting top 50 threads tagged with each entity in the dataset on the Stack Overflow website. For Quasar-T we use ClueWeb09 BIBREF0 , which contains about 1 billion web pages collected between January and February 2009. Figure 1 shows some examples. Unlike existing reading comprehension tasks, the Quasar tasks go beyond the ability to only understand a given passage, and require the ability to answer questions given large corpora. Prior datasets (such as those used in BIBREF4 ) are constructed by first selecting a passage and then constructing questions about that passage. This design (intentionally) ignores some of the subproblems required to answer open-domain questions from corpora, namely searching for passages that may contain candidate answers, and aggregating information/resolving conflicts between candidates from many passages. The purpose of Quasar is to allow research into these subproblems, and in particular whether the search step can benefit from integration and joint training with downstream reading systems. Additionally, Quasar-S has the interesting feature of being a closed-domain dataset about computer programming, and successful approaches to it must develop domain-expertise and a deep understanding of the background corpus. To our knowledge it is one of the largest closed-domain QA datasets available. Quasar-T, on the other hand, consists of open-domain questions based on trivia, which refers to “bits of information, often of little importance". Unlike previous open-domain systems which rely heavily on the redundancy of information on the web to correctly answer questions, we hypothesize that Quasar-T requires a deeper reading of documents to answer correctly. We evaluate Quasar against human testers, as well as several baselines ranging from naïve heuristics to state-of-the-art machine readers. The best performing baselines achieve $33.6\%$ and $28.5\%$ on Quasar-S and Quasar-T, while human performance is $50\%$ and $60.6\%$ respectively. For the automatic systems, we see an interesting tension between searching and reading accuracies – retrieving more documents in the search phase leads to a higher coverage of answers, but makes the comprehension task more difficult. We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short. We plan to release these annotations along with the datasets, and our retrieved documents for each question. |
71 | How much better was the BLSTM-CNN-CRF than the BLSTM-CRF? | Best BLSTM-CNN-CRF had F1 score 86.87 vs 86.69 of best BLSTM-CRF | In recent years, Vietnamese Named Entity Recognition (NER) systems have had a great breakthrough when using Deep Neural Network methods. This paper describes the primary errors of the state-of-the-art NER systems on Vietnamese language. After conducting experiments on BLSTM-CNN-CRF and BLSTM-CRF models with different word embeddings on the Vietnamese NER dataset. This dataset is provided by VLSP in 2016 and used to evaluate most of the current Vietnamese NER systems. We noticed that BLSTM-CNN-CRF gives better results, therefore, we analyze the errors on this model in detail. Our error-analysis results provide us thorough insights in order to increase the performance of NER for the Vietnamese language and improve the quality of the corpus in the future works. | Named Entity Recognition (NER) is one of information extraction subtasks that is responsible for detecting entity elements from raw text and can determine the category in which the element belongs, these categories include the names of persons, organizations, locations, expressions of times, quantities, monetary values and percentages. The problem of NER is described as follow: Input: A sentence S consists a sequence of $n$ words: $S= w_1,w_2,w_3,…,w_n$ ($w_i$: the $i^{th}$ word) Output: The sequence of $n$ labels $y_1,y_2,y_3,…,y_n$. Each $y_i$ label represents the category which $w_i$ belongs to. For example, given a sentence: Input: vietnamGiám đốc điều hành Tim Cook của Apple vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện Flint Center, Cupertino. (Apple CEO Tim Cook introduces 2 new, larger iPhones, Smart Watch at Cupertino Flint Center event) The algorithm will output: Output: vietnam⟨O⟩Giám đốc điều hành⟨O⟩ ⟨PER⟩Tim Cook⟨PER⟩ ⟨O⟩của⟨O⟩ ⟨ORG⟩Apple⟨ORG⟩ ⟨O⟩vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện⟨O⟩ ⟨ORG⟩Flint Center⟨ORG⟩, ⟨LOC⟩Cupertino⟨LOC⟩. With LOC, PER, ORG is Name of location, person, organization respectively. Note that O means Other (Not a Name entity). We will not denote the O label in the following examples in this article because we only care about name of entities. In this paper, we analyze common errors of the previous state-of-the-art techniques using Deep Neural Network (DNN) on VLSP Corpus. This may contribute to the later researchers the common errors from the results of these state-of-the-art models, then they can rely on to improve the model. Section 2 discusses the related works to this paper. We will present a method for evaluating and analyzing the types of errors in Section 3. The data used for testing and analysis of errors will be introduced in Section 4, we also talk about deep neural network methods and pre-trained word embeddings for experimentation in this section. Section 5 will detail the errors and evaluations. In the end is our contribution to improve the above errors. |
72 | What supplemental tasks are used for multitask learning? | Multitask learning is used for the task of predicting relevance of a comment on a different question to a given question, where the supplemental tasks are predicting relevance between the questions, and between the comment and the corresponding question | We apply a general recurrent neural network (RNN) encoder framework to community question answering (cQA) tasks. Our approach does not rely on any linguistic processing, and can be applied to different languages or domains. Further improvements are observed when we extend the RNN encoders with a neural attention mechanism that encourages reasoning over entire sequences. To deal with practical issues such as data sparsity and imbalanced labels, we apply various techniques such as transfer learning and multitask learning. Our experiments on the SemEval-2016 cQA task show 10% improvement on a MAP score compared to an information retrieval-based approach, and achieve comparable performance to a strong handcrafted feature-based method. | Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question. Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments. To overcome these issues, most previous work (e.g. SemEval 2015 BIBREF1 ) relied heavily on additional features and reasoning capabilities. In BIBREF2 , a neural attention-based model was proposed for automatically recognizing entailment relations between pairs of natural language sentences. In this study, we first modify this model for all three cQA tasks. We also extend this framework into a jointly trained model when the external resources are available, i.e. selecting an external comment when we know the question that the external comment answers (Task C). Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features. By applying RNN-based encoders, we avoid heavily engineered features and learn the representation automatically. In addition, an attention mechanism augments encoders with the ability to attend to past outputs directly. This becomes helpful when encoding longer sequences, since we no longer need to compress all information into a fixed-length vector representation. In our view, existing annotated cQA corpora are generally too small to properly train an end-to-end neural network. 
To address this, we investigate transfer learning by pretraining the recurrent systems on other corpora and by generating additional instances from the existing cQA corpus. |
73 | How big is their model? | Proposed model has 1.16 million parameters and 11.04 MB. | Targeted sentiment classification aims at determining the sentimental tendency towards specific targets. Most of the previous approaches model context and target words with RNN and attention. However, RNNs are difficult to parallelize and truncated backpropagation through time brings difficulty in remembering long-term patterns. To address this issue, this paper proposes an Attentional Encoder Network (AEN) which eschews recurrence and employs attention based encoders for the modeling between context and target. We raise the label unreliability issue and introduce label smoothing regularization. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight of our model. | Targeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over “opinion targets” that explicitly appear in the sentence. For example, given a sentence “I hated their service, but their food was great”, the sentiment polarities for the target “service” and “food” are negative and positive respectively. A target is usually an entity or an entity aspect. In recent years, neural network models are designed to automatically learn useful low-dimensional representations from targets and contexts and obtain promising results BIBREF0 , BIBREF1 . However, these neural network models are still in infancy to deal with the fine-grained targeted sentiment classification task. Attention mechanism, which has been successfully used in machine translation BIBREF2 , is incorporated to enforce the model to pay more attention to context words with closer semantic relations with the target. There are already some studies use attention to generate target-specific sentence representations BIBREF3 , BIBREF4 , BIBREF5 or to transform sentence representations according to target words BIBREF6 . However, these studies depend on complex recurrent neural networks (RNNs) as sequence encoder to compute hidden semantics of texts. The first problem with previous works is that the modeling of text relies on RNNs. RNNs, such as LSTM, are very expressive, but they are hard to parallelize and backpropagation through time (BPTT) requires large amounts of memory and computation. Moreover, essentially every training algorithm of RNN is the truncated BPTT, which affects the model's ability to capture dependencies over longer time scales BIBREF7 . Although LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long distance information, this usually requires a large amount of training data. Another problem that previous studies ignore is the label unreliability issue, since neutral sentiment is a fuzzy sentimental state and brings difficulty for model learning. As far as we know, we are the first to raise the label unreliability issue in the targeted sentiment classification task. This paper propose an attention based model to solve the problems above. Specifically, our model eschews recurrence and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words. To deal with the label unreliability issue, we employ a label smoothing regularization to encourage the model to be less confident with fuzzy labels. 
We also apply pre-trained BERT BIBREF8 to this task and show that our model enhances the performance of the basic BERT model. Experimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative to the best RNN-based models. The main contributions of this work are presented as follows: |
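The record above motivates label smoothing regularization as a remedy for unreliable neutral labels. Below is a minimal sketch of the standard smoothed cross-entropy used for that purpose; the smoothing factor, class inventory, and batch contents are illustrative assumptions, and the paper's exact regularizer may differ.

```python
# Standard label smoothing sketch (epsilon and the class set are assumptions).
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, target, epsilon=0.1):
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Mix the one-hot target with a uniform distribution over classes.
    smooth = torch.full_like(log_probs, epsilon / n_classes)
    smooth.scatter_(-1, target.unsqueeze(-1), 1.0 - epsilon + epsilon / n_classes)
    return -(smooth * log_probs).sum(dim=-1).mean()

logits = torch.randn(4, 3)                 # batch of 4; classes: negative/neutral/positive
target = torch.tensor([0, 1, 2, 1])
loss = smoothed_cross_entropy(logits, target)
```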
75 | How many emotions do they look at? | 9 | We introduce a new dataset for multi-class emotion analysis from long-form narratives in English. The Dataset for Emotions of Narrative Sequences (DENS) was collected from both classic literature available on Project Gutenberg and modern online narratives available on Wattpad, annotated using Amazon Mechanical Turk. A number of statistics and baseline benchmarks are provided for the dataset. Of the tested techniques, we find that the fine-tuning of a pre-trained BERT model achieves the best results, with an average micro-F1 score of 60.4%. Our results show that the dataset provides a novel opportunity in emotion analysis that requires moving beyond existing sentence-level techniques. | Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms. Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1, BIBREF2. These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification). Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad). In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement. |
79 | What is the performance of proposed model on entire DROP dataset? | The proposed model achieves EM 77,63 and F1 80,73 on the test and EM 76,95 and F1 80,25 on the dev | With models reaching human performance on many popular reading comprehension datasets in recent years, a new dataset, DROP, introduced questions that were expected to present a harder challenge for reading comprehension models. Among these new types of questions were "multi-span questions", questions whose answers consist of several spans from either the paragraph or the question itself. Until now, only one model attempted to tackle multi-span questions as a part of its design. In this work, we suggest a new approach for tackling multi-span questions, based on sequence tagging, which differs from previous approaches for answering span questions. We show that our approach leads to an absolute improvement of 29.7 EM and 15.1 F1 compared to existing state-of-the-art results, while not hurting performance on other question types. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. | The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with much of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published. DROP aimed to present questions that require more complex reasoning in order to answer than that of previous datasets, in a hope to push the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, what we will denote as "multi-span questions". Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach. |
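The record above describes answering multi-span questions with sequence tagging rather than single-span extraction. As a rough illustration, the helper below decodes a set of answer spans from per-token predictions; the BIO tag set is an assumption, since the record does not specify the exact tagging scheme.

```python
# Sketch: decode multi-span answers from per-token tags (BIO scheme assumed).
from typing import List

def decode_spans(tokens: List[str], tags: List[str]) -> List[str]:
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":                 # a new answer span starts here
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:   # continue the current span
            current.append(tok)
        else:                          # "O" (or a stray "I") closes any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["Smith", "scored", "twice", ";", "Jones", "scored", "once"]
tags   = ["B",     "O",      "O",     "O", "B",     "O",      "O"]
print(decode_spans(tokens, tags))      # ['Smith', 'Jones'] -> a multi-span answer
```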
80 | How accurate is the aspect based sentiment classifier trained only using the XR loss? | BiLSTM-XR-Dev Estimation accuracy is 83.31 for SemEval-15 and 87.68 for SemEval-16.
BiLSTM-XR accuracy is 83.31 for SemEval-15 and 88.12 for SemEval-16.
| Deep learning systems thrive on abundance of labeled training data but such data is not always available, calling for alternative methods of supervision. One such method is expectation regularization (XR) (Mann and McCallum, 2007), where models are trained based on expected label proportions. We propose a novel application of the XR framework for transfer learning between related tasks, where knowing the labels of task A provides an estimation of the label proportion of task B. We then use a model trained for A to label a large corpus, and use this corpus with an XR loss to train a model for task B. To make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure. We demonstrate the approach on the task of Aspect-based Sentiment classification, where we effectively use a sentence-level sentiment predictor to train accurate aspect-based predictor. The method improves upon fully supervised neural system trained on aspect-level data, and is also cumulative with LM-based pretraining, as we demonstrate by improving a BERT-based Aspect-based Sentiment model. | Data annotation is a key bottleneck in many data driven algorithms. Specifically, deep learning models, which became a prominent tool in many data driven tasks in recent years, require large datasets to work well. However, many tasks require manual annotations which are relatively hard to obtain at scale. An attractive alternative is lightly supervised learning BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. For example, in label regularization BIBREF0 the model is trained to fit the true label proportions of an unlabeled dataset. Label regularization is special case of expectation regularization (XR) BIBREF0 , in which the model is trained to fit the conditional probabilities of labels given features. In this work we consider the case of correlated tasks, in the sense that knowing the labels for task A provides information on the expected label composition of task B. We demonstrate the approach using sentence-level and aspect-level sentiment analysis, which we use as a running example: knowing that a sentence has positive sentiment label (task A), we can expect that most aspects within this sentence (task B) will also have positive label. While this expectation may be noisy on the individual example level, it holds well in aggregate: given a set of positively-labeled sentences, we can robustly estimate the proportion of positively-labeled aspects within this set. For example, in a random set of positive sentences, we expect to find 90% positive aspects, while in a set of negative sentences, we expect to find 70% negative aspects. These proportions can be easily either guessed or estimated from a small set. We propose a novel application of the XR framework for transfer learning in this setup. We present an algorithm (Sec SECREF12 ) that, given a corpus labeled for task A (sentence-level sentiment), learns a classifier for performing task B (aspect-level sentiment) instead, without a direct supervision signal for task B. We note that the label information for task A is only used at training time. 
Furthermore, due to the stochastic nature of the estimation, the task A labels need not be fully accurate, allowing us to make use of noisy predictions which are assigned by an automatic classifier (Sections SECREF12 and SECREF4 ). In other words, given a medium-sized sentiment corpus with sentence-level labels, and a large collection of un-annotated text from the same distribution, we can train an accurate aspect-level sentiment classifier. The XR loss allows us to use task A labels for training task B predictors. This ability seamlessly integrates into other semi-supervised schemes: we can use the XR loss on top of a pre-trained model to fine-tune the pre-trained representation to the target task, and we can also take the model trained using XR loss and plentiful data and fine-tune it to the target task using the available small-scale annotated data. In Section SECREF56 we explore these options and show that our XR framework improves the results also when applied on top of a pre-trained Bert-based model BIBREF9 . Finally, to make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure (Section SECREF19 ). Source code is available at https://github.com/MatanBN/XRTransfer. |
81 | What were the non-neural baselines used for the task? | The Lemming model in BIBREF17 | The SIGMORPHON 2019 shared task on cross-lingual transfer and contextual analysis in morphology examined transfer learning of inflection between 100 language pairs, as well as contextual lemmatization and morphosyntactic description in 66 languages. The first task evolves past years' inflection tasks by examining transfer of morphological inflection knowledge from a high-resource language to a low-resource language. This year also presents a new second challenge on lemmatization and morphological feature analysis in context. All submissions featured a neural component and built on either this year's strong baselines or highly ranked systems from previous years' shared tasks. Every participating team improved in accuracy over the baselines for the inflection task (though not Levenshtein distance), and every team in the contextual analysis task improved on both state-of-the-art neural and non-neural baselines. | While producing a sentence, humans combine various types of knowledge to produce fluent output—various shades of meaning are expressed through word selection and tone, while the language is made to conform to underlying structural rules via syntax and morphology. Native speakers are often quick to identify disfluency, even if the meaning of a sentence is mostly clear. Automatic systems must also consider these constraints when constructing or processing language. Strong enough language models can often reconstruct common syntactic structures, but are insufficient to properly model morphology. Many languages implement large inflectional paradigms that mark both function and content words with a varying levels of morphosyntactic information. For instance, Romanian verb forms inflect for person, number, tense, mood, and voice; meanwhile, Archi verbs can take on thousands of forms BIBREF0. Such complex paradigms produce large inventories of words, all of which must be producible by a realistic system, even though a large percentage of them will never be observed over billions of lines of linguistic input. Compounding the issue, good inflectional systems often require large amounts of supervised training data, which is infeasible in many of the world's languages. This year's shared task is concentrated on encouraging the construction of strong morphological systems that perform two related but different inflectional tasks. The first task asks participants to create morphological inflectors for a large number of under-resourced languages, encouraging systems that use highly-resourced, related languages as a cross-lingual training signal. The second task welcomes submissions that invert this operation in light of contextual information: Given an unannotated sentence, lemmatize each word, and tag them with a morphosyntactic description. Both of these tasks extend upon previous morphological competitions, and the best submitted systems now represent the state of the art in their respective tasks. |
83 | What are the models evaluated on? | They evaluate F1 score and agent's test performance on their own built interactive datasets (iSQuAD and iNewsQA) | Existing machine reading comprehension (MRC) models do not scale effectively to real-world applications like web-level information retrieval and question answering (QA). We argue that this stems from the nature of MRC datasets: most of these are static environments wherein the supporting documents and all necessary information are fully observed. In this paper, we propose a simple method that reframes existing MRC datasets as interactive, partially observable environments. Specifically, we "occlude" the majority of a document's text and add context-sensitive commands that reveal "glimpses" of the hidden text to a model. We repurpose SQuAD and NewsQA as an initial case study, and then show how the interactive corpora can be used to train a model that seeks relevant information through sequential decision making. We believe that this setting can contribute in scaling models to web-level QA scenarios. | Many machine reading comprehension (MRC) datasets have been released in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 to benchmark a system's ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein. The supporting document is, more often than not, static and fully observable. This raises concerns, since models may find answers simply through shallow pattern matching; e.g., syntactic similarity between the words in questions and documents. As pointed out by BIBREF5, for questions starting with when, models tend to predict the only date/time answer in the supporting document. Such behavior limits the generality and usefulness of MRC models, and suggests that they do not learn a proper `understanding' of the intended task. In this paper, to address this problem, we shift the focus of MRC data away from `spoon-feeding' models with sufficient information in fully observable, static documents. Instead, we propose interactive versions of existing MRC tasks, whereby the information needed to answer a question must be gathered sequentially. The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL). As an initial case study, we repurpose two well known, related corpora with different difficulty levels for our interactive MRC task: SQuAD and NewsQA. Table TABREF2 shows some examples of a model performing interactive MRC on these datasets. Naturally, our reframing makes the MRC problem harder; however, we believe the added demands of iMRC more closely match web-level QA and may lead to deeper comprehension of documents' content. 
The main contributions of this work are as follows: We describe a method to make MRC datasets interactive and formulate the new task as an RL problem. We develop a baseline agent that combines a top performing MRC model and a state-of-the-art RL optimization algorithm and test it on our iMRC tasks. We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting. |
84 | What is the results of multimodal compared to unimodal models? | Unimodal LSTM vs Best Multimodal (FCM)
- F score: 0.703 vs 0.704
- AUC: 0.732 vs 0.734
- Mean Accuracy: 68.3 vs 68.4 | In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why and open the field and the dataset for further research. | Social Media platforms such as Facebook, Twitter or Reddit have empowered individuals' voices and facilitated freedom of expression. However they have also been a breeding ground for hate speech and other types of online harassment. Hate speech is defined in legal literature as speech (or any form of expression) that expresses (or seeks to promote, or has the capacity to increase) hatred against a person or a group of people because of a characteristic they share, or a group to which they belong BIBREF0. Twitter develops this definition in its hateful conduct policy as violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. In this work we focus on hate speech detection. Due to the inherent complexity of this task, it is important to distinguish hate speech from other types of online harassment. In particular, although it might be offensive to many people, the sole presence of insulting terms does not itself signify or convey hate speech. And, the other way around, hate speech may denigrate or threaten an individual or a group of people without the use of any profanities. People from the african-american community, for example, often use the term nigga online, in everyday language, without malicious intentions to refer to folks within their community, and the word cunt is often used in non hate speech publications and without any sexist purpose. The goal of this work is not to discuss if racial slur, such as nigga, should be pursued. The goal is to distinguish between publications using offensive terms and publications attacking communities, which we call hate speech. Modern social media content usually include images and text. Some of these multimodal publications are only hate speech because of the combination of the text with a certain image. That is because, as we have stated, the presence of offensive terms does not itself signify hate speech, and the presence of hate speech is often determined by the context of a publication. Moreover, users authoring hate speech tend to intentionally construct publications where the text is not enough to determine they are hate speech. This happens especially in Twitter, where multimodal tweets are formed by an image and a short text, which in many cases is not enough to judge them. In those cases, the image might give extra context to make a proper judgement. Fig. FIGREF5 shows some of such examples in MMHS150K. The contributions of this work are as follows: We propose the novel task of hate speech detection in multimodal publications, collect, annotate and publish a large scale dataset. 
We evaluate state of the art multimodal models on this specific task and compare their performance with unimodal detection. Even though images are proved to be useful for hate speech detection, the proposed multimodal models do not outperform unimodal textual models. We study the challenges of the proposed task, and open the field for future research. |
85 | What were their performance results? | On SearchSnippets dataset ACC 77.01%, NMI 62.94%, on StackOverflow dataset ACC 51.14%, NMI 49.08%, on Biomedical dataset ACC 43.00%, NMI 38.18% | Short text clustering is a challenging problem due to its sparseness of text representation. Here we propose a flexible Self-Taught Convolutional neural network framework for Short Text Clustering (dubbed STC^2), which can flexibly and successfully incorporate more useful semantic features and learn non-biased deep text representation in an unsupervised manner. In our framework, the original raw text features are firstly embedded into compact binary codes by using one existing unsupervised dimensionality reduction methods. Then, word embeddings are explored and fed into convolutional neural networks to learn deep feature representations, meanwhile the output units are used to fit the pre-trained binary codes in the training process. Finally, we get the optimal clusters by employing K-means to cluster the learned representations. Extensive experimental results demonstrate that the proposed framework is effective, flexible and outperform several popular clustering methods when tested on three public short text datasets. | Short text clustering is of great importance due to its various applications, such as user profiling BIBREF0 and recommendation BIBREF1 , for nowaday's social media dataset emerged day by day. However, short text clustering has the data sparsity problem and most words only occur once in each short text BIBREF2 . As a result, the Term Frequency-Inverse Document Frequency (TF-IDF) measure cannot work well in short text setting. In order to address this problem, some researchers work on expanding and enriching the context of data from Wikipedia BIBREF3 or an ontology BIBREF4 . However, these methods involve solid Natural Language Processing (NLP) knowledge and still use high-dimensional representation which may result in a waste of both memory and computation time. Another way to overcome these issues is to explore some sophisticated models to cluster short texts. For example, Yin and Wang BIBREF5 proposed a Dirichlet multinomial mixture model-based approach for short text clustering. Yet how to design an effective model is an open question, and most of these methods directly trained based on Bag-of-Words (BoW) are shallow structures which cannot preserve the accurate semantic similarities. Recently, with the help of word embedding, neural networks demonstrate their great performance in terms of constructing text representation, such as Recursive Neural Network (RecNN) BIBREF6 , BIBREF7 and Recurrent Neural Network (RNN) BIBREF8 . However, RecNN exhibits high time complexity to construct the textual tree, and RNN, using the hidden layer computed at the last word to represent the text, is a biased model where later words are more dominant than earlier words BIBREF9 . Whereas for the non-biased models, the learned representation of one text can be extracted from all the words in the text with non-dominant learned weights. More recently, Convolution Neural Network (CNN), as the most popular non-biased model and applying convolutional filters to capture local features, has achieved a better performance in many NLP applications, such as sentence modeling BIBREF10 , relation classification BIBREF11 , and other traditional NLP tasks BIBREF12 . 
Most of the previous works focus CNN on solving supervised NLP tasks, while in this paper we aim to explore the power of CNN on one unsupervised NLP task, short text clustering. We systematically introduce a simple yet surprisingly powerful Self-Taught Convolutional neural network framework for Short Text Clustering, called STC INLINEFORM0 . An overall architecture of our proposed approach is illustrated in Figure FIGREF5 . We, inspired by BIBREF13 , BIBREF14 , utilize a self-taught learning framework into our task. In particular, the original raw text features are first embedded into compact binary codes INLINEFORM1 with the help of one traditional unsupervised dimensionality reduction function. Then text matrix INLINEFORM2 projected from word embeddings are fed into CNN model to learn the deep feature representation INLINEFORM3 and the output units are used to fit the pre-trained binary codes INLINEFORM4 . After obtaining the learned features, K-means algorithm is employed on them to cluster texts into clusters INLINEFORM5 . Obviously, we call our approach “self-taught” because the CNN model is learnt from the pseudo labels generated from the previous stage, which is quite different from the term “self-taught” in BIBREF15 . Our main contributions can be summarized as follows: This work is an extension of our conference paper BIBREF16 , and they differ in the following aspects. First, we put forward a general a self-taught CNN framework in this paper which can flexibly couple various semantic features, whereas the conference version can be seen as a specific example of this work. Second, in this paper we use a new short text dataset, Biomedical, in the experiment to verify the effectiveness of our approach. Third, we put much effort on studying the influence of various different semantic features integrated in our self-taught CNN framework, which is not involved in the conference paper. For the purpose of reproducibility, we make the datasets and software used in our experiments publicly available at the website. The remainder of this paper is organized as follows: In Section SECREF2 , we first briefly survey several related works. In Section SECREF3 , we describe the proposed approach STC INLINEFORM0 and implementation details. Experimental results and analyses are presented in Section SECREF4 . Finally, conclusions are given in the last Section. |
89 | What is the state of the art? | POS and DP task: CONLL 2018
NER task: (no extensive work) Strong baselines CRF and BiLSTM-CRF
NLI task: mBERT or XLM (not clear from text) | Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models—in all languages except English—very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP. | Pretrained word representations have a long history in Natural Language Processing (NLP), from non-neural methods BIBREF0, BIBREF1, BIBREF2 to neural word embeddings BIBREF3, BIBREF4 and to contextualised representations BIBREF5, BIBREF6. Approaches shifted more recently from using these representations as an input to task-specific architectures to replacing these architectures with large pretrained language models. These models are then fine-tuned to the task at hand with large improvements in performance over a wide range of tasks BIBREF7, BIBREF8, BIBREF9, BIBREF10. These transfer learning methods exhibit clear advantages over more traditional task-specific approaches, probably the most important being that they can be trained in an unsupervised manner. They nevertheless come with implementation challenges, namely the amount of data and computational resources needed for pretraining that can reach hundreds of gigabytes of uncompressed text and require hundreds of GPUs BIBREF11, BIBREF9. The latest transformer architecture has gone as far as using 750GB of plain text and 1024 TPU v3 for pretraining BIBREF10. This has limited the availability of these state-of-the-art models to the English language, at least in the monolingual setting. Even though multilingual models give remarkable results, they are often larger and their results still lag behind their monolingual counterparts BIBREF12. This is particularly inconvenient as it hinders their practical use in NLP systems as well as the investigation of their language modeling capacity, something that remains to be investigated in the case of, for instance, morphologically rich languages. We take advantage of the newly available multilingual corpus OSCAR BIBREF13 and train a monolingual language model for French using the RoBERTa architecture. We pretrain the model - which we dub CamemBERT - and evaluate it in four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI). CamemBERT improves the state of the art for most tasks over previous monolingual and multilingual approaches, which confirms the effectiveness of large pretrained language models for French. We summarise our contributions as follows: We train a monolingual BERT model on the French language using recent large-scale corpora. We evaluate our model on four downstream tasks (POS tagging, dependency parsing, NER and natural language inference (NLI)), achieving state-of-the-art results in most tasks, confirming the effectiveness of large BERT-based models for French. 
We release our model in a user-friendly format for popular open-source libraries so that it can serve as a strong baseline for future research and be useful for French NLP practitioners. |
91 | What difficulties does sentiment analysis on Twitter have, compared to sentiment analysis in other domains? | Tweets noisy nature, use of creative spelling and punctuation, misspellings, slang, new words, URLs, and genre-specific terminology and abbreviations, short (length limited) text | Internet and the proliferation of smart mobile devices have changed the way information is created, shared, and spreads, e.g., microblogs such as Twitter, weblogs such as LiveJournal, social networks such as Facebook, and instant messengers such as Skype and WhatsApp are now commonly used to share thoughts and opinions about anything in the surrounding world. This has resulted in the proliferation of social media content, thus creating new opportunities to study public opinion at a scale that was never possible before. Naturally, this abundance of data has quickly attracted business and research interest from various fields including marketing, political science, and social studies, among many others, which are interested in questions like these: Do people like the new Apple Watch? Do Americans support ObamaCare? How do Scottish feel about the Brexit? Answering these questions requires studying the sentiment of opinions people express in social media, which has given rise to the fast growth of the field of sentiment analysis in social media, with Twitter being especially popular for research due to its scale, representativeness, variety of topics discussed, as well as ease of public access to its messages. Here we present an overview of work on sentiment analysis on Twitter. | Microblog sentiment analysis; Twitter opinion mining |
92 | How are possible sentence transformations represented in dataset, as new sentences? | Yes, as new sentences. | COSTRA 1.0 is a dataset of Czech complex sentence transformations. The dataset is intended for the study of sentence-level embeddings beyond simple word alternations or standard paraphrasing. ::: The dataset consist of 4,262 unique sentences with average length of 10 words, illustrating 15 types of modifications such as simplification, generalization, or formal and informal language variation. ::: The hope is that with this dataset, we should be able to test semantic properties of sentence embeddings and perhaps even to find some topologically interesting “skeleton” in the sentence embedding space. | Vector representations are becoming truly essential in majority of natural language processing tasks. Word embeddings became widely popular with the introduction of word2vec BIBREF0 and GloVe BIBREF1 and their properties have been analyzed in length from various aspects. Studies of word embeddings range from word similarity BIBREF2, BIBREF3, over the ability to capture derivational relations BIBREF4, linear superposition of multiple senses BIBREF5, the ability to predict semantic hierarchies BIBREF6 or POS tags BIBREF7 up to data efficiency BIBREF8. Several studies BIBREF9, BIBREF10, BIBREF11, BIBREF12 show that word vector representations are capable of capturing meaningful syntactic and semantic regularities. These include, for example, male/female relation demonstrated by the pairs “man:woman”, “king:queen” and the country/capital relation (“Russia:Moscow”, “Japan:Tokyo”). These regularities correspond to simple arithmetic operations in the vector space. Sentence embeddings are becoming equally ubiquitous in NLP, with novel representations appearing almost every other week. With an overwhelming number of methods to compute sentence vector representations, the study of their general properties becomes difficult. Furthermore, it is not so clear in which way the embeddings should be evaluated. In an attempt to bring together more traditional representations of sentence meanings and the emerging vector representations, bojar:etal:jnle:representations:2019 introduce a number of aspects or desirable properties of sentence embeddings. One of them is denoted as “relatability”, which highlights the correspondence between meaningful differences between sentences and geometrical relations between their respective embeddings in the highly dimensional continuous vector space. If such a correspondence could be found, we could use geometrical operations in the space to induce meaningful changes in sentences. In this work, we present COSTRA, a new dataset of COmplex Sentence TRAnsformations. In its first version, the dataset is limited to sample sentences in Czech. The goal is to support studies of semantic and syntactic relations between sentences in the continuous space. Our dataset is the prerequisite for one of possible ways of exploring sentence meaning relatability: we envision that the continuous space of sentences induced by an ideal embedding method would exhibit topological similarity to the graph of sentence variations. For instance, one could argue that a subset of sentences could be organized along a linear scale reflecting the formalness of the language used. Another set of sentences could form a partially ordered set of gradually less and less concrete statements. 
And yet another set, intersecting both of the previous ones in multiple sentences could be partially or linearly ordered according to the strength of the speakers confidence in the claim. Our long term goal is to search for an embedding method which exhibits this behaviour, i.e. that the topological map of the embedding space corresponds to meaningful operations or changes in the set of sentences of a language (or more languages at once). We prefer this behaviour to emerge, as it happened for word vector operations, but regardless if the behaviour is emergent or trained, we need a dataset of sentences illustrating these patterns. If large enough, such a dataset could serve for training. If it will be smaller, it will provide a test set. In either case, these sentences could provide a “skeleton” to the continuous space of sentence embeddings. The paper is structured as follows: related summarizes existing methods of sentence embeddings evaluation and related work. annotation describes our methodology for constructing our dataset. data details the obtained dataset and some first observations. We conclude and provide the link to the dataset in conclusion |
94 | What semantic rules are proposed? | rules that compute polarity of words after POS tagging or parsing steps | This paper introduces a novel deep learning framework including a lexicon-based approach for sentence-level prediction of sentiment label distribution. We propose to first apply semantic rules and then use a Deep Convolutional Neural Network (DeepCNN) for character-level embeddings in order to increase information for word-level embedding. After that, a Bidirectional Long Short-Term Memory Network (Bi-LSTM) produces a sentence-wide feature representation from the word-level embedding. We evaluate our approach on three Twitter sentiment classification datasets. Experimental results show that our model can improve the classification accuracy of sentence-level sentiment analysis in Twitter social networking. | Twitter sentiment classification have intensively researched in recent years BIBREF0 BIBREF1 . Different approaches were developed for Twitter sentiment classification by using machine learning such as Support Vector Machine (SVM) with rule-based features BIBREF2 and the combination of SVMs and Naive Bayes (NB) BIBREF3 . In addition, hybrid approaches combining lexicon-based and machine learning methods also achieved high performance described in BIBREF4 . However, a problem of traditional machine learning is how to define a feature extractor for a specific domain in order to extract important features. Deep learning models are different from traditional machine learning methods in that a deep learning model does not depend on feature extractors because features are extracted during training progress. The use of deep learning methods becomes to achieve remarkable results for sentiment analysis BIBREF5 BIBREF6 BIBREF7 . Some researchers used Convolutional Neural Network (CNN) for sentiment classification. CNN models have been shown to be effective for NLP. For example, BIBREF6 proposed various kinds of CNN to learn sentiment-bearing sentence vectors, BIBREF5 adopted two CNNs in character-level to sentence-level representation for sentiment analysis. BIBREF7 constructs experiments on a character-level CNN for several large-scale datasets. In addition, Long Short-Term Memory (LSTM) is another state-of-the-art semantic composition model for sentiment classification with many variants described in BIBREF8 . The studies reveal that using a CNN is useful in extracting information and finding feature detectors from texts. In addition, a LSTM can be good in maintaining word order and the context of words. However, in some important aspects, the use of CNN or LSTM separately may not capture enough information. Inspired by the models above, the goal of this research is using a Deep Convolutional Neural Network (DeepCNN) to exploit the information of characters of words in order to support word-level embedding. A Bi-LSTM produces a sentence-wide feature representation based on these embeddings. The Bi-LSTM is a version of BIBREF9 with Full Gradient described in BIBREF10 . In addition, the rules-based approach also effects classification accuracy by focusing on important sub-sentences expressing the main sentiment of a tweet while removing unnecessary parts of a tweet. The paper makes the following contributions: The organization of the present paper is as follows: In section 2, we describe the model architecture which introduces the structure of the model. We explain the basic idea of model and the way of constructing the model. 
Section 3 shows results and analysis, and Section 4 summarizes this paper. |
96 | What is the performance of the model? | Experiment 1: ACC around 0.5 with 50% noise rate in worst case - clearly higher than baselines for all noise rates
Experiment 2: ACC on real noisy datasets: 0.7 on Movie, 0.79 on Laptop, 0.86 on Restaurant (clearly higher than baselines in almost all cases) | Deep neural networks (DNNs) can fit (or even over-fit) the training data very well. If a DNN model is trained using data with noisy labels and tested on data with clean labels, the model may perform poorly. This paper studies the problem of learning with noisy labels for sentence-level sentiment classification. We propose a novel DNN model called NetAb (as shorthand for convolutional neural Networks with Ab-networks) to handle noisy labels during training. NetAb consists of two convolutional neural networks, one with a noise transition layer for dealing with the input noisy labels and the other for predicting 'clean' labels. We train the two networks using their respective loss functions in a mutual reinforcement manner. Experimental results demonstrate the effectiveness of the proposed model. | It is well known that sentiment annotation or labeling is subjective BIBREF0. Annotators often have many disagreements. This is especially so for crowd-workers who are not well trained. That is why one always feels that there are many errors in an annotated dataset. In this paper, we study whether it is possible to build accurate sentiment classifiers even with noisy-labeled training data. Sentiment classification aims to classify a piece of text according to the polarity of the sentiment expressed in the text, e.g., positive or negative BIBREF1, BIBREF0, BIBREF2. In this work, we focus on sentence-level sentiment classification (SSC) with labeling errors. As we will see in the experiment section, noisy labels in the training data can be highly damaging, especially for DNNs because they easily fit the training data and memorize their labels even when training data are corrupted with noisy labels BIBREF3. Collecting datasets annotated with clean labels is costly and time-consuming as DNN based models usually require a large number of training examples. Researchers and practitioners typically have to resort to crowdsourcing. However, as mentioned above, the crowdsourced annotations can be quite noisy. Research on learning with noisy labels dates back to 1980s BIBREF4. It is still vibrant today BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 as it is highly challenging. We will discuss the related work in the next section. This paper studies the problem of learning with noisy labels for SSC. Formally, we study the following problem. Problem Definition: Given noisy labeled training sentences $S=\lbrace (x_1,y_1),...,(x_n,y_n)\rbrace $, where $x_i|_{i=1}^n$ is the $i$-th sentence and $y_i\in \lbrace 1,...,c\rbrace $ is the sentiment label of this sentence, the noisy labeled sentences are used to train a DNN model for a SSC task. The trained model is then used to classify sentences with clean labels to one of the $c$ sentiment labels. In this paper, we propose a convolutional neural Network with Ab-networks (NetAb) to deal with noisy labels during training, as shown in Figure FIGREF2. We will introduce the details in the subsequent sections. Basically, NetAb consists of two convolutional neural networks (CNNs) (see Figure FIGREF2), one for learning sentiment scores to predict `clean' labels and the other for learning a noise transition matrix to handle input noisy labels. We call the two CNNs A-network and Ab-network, respectively. 
The fundamental here is that (1) DNNs memorize easy instances first and gradually adapt to hard instances as training epochs increase BIBREF3, BIBREF13; and (2) noisy labels are theoretically flipped from the clean/true labels by a noise transition matrix BIBREF14, BIBREF15, BIBREF16, BIBREF17. We motivate and propose a CNN model with a transition layer to estimate the noise transition matrix for the input noisy labels, while exploiting another CNN to predict `clean' labels for the input training (and test) sentences. In training, we pre-train A-network in early epochs and then train Ab-network and A-network with their own loss functions in an alternating manner. To our knowledge, this is the first work that addresses the noisy label problem in sentence-level sentiment analysis. Our experimental results show that the proposed model outperforms the state-of-the-art methods. |
98 | Which of the two speech recognition models works better overall on CN-Celeb? | x-vector | Recently, researchers set an ambitious goal of conducting speaker recognition in unconstrained conditions where the variations on ambient, channel and emotion could be arbitrary. However, most publicly available datasets are collected under constrained environments, i.e., with little noise and limited channel variation. These datasets tend to deliver over optimistic performance and do not meet the request of research on speaker recognition in unconstrained conditions. In this paper, we present CN-Celeb, a large-scale speaker recognition dataset collected `in the wild'. This dataset contains more than 130,000 utterances from 1,000 Chinese celebrities, and covers 11 different genres in real world. Experiments conducted with two state-of-the-art speaker recognition approaches (i-vector and x-vector) show that the performance on CN-Celeb is far inferior to the one obtained on VoxCeleb, a widely used speaker recognition dataset. This result demonstrates that in real-life conditions, the performance of existing techniques might be much worse than it was thought. Our database is free for researchers and can be downloaded from this http URL. | Speaker recognition including identification and verification, aims to recognize claimed identities of speakers. After decades of research, performance of speaker recognition systems has been vastly improved, and the technique has been deployed to a wide range of practical applications. Nevertheless, the present speaker recognition approaches are still far from reliable in unconstrained conditions where uncertainties within the speech recordings could be arbitrary. These uncertainties might be caused by multiple factors, including free text, multiple channels, environmental noises, speaking styles, and physiological status. These uncertainties make the speaker recognition task highly challenging BIBREF0, BIBREF1. Researchers have devoted much effort to address the difficulties in unconstrained conditions. Early methods are based on probabilistic models that treat these uncertainties as an additive Gaussian noise. JFA BIBREF2, BIBREF3 and PLDA BIBREF4 are the most famous among such models. These models, however, are shallow and linear, and therefore cannot deal with the complexity of real-life applications. Recent advance in deep learning methods offers a new opportunity BIBREF5, BIBREF6, BIBREF7, BIBREF8. Resorting to the power of deep neural networks (DNNs) in representation learning, these methods can remove unwanted uncertainties by propagating speech signals through the DNN layer by layer and retain speaker-relevant features only BIBREF9. Significant improvement in robustness has been achieved by the DNN-based approach BIBREF10, which makes it more suitable for applications in unconstrained conditions. The success of DNN-based methods, however, largely relies on a large amount of data, in particular data that involve the true complexity in unconstrained conditions. Unfortunately, most existing datasets for speaker recognition are collected in constrained conditions, where the acoustic environment, channel and speaking style do not change significantly for each speaker BIBREF11, BIBREF12, BIBREF13. These datasets tend to deliver over optimistic performance and do not meet the request of research on speaker recognition in unconstrained conditions. To address this shortage in datasets, researchers have started to collect data `in the wild'. 
The most successful `wild' dataset may be VoxCeleb BIBREF14, BIBREF15, which contains millions of utterances from over thousands of speakers. The utterances were collected from open-source media using a fully automated pipeline based on computer vision techniques, in particular face detection, tracking and recognition, plus video-audio synchronization. The automated pipeline is almost costless, and thus greatly improves the efficiency of data collection. In this paper, we re-implement the automated pipeline of VoxCeleb and collect a new large-scale speaker dataset, named CN-Celeb. Compared with VoxCeleb, CN-Celeb has three distinct features: CN-Celeb specially focuses on Chinese celebrities, and contains more than $130,000$ utterances from $1,000$ persons. CN-Celeb covers more genres of speech. We intentionally collected data from 11 genres, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement. The speech of a particular speaker may be in more than 5 genres. As a comparison, most of the utterances in VoxCeleb were extracted from interview videos. The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions, but also more challenging. CN-Celeb is not fully automated, but involves human check. We found that more complex the genre is, more errors the automated pipeline tends to produce. Ironically, the error-pron segments could be highly valuable as they tend to be boundary samples. We therefore choose a two-stage strategy that employs the automated pipeline to perform pre-selection, and then perform human check. The rest of the paper is organized as follows. Section SECREF2 presents a detailed description for CN-Celeb, and Section SECREF3 presents more quantitative comparisons between CN-Celeb and VoxCeleb on the speaker recognition task. Section SECREF4 concludes the entire paper. |
99 | How do the authors measure performance? | Accuracy across six datasets | We propose a novel data augmentation method for labeled sentences called conditional BERT contextual augmentation. Data augmentation methods are often applied to prevent overfitting and improve generalization of deep neural network models. Recently proposed contextual augmentation augments labeled sentences by randomly replacing words with more varied substitutions predicted by language model. BERT demonstrates that a deep bidirectional language model is more powerful than either an unidirectional language model or the shallow concatenation of a forward and backward model. We retrofit BERT to conditional BERT by introducing a new conditional masked language model\footnote{The term"conditional masked language model"appeared once in original BERT paper, which indicates context-conditional, is equivalent to term"masked language model". In our paper,"conditional masked language model"indicates we apply extra label-conditional constraint to the"masked language model".} task. The well trained conditional BERT can be applied to enhance contextual augmentation. Experiments on six various different text classification tasks show that our method can be easily applied to both convolutional or recurrent neural networks classifier to obtain obvious improvement. | Deep neural network-based models are easy to overfit and result in losing their generalization due to limited size of training data. In order to address the issue, data augmentation methods are often applied to generate more training samples. Recent years have witnessed great success in applying data augmentation in the field of speech area BIBREF0 , BIBREF1 and computer vision BIBREF2 , BIBREF3 , BIBREF4 . Data augmentation in these areas can be easily performed by transformations like resizing, mirroring, random cropping, and color shifting. However, applying these universal transformations to texts is largely randomized and uncontrollable, which makes it impossible to ensure the semantic invariance and label correctness. For example, given a movie review “The actors is good", by mirroring we get “doog si srotca ehT", or by random cropping we get “actors is", both of which are meaningless. Existing data augmentation methods for text are often loss of generality, which are developed with handcrafted rules or pipelines for specific domains. A general approach for text data augmentation is replacement-based method, which generates new sentences by replacing the words in the sentences with relevant words (e.g. synonyms). However, words with synonyms from a handcrafted lexical database likes WordNet BIBREF5 are very limited , and the replacement-based augmentation with synonyms can only produce limited diverse patterns from the original texts. To address the limitation of replacement-based methods, Kobayashi BIBREF6 proposed contextual augmentation for labeled sentences by offering a wide range of substitute words, which are predicted by a label-conditional bidirectional language model according to the context. But contextual augmentation suffers from two shortages: the bidirectional language model is simply shallow concatenation of a forward and backward model, and the usage of LSTM models restricts their prediction ability to a short range. BERT, which stands for Bidirectional Encoder Representations from Transformers, pre-trained deep bidirectional representations by jointly conditioning on both left and right context in all layers. 
BERT addressed the unidirectional constraint by proposing a “masked language model" (MLM) objective by masking some percentage of the input tokens at random, and predicting the masked words based on its context. This is very similar to how contextual augmentation predict the replacement words. But BERT was proposed to pre-train text representations, so MLM task is performed in an unsupervised way, taking no label variance into consideration. This paper focuses on the replacement-based methods, by proposing a novel data augmentation method called conditional BERT contextual augmentation. The method applies contextual augmentation by conditional BERT, which is fine-tuned on BERT. We adopt BERT as our pre-trained language model with two reasons. First, BERT is based on Transformer. Transformer provides us with a more structured memory for handling long-term dependencies in text. Second, BERT, as a deep bidirectional model, is strictly more powerful than the shallow concatenation of a left-to-right and right-to left model. So we apply BERT to contextual augmentation for labeled sentences, by offering a wider range of substitute words predicted by the masked language model task. However, the masked language model predicts the masked word based only on its context, so the predicted word maybe incompatible with the annotated labels of the original sentences. In order to address this issue, we introduce a new fine-tuning objective: the "conditional masked language model"(C-MLM). The conditional masked language model randomly masks some of the tokens from an input, and the objective is to predict a label-compatible word based on both its context and sentence label. Unlike Kobayashi's work, the C-MLM objective allows a deep bidirectional representations by jointly conditioning on both left and right context in all layers. In order to evaluate how well our augmentation method improves performance of deep neural network models, following Kobayashi BIBREF6 , we experiment it on two most common neural network structures, LSTM-RNN and CNN, on text classification tasks. Through the experiments on six various different text classification tasks, we demonstrate that the proposed conditional BERT model augments sentence better than baselines, and conditional BERT contextual augmentation method can be easily applied to both convolutional or recurrent neural networks classifier. We further explore our conditional MLM task’s connection with style transfer task and demonstrate that our conditional BERT can also be applied to style transfer too. Our contributions are concluded as follows: To our best knowledge, this is the first attempt to alter BERT to a conditional BERT or apply BERT on text generation tasks. |
100 | What learning paradigms do they cover in this survey? | Considering "What" and "How" separately versus jointly optimizing for both. | Emerging research in Neural Question Generation (NQG) has started to integrate a larger variety of inputs, and generating questions requiring higher levels of cognition. These trends point to NQG as a bellwether for NLP, about how human intelligence embodies the skills of curiosity and integration. We present a comprehensive survey of neural question generation, examining the corpora, methodologies, and evaluation methods. From this, we elaborate on what we see as emerging on NQG's trend: in terms of the learning paradigms, input modalities, and cognitive levels considered by NQG. We end by pointing out the potential directions ahead. | Question Generation (QG) concerns the task of “automatically generating questions from various inputs such as raw text, database, or semantic representation" BIBREF0 . People have the ability to ask rich, creative, and revealing questions BIBREF1 ; e.g., asking Why did Gollum betray his master Frodo Baggins? after reading the fantasy novel The Lord of the Rings. How can machines be endowed with the ability to ask relevant and to-the-point questions, given various inputs? This is a challenging, complementary task to Question Answering (QA). Both QA and QG require an in-depth understanding of the input source and the ability to reason over relevant contexts. But beyond understanding, QG additionally integrates the challenges of Natural Language Generation (NLG), i.e., generating grammatically and semantically correct questions. QG is of practical importance: in education, forming good questions are crucial for evaluating students’ knowledge and stimulating self-learning. QG can generate assessments for course materials BIBREF2 or be used as a component in adaptive, intelligent tutoring systems BIBREF3 . In dialog systems, fluent QG is an important skill for chatbots, e.g., in initiating conversations or obtaining specific information from human users. QA and reading comprehension also benefit from QG, by reducing the needed human labor for creating large-scale datasets. We can say that traditional QG mainly focused on generating factoid questions from a single sentence or a paragraph, spurred by a series of workshops during 2008–2012 BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Recently, driven by advances in deep learning, QG research has also begun to utilize “neural” techniques, to develop end-to-end neural models to generate deeper questions BIBREF8 and to pursue broader applications BIBREF9 , BIBREF10 . While there have been considerable advances made in NQG, the area lacks a comprehensive survey. This paper fills this gap by presenting a systematic survey on recent development of NQG, focusing on three emergent trends that deep learning has brought in QG: (1) the change of learning paradigm, (2) the broadening of the input spectrum, and (3) the generation of deep questions. |
105 | How is the data annotated? | The data are self-reported by Twitter users and then verified by two human experts. | With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions. | Amir Hossein Yazdavar (1), Mohammad Saeid Mahdavinejad (1), Goonmeet Bajaj (2), William Romine (3), Amirhassan Monadjemi (1), Krishnaprasad Thirunarayan (1), Amit Sheth (1), Jyotishman Pathak (4); (1) Department of Computer Science & Engineering, Wright State University, OH, USA; (2) Ohio State University, Columbus, OH, USA; (3) Department of Biological Science, Wright State University, OH, USA; (4) Division of Health Informatics, Weill Cornell University, New York, NY, USA; contact: yazdavar.2@wright.edu. With ubiquity of social media platforms, millions of people are sharing their online persona by expressing their thoughts, moods, emotions, feelings, and even their daily struggles with mental health issues voluntarily and publicly on social media. Unlike the most existing efforts which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions. |
108 | What result from experiments suggest that natural language based agents are more robust? | Average reward across 5 seeds show that NLP representations are robust to changes in the environment as well task-nuisances | Recent advances in Reinforcement Learning have highlighted the difficulties in learning within complex high dimensional domains. We argue that one of the main reasons that current approaches do not perform well, is that the information is represented sub-optimally. A natural way to describe what we observe, is through natural language. In this paper, we implement a natural language state representation to learn and complete tasks. Our experiments suggest that natural language based agents are more robust, converge faster and perform better than vision based agents, showing the benefit of using natural language representations for Reinforcement Learning. | “The world of our experiences must be enormously simplified and generalized before it is possible to make a symbolic inventory of all our experiences of things and relations." (Edward Sapir, Language: An Introduction to the Study of Speech, 1921) Deep Learning based algorithms use neural networks in order to learn feature representations that are good for solving high dimensional Machine Learning (ML) tasks. Reinforcement Learning (RL) is a subfield of ML that has been greatly affected by the use of deep neural networks as universal function approximators BIBREF0, BIBREF1. These deep neural networks are used in RL to estimate value functions, state-action value functions, policy mappings, next-state predictions, rewards, and more BIBREF2, BIBREF3, BIBREF4, thus combating the “curse of dimensionality". The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5. The ability to associate states with natural language sentences that describe them is a hallmark of understanding representations for reinforcement learning. Humans use rich natural language to describe and communicate their visual perceptions, feelings, beliefs, strategies, and more. The semantics inherent to natural language carry knowledge and cues of complex types of content, including: events, spatial relations, temporal relations, semantic roles, logical structures, support for inference and entailment, as well as predicates and arguments BIBREF6. The expressive nature of language can thus act as an alternative semantic state representation. Over the past few years, Natural Language Processing (NLP) has shown an acceleration in progress on a wide range of downstream applications ranging from Question Answering BIBREF7, BIBREF8, to Natural Language Inference BIBREF9, BIBREF10, BIBREF11 through Syntactic Parsing BIBREF12, BIBREF13, BIBREF14. 
Recent work has shown the ability to learn flexible, hierarchical, contextualized representations, obtaining state-of-the-art results on various natural language processing tasks BIBREF15. A basic observation of our work is that natural language representations are also beneficial for solving problems in which natural language is not the underlying source of input. Moreover, our results indicate that natural language is a strong alternative to current complementary methods for semantic representations of a state. In this work we assume a state can be described using natural language sentences. We use distributional embedding methods in order to represent sentences, processed with a standard Convolutional Neural Network for feature extraction. In Section SECREF2 we describe the basic frameworks we rely on. We discuss possible semantic representations in Section SECREF3, namely, raw visual inputs, semantic segmentation, feature vectors, and natural language representations. Then, in Section SECREF4 we compare NLP representations with their alternatives. Our results suggest that representation of the state using natural language can achieve better performance, even on difficult tasks, or tasks in which the description of the state is saturated with task-nuisances BIBREF17. Moreover, we observe that NLP representations are more robust to transfer and changes in the environment. We conclude the paper with a short discussion and related work. |
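The excerpt above describes embedding natural-language state descriptions with distributional word embeddings and extracting features with a convolutional network. The following sketch is a minimal, hypothetical PyTorch illustration of that idea, not the paper's exact architecture; the vocabulary size, dimensions, and random token IDs are invented for demonstration.

```python
import torch
import torch.nn as nn

class TextStateEncoder(nn.Module):
    """Embed a tokenized state description and pool it into a fixed-size state vector."""
    def __init__(self, vocab_size=1000, embed_dim=32, n_filters=64, kernel_size=3, state_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size, padding=1)
        self.proj = nn.Linear(n_filters, state_dim)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))               # (batch, n_filters, seq_len)
        x = x.max(dim=2).values                    # max-over-time pooling
        return self.proj(x)                        # (batch, state_dim), input to an RL policy

# Toy usage: a tokenized description such as "you see a monster on your left".
tokens = torch.randint(0, 1000, (1, 8))
print(TextStateEncoder()(tokens).shape)            # torch.Size([1, 128])
```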
110 | Which datasets are used in the paper? | Google N-grams
COHA
Moral Foundations Dictionary (MFD)
| We present a text-based framework for investigating moral sentiment change of the public via longitudinal corpora. Our framework is based on the premise that language use can inform people's moral perception toward right or wrong, and we build our methodology by exploring moral biases learned from diachronic word embeddings. We demonstrate how a parameter-free model supports inference of historical shifts in moral sentiment toward concepts such as slavery and democracy over centuries at three incremental levels: moral relevance, moral polarity, and fine-grained moral dimensions. We apply this methodology to visualizing moral time courses of individual concepts and analyzing the relations between psycholinguistic variables and rates of moral sentiment change at scale. Our work offers opportunities for applying natural language processing toward characterizing moral sentiment change in society. | People's moral sentiment—our feelings toward right or wrong—can change over time. For instance, the public's views toward slavery have shifted substantially over the past centuries BIBREF0. How society's moral views evolve has been a long-standing issue and a constant source of controversy subject to interpretations from social scientists, historians, philosophers, among others. Here we ask whether natural language processing has the potential to inform moral sentiment change in society at scale, involving minimal human labour or intervention. The topic of moral sentiment has been thus far considered a traditional inquiry in philosophy BIBREF1, BIBREF2, BIBREF3, with contemporary development of this topic represented in social psychology BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, cognitive linguistics BIBREF9, and more recently, the advent of Moral Foundations Theory BIBREF10, BIBREF11, BIBREF12. Despite the fundamental importance and interdisciplinarity of this topic, large-scale formal treatment of moral sentiment, particularly its evolution, is still in infancy from the natural language processing (NLP) community (see overview in Section SECREF2). We believe that there is a tremendous potential to bring NLP methodologies to bear on the problem of moral sentiment change. We build on extensive recent work showing that word embeddings reveal implicit human biases BIBREF13, BIBREF14 and social stereotypes BIBREF15. Differing from this existing work, we demonstrate that moral sentiment change can be revealed by moral biases implicitly learned from diachronic text corpora. Accordingly, we present to our knowledge the first text-based framework for probing moral sentiment change at a large scale with support for different levels of analysis concerning moral relevance, moral polarity, and fine-grained moral dimensions. As such, for any query item such as slavery, our goal is to automatically infer its moral trajectories from sentiments at each of these levels over a long period of time. Our approach is based on the premise that people's moral sentiments are reflected in natural language, and more specifically, in text BIBREF16. In particular, we know that books are highly effective tools for conveying moral views to the public. For example, Uncle Tom's Cabin BIBREF17 was central to the anti-slavery movement in the United States. The framework that we develop builds on this premise to explore changes in moral sentiment reflected in longitudinal or historical text. 
Figure FIGREF1 offers a preview of our framework by visualizing the evolution trajectories of the public's moral sentiment toward concepts signified by the probe words slavery, democracy, and gay. Each of these concepts illustrates a piece of “moral history” tracked through a period of 200 years (1800 to 2000), and our framework is able to capture nuanced moral changes. For instance, slavery initially lies at the border of moral virtue (positive sentiment) and vice (negative sentiment) in the 1800s yet gradually moves toward the center of moral vice over the 200-year period; in contrast, democracy considered morally negative (e.g., subversion and anti-authority under monarchy) in the 1800s is now perceived as morally positive, as a mechanism for fairness; gay, which came to denote homosexuality only in the 1930s BIBREF18, is inferred to be morally irrelevant until the modern day. We will describe systematic evaluations and applications of our framework that extend beyond these anecdotal cases of moral sentiment change. The general text-based framework that we propose consists of a parameter-free approach that facilitates the prediction of public moral sentiment toward individual concepts, automated retrieval of morally changing concepts, and broad-scale psycholinguistic analyses of historical rates of moral sentiment change. We provide a description of the probabilistic models and data used, followed by comprehensive evaluations of our methodology. |
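A rough illustration of the parameter-free inference sketched above is to place a query concept relative to centroids of virtue and vice seed words in an embedding space. The snippet below does this with a tiny hand-made embedding table; real use would substitute diachronic embeddings (for instance, trained per time slice on Google N-grams or COHA) and Moral Foundations Dictionary seed sets, so every vector and seed word here is a placeholder.

```python
import numpy as np

emb = {                                   # toy 3-dimensional "embeddings"
    "virtue_seed": np.array([1.0, 0.2, 0.0]),
    "vice_seed":   np.array([-1.0, 0.1, 0.0]),
    "slavery":     np.array([-0.8, 0.3, 0.1]),
    "democracy":   np.array([0.7, 0.1, 0.2]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def polarity(word, pos_seeds, neg_seeds):
    pos_centroid = np.mean([emb[w] for w in pos_seeds], axis=0)
    neg_centroid = np.mean([emb[w] for w in neg_seeds], axis=0)
    # positive score -> closer to moral virtue, negative -> closer to moral vice
    return cos(emb[word], pos_centroid) - cos(emb[word], neg_centroid)

for w in ["slavery", "democracy"]:
    print(w, round(polarity(w, ["virtue_seed"], ["vice_seed"]), 3))
```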
112 | How much better is the proposed model in perplexity and BLEU score than typical UMT models? | The perplexity of the best model is 65.58, compared to 105.79 for the best baseline.
Bleu of the best model is 6.57 compared to best baseline 5.50. | Classical Chinese poetry is a jewel in the treasure house of Chinese culture. Previous poem generation models only allow users to employ keywords to interfere the meaning of generated poems, leaving the dominion of generation to the model. In this paper, we propose a novel task of generating classical Chinese poems from vernacular, which allows users to have more control over the semantic of generated poems. We adapt the approach of unsupervised machine translation (UMT) to our task. We use segmentation-based padding and reinforcement learning to address under-translation and over-translation respectively. According to experiments, our approach significantly improve the perplexity and BLEU compared with typical UMT models. Furthermore, we explored guidelines on how to write the input vernacular to generate better poems. Human evaluation showed our approach can generate high-quality poems which are comparable to amateur poems. | During thousands of years, millions of classical Chinese poems have been written. They contain ancient poets' emotions such as their appreciation for nature, desiring for freedom and concerns for their countries. Among various types of classical poetry, quatrain poems stand out. On the one hand, their aestheticism and terseness exhibit unique elegance. On the other hand, composing such poems is extremely challenging due to their phonological, tonal and structural restrictions. Most previous models for generating classical Chinese poems BIBREF0, BIBREF1 are based on limited keywords or characters at fixed positions (e.g., acrostic poems). Since users could only interfere with the semantic of generated poems using a few input words, models control the procedure of poem generation. In this paper, we proposed a novel model for classical Chinese poem generation. As illustrated in Figure FIGREF1, our model generates a classical Chinese poem based on a vernacular Chinese paragraph. Our objective is not only to make the model generate aesthetic and terse poems, but also keep rich semantic of the original vernacular paragraph. Therefore, our model gives users more control power over the semantic of generated poems by carefully writing the vernacular paragraph. Although a great number of classical poems and vernacular paragraphs are easily available, there exist only limited human-annotated pairs of poems and their corresponding vernacular translations. Thus, it is unlikely to train such poem generation model using supervised approaches. Inspired by unsupervised machine translation (UMT) BIBREF2, we treated our task as a translation problem, namely translating vernacular paragraphs to classical poems. However, our work is not just a straight-forward application of UMT. In a training example for UMT, the length difference of source and target languages are usually not large, but this is not true in our task. Classical poems tend to be more concise and abstract, while vernacular text tends to be detailed and lengthy. Based on our observation on gold-standard annotations, vernacular paragraphs usually contain more than twice as many Chinese characters as their corresponding classical poems. Therefore, such discrepancy leads to two main problems during our preliminary experiments: (1) Under-translation: when summarizing vernacular paragraphs to poems, some vernacular sentences are not translated and ignored by our model. 
For example, the last two vernacular sentences in Figure FIGREF1 are not covered in the generated poem. (2) Over-translation: when expanding poems to vernacular paragraphs, certain words are unnecessarily translated multiple times. For example, the last sentence in the generated poem of Figure FIGREF1, as green as sapphire, is back-translated as as green as as as sapphire. Inspired by the phrase segmentation schema in classical poems BIBREF3, we proposed the method of phrase-segmentation-based padding to address under-translation. By padding poems based on the phrase segmentation custom of classical poems, our model better aligns poems with their corresponding vernacular paragraphs and meanwhile lowers the risk of under-translation. Inspired by Paulus2018ADR, we designed a reinforcement learning policy to penalize the model if it generates vernacular paragraphs with too many repeated words. Experiments show our method can effectively decrease the possibility of over-translation. The contributions of our work are threefold: (1) We proposed a novel task for unsupervised Chinese poem generation from vernacular text. (2) We proposed using phrase-segmentation-based padding and reinforcement learning to address two important problems in this task, namely under-translation and over-translation. (3) Through extensive experiments, we proved the effectiveness of our models and explored how to write the input vernacular to inspire better poems. Human evaluation shows our models are able to generate high-quality poems, which are comparable to amateur poems. |
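The reinforcement learning policy mentioned above penalizes outputs with many repeated words. Below is one simple, assumed formulation of such a repetition penalty (the paper's exact reward shaping is not reproduced); it could serve as the reward term in a REINFORCE-style objective.

```python
from collections import Counter

def repetition_penalty(tokens):
    """0 when every token is unique; more negative as repeated tokens accumulate."""
    counts = Counter(tokens)
    repeats = sum(c - 1 for c in counts.values() if c > 1)
    return -repeats / max(len(tokens), 1)

print(repetition_penalty("as green as sapphire".split()))        # -0.25, one repeat of "as"
print(repetition_penalty("as green as as as sapphire".split()))  # -0.5, over-translated output
```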
114 | Do they train with a different training method besides scheduled sampling? | Teacher forcing was also tried: "In our experiments, we found that models trained with scheduled sampling performed better (about 0.004 BLEU-4 on the validation set) than the ones trained using teacher-forcing for the AVSD dataset. Hence, we use scheduled sampling for all the results we report in this paper."
Yes. | Understanding audio-visual content and the ability to have an informative conversation about it have both been challenging areas for intelligent systems. The Audio Visual Scene-aware Dialog (AVSD) challenge, organized as a track of the Dialog System Technology Challenge 7 (DSTC7), proposes a combined task, where a system has to answer questions pertaining to a video given a dialogue with previous question-answer pairs and the video itself. We propose for this task a hierarchical encoder-decoder model which computes a multi-modal embedding of the dialogue context. It first embeds the dialogue history using two LSTMs. We extract video and audio frames at regular intervals and compute semantic features using pre-trained I3D and VGGish models, respectively. Before summarizing both modalities into fixed-length vectors using LSTMs, we use FiLM blocks to condition them on the embeddings of the current question, which allows us to reduce the dimensionality considerably. Finally, we use an LSTM decoder that we train with scheduled sampling and evaluate using beam search. Compared to the modality-fusing baseline model released by the AVSD challenge organizers, our model achieves a relative improvements of more than 16%, scoring 0.36 BLEU-4 and more than 33%, scoring 0.997 CIDEr. | Deep neural networks have been successfully applied to several computer vision tasks such as image classification BIBREF0 , object detection BIBREF1 , video action classification BIBREF2 , etc. They have also been successfully applied to natural language processing tasks such as machine translation BIBREF3 , machine reading comprehension BIBREF4 , etc. There has also been an explosion of interest in tasks which combine multiple modalities such as audio, vision, and language together. Some popular multi-modal tasks combining these three modalities, and their differences are highlighted in Table TABREF1 . Given an image and a question related to the image, the vqa challenge BIBREF5 tasked users with selecting an answer to the question. BIBREF6 identified several sources of bias in the vqa dataset, which led to deep neural models answering several questions superficially. They found that in several instances, deep architectures exploited the statistics of the dataset to select answers ignoring the provided image. This prompted the release of vqa 2.0 BIBREF7 which attempts to balance the original dataset. In it, each question is paired to two similar images which have different answers. Due to the complexity of vqa, understanding the failures of deep neural architectures for this task has been a challenge. It is not easy to interpret whether the system failed in understanding the question or in understanding the image or in reasoning over it. The CLEVR dataset BIBREF8 was hence proposed as a useful benchmark to evaluate such systems on the task of visual reasoning. Extending question answering over images to videos, BIBREF9 have proposed MovieQA, where the task is to select the correct answer to a provided question given the movie clip on which it is based. Intelligent systems that can interact with human users for a useful purpose are highly valuable. To this end, there has been a recent push towards moving from single-turn qa to multi-turn dialogue, which is a natural and intuitive setting for humans. Among multi-modal dialogue tasks, visdial BIBREF10 provides an image and dialogue where each turn is a qa pair. The task is to train a model to answer these questions within the dialogue. 
The avsd challenge extends the visdial task from images to the audio-visual domain. We present our modelname model for the avsd task. modelname combines a hred for encoding and generating qa-dialogue with a novel FiLM-based audio-visual feature extractor for videos and an auxiliary multi-task learning-based decoder for decoding a summary of the video. It outperforms the baseline results for the avsd dataset BIBREF11 and was ranked 2nd overall among the dstc7 avsd challenge participants. In Section SECREF2 , we discuss existing literature on end-to-end dialogue systems with a special focus on multi-modal dialogue systems. Section SECREF3 describes the avsd dataset. In Section SECREF4 , we present the architecture of our modelname model. We describe our evaluation and experimental setup in Section SECREF5 and then conclude in Section SECREF6 . |
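Because the answer above hinges on scheduled sampling, the sketch below shows the core idea in a toy PyTorch decoding loop: with some probability, the previous input is the model's own prediction rather than the ground-truth token. The GRU cell, dimensions, and sampling probability are illustrative stand-ins, not the challenge model's actual decoder.

```python
import random
import torch
import torch.nn as nn

vocab, hid = 100, 32
embed = nn.Embedding(vocab, hid)
cell = nn.GRUCell(hid, hid)
out = nn.Linear(hid, vocab)

def decode_with_scheduled_sampling(target_ids, sampling_prob=0.25):
    h = torch.zeros(1, hid)
    prev = target_ids[0].view(1)                 # e.g. the <sos> token
    logits = []
    for t in range(1, len(target_ids)):
        h = cell(embed(prev), h)
        step_logits = out(h)
        logits.append(step_logits)
        if random.random() < sampling_prob:      # feed back the model's own prediction
            prev = step_logits.argmax(dim=-1)
        else:                                    # teacher forcing: feed the gold token
            prev = target_ids[t].view(1)
    return torch.stack(logits)

print(decode_with_scheduled_sampling(torch.randint(0, vocab, (6,))).shape)  # (5, 1, 100)
```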
116 | How do they define upward and downward reasoning? | Upward reasoning is defined as going from one specific concept to a more general one. Downward reasoning is defined as the opposite, going from a general concept to one that is more specific. | Monotonicity reasoning is one of the important reasoning skills for any intelligent natural language inference (NLI) model in that it requires the ability to capture the interaction between lexical and syntactic structures. Since no test set has been developed for monotonicity reasoning with wide coverage, it is still unclear whether neural models can perform monotonicity reasoning in a proper way. To investigate this issue, we introduce the Monotonicity Entailment Dataset (MED). Performance by state-of-the-art NLI models on the new test set is substantially worse, under 55%, especially on downward reasoning. In addition, analysis using a monotonicity-driven data augmentation method showed that these models might be limited in their generalization ability in upward and downward reasoning. | Natural language inference (NLI), also known as recognizing textual entailment (RTE), has been proposed as a benchmark task for natural language understanding. Given a premise $P$ and a hypothesis $H$ , the task is to determine whether the premise semantically entails the hypothesis BIBREF0 . A number of recent works attempt to test and analyze what type of inferences an NLI model may be performing, focusing on various types of lexical inferences BIBREF1 , BIBREF2 , BIBREF3 and logical inferences BIBREF4 , BIBREF5 . Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider examples in ( "Introduction" ) and ( "Introduction" ). All [ workers $\leavevmode {\color {blue!80!black}\downarrow }$ ] [joined for a French dinner $\leavevmode {\color {red!80!black}\uparrow }$ ] All workers joined for a dinner All new workers joined for a French dinner Not all [new workers $\leavevmode {\color {red!80!black}\uparrow }$ ] joined for a dinner Not all workers joined for a dinner A context is upward entailing (shown by [... $\leavevmode {\color {red!80!black}\uparrow }$ ]) that allows an inference from ( "Introduction" ) to ( "Introduction" ), where French dinner is replaced by a more general concept dinner. On the other hand, a downward entailing context (shown by [... $\leavevmode {\color {blue!80!black}\downarrow }$ ]) allows an inference from ( "Introduction" ) to ( "Introduction" ), where workers is replaced by a more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in ( "Introduction" )), as witness the fact that ( "Introduction" ) entails ( "Introduction" ). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure. For previous datasets containing monotonicity inference problems, FraCaS BIBREF8 and the GLUE diagnostic dataset BIBREF9 are manually-curated datasets for testing a wide range of linguistic phenomena. However, monotonicity problems are limited to very small sizes (FraCaS: 37/346 examples and GLUE: 93/1650 examples). The limited syntactic patterns and vocabularies in previous test sets are obstacles in accurately evaluating NLI models on monotonicity reasoning. 
To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section "Dataset" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning. We measure the performance of state-of-the-art NLI models on monotonicity reasoning and investigate their generalization ability in upward and downward reasoning (Section "Results and Discussion" ). The results show that all models trained with SNLI BIBREF4 and MultiNLI BIBREF10 perform worse on downward inferences than on upward inferences. In addition, we analyzed the performance of models trained with an automatically created monotonicity dataset, HELP BIBREF11 . The analysis with monotonicity data augmentation shows that models tend to perform better in the same direction of monotonicity with the training set, while they perform worse in the opposite direction. This indicates that the accuracy on monotonicity reasoning depends solely on the majority direction in the training set, and models might lack the ability to capture the structural relations between monotonicity operators and their arguments. |
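A crude way to picture the upward/downward distinction defined above: the entailment direction at a position flips each time that position is embedded under a downward-entailing operator (negation, the restrictor of "all", and so on). The toy function below merely counts such operators; real systems must recover the operators and their scopes from the syntactic structure.

```python
def direction(n_downward_operators):
    """Monotonicity direction after crossing the given number of downward-entailing operators."""
    return "upward" if n_downward_operators % 2 == 0 else "downward"

print(direction(0))  # "... joined for a [French dinner]" -> upward: may generalise to "dinner"
print(direction(1))  # "all [workers] ..." (restrictor of "all") -> downward: may specialise to "new workers"
print(direction(2))  # "not all [new workers] ..." -> negation flips it back to upward
```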
118 | Do they annotate their own dataset or use an existing one? | Use an existing one | Audiovisual synchronisation is the task of determining the time offset between speech audio and a video recording of the articulators. In child speech therapy, audio and ultrasound videos of the tongue are captured using instruments which rely on hardware to synchronise the two modalities at recording time. Hardware synchronisation can fail in practice, and no mechanism exists to synchronise the signals post hoc. To address this problem, we employ a two-stream neural network which exploits the correlation between the two modalities to find the offset. We train our model on recordings from 69 speakers, and show that it correctly synchronises 82.9% of test utterances from unseen therapy sessions and unseen speakers, thus considerably reducing the number of utterances to be manually synchronised. An analysis of model performance on the test utterances shows that directed phone articulations are more difficult to automatically synchronise compared to utterances containing natural variation in speech such as words, sentences, or conversations. | Ultrasound tongue imaging (UTI) is a non-invasive way of observing the vocal tract during speech production BIBREF0 . Instrumental speech therapy relies on capturing ultrasound videos of the patient's tongue simultaneously with their speech audio in order to provide a diagnosis, design treatments, and measure therapy progress BIBREF1 . The two modalities must be correctly synchronised, with a minimum shift of INLINEFORM0 45ms if the audio leads and INLINEFORM1 125ms if the audio lags, based on synchronisation standards for broadcast audiovisual signals BIBREF2 . Errors beyond this range can render the data unusable – indeed, synchronisation errors do occur, resulting in significant wasted effort if not corrected. No mechanism currently exists to automatically correct these errors, and although manual synchronisation is possible in the presence of certain audiovisual cues such as stop consonants BIBREF3 , it is time consuming and tedious. In this work, we exploit the correlation between the two modalities to synchronise them. We utilise a two-stream neural network architecture for the task BIBREF4 , using as our only source of supervision pairs of ultrasound and audio segments which have been automatically generated and labelled as positive (correctly synchronised) or negative (randomly desynchronised); a process known as self-supervision BIBREF5 . We demonstrate how this approach enables us to correctly synchronise the majority of utterances in our test set, and in particular, those exhibiting natural variation in speech. Section SECREF2 reviews existing approaches for audiovisual synchronisation, and describes the challenges specifically associated with UTI data, compared with lip videos for which automatic synchronisation has been previously attempted. Section SECREF3 describes our approach. Section SECREF4 describes the data we use, including data preprocessing and positive and negative sample creation using a self-supervision strategy. Section SECREF5 describes our experiments, followed by an analysis of the results. We conclude with a summary and future directions in Section SECREF6 . |
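The self-supervision strategy described above labels temporally aligned audio/ultrasound windows as positive and randomly desynchronised ones as negative. The snippet below sketches that pair-generation step with random arrays standing in for per-frame features; the window length and offset range are illustrative values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pair(audio, ultra, window=50, negative=False, max_offset=200):
    start = rng.integers(0, len(ultra) - window - max_offset)
    offset = rng.integers(window, max_offset) if negative else 0   # desynchronise negatives
    a = audio[start + offset : start + offset + window]
    u = ultra[start : start + window]
    label = 0 if negative else 1                                    # 1 = correctly synchronised
    return a, u, label

audio = rng.normal(size=1000)   # stand-ins for per-frame audio features
ultra = rng.normal(size=1000)   # stand-ins for per-frame ultrasound features
print(make_pair(audio, ultra, negative=True)[2])  # 0 -> desynchronised (negative) pair
```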
120 | What web and user-generated NER datasets are used for the analysis? | MUC, CoNLL, ACE, OntoNotes, MSM, Ritter, UMBC | Named Entity Recognition (NER) is a key NLP task, which is all the more challenging on Web and user-generated content with their diverse and continuously changing language. This paper aims to quantify how this diversity impacts state-of-the-art NER methods, by measuring named entity (NE) and context variability, feature sparsity, and their effects on precision and recall. In particular, our findings indicate that NER approaches struggle to generalise in diverse genres with limited training data. Unseen NEs, in particular, play an important role, which have a higher incidence in diverse genres such as social media than in more regular genres such as newswire. Coupled with a higher incidence of unseen features more generally and the lack of large training corpora, this leads to significantly lower F1 scores for diverse genres as compared to more regular ones. We also find that leading systems rely heavily on surface forms found in training data, having problems generalising beyond these, and offer explanations for this observation. | Named entity recognition and classification (NERC, short NER), the task of recognising and assigning a class to mentions of proper names (named entities, NEs) in text, has attracted many years of research BIBREF0 , BIBREF1 , analyses BIBREF2 , starting from the first MUC challenge in 1995 BIBREF3 . Recognising entities is key to many applications, including text summarisation BIBREF4 , search BIBREF5 , the semantic web BIBREF6 , topic modelling BIBREF7 , and machine translation BIBREF8 , BIBREF9 . As NER is being applied to increasingly diverse and challenging text genres BIBREF10 , BIBREF11 , BIBREF12 , this has lead to a noisier, sparser feature space, which in turn requires regularisation BIBREF13 and the avoidance of overfitting. This has been the case even for large corpora all of the same genre and with the same entity classification scheme, such as ACE BIBREF14 . Recall, in particular, has been a persistent problem, as named entities often seem to have unusual surface forms, e.g. unusual character sequences for the given language (e.g. Szeged in an English-language document) or words that individually are typically not NEs, unless they are combined together (e.g. the White House). Indeed, the move from ACE and MUC to broader kinds of corpora has presented existing NER systems and resources with a great deal of difficulty BIBREF15 , which some researchers have tried to address through domain adaptation, specifically with entity recognition in mind BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, more recent performance comparisons of NER methods over different corpora showed that older tools tend to simply fail to adapt, even when given a fair amount of in-domain data and resources BIBREF21 , BIBREF11 . Simultaneously, the value of NER in non-newswire data BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 has rocketed: for example, social media now provides us with a sample of all human discourse, unmolested by editors, publishing guidelines and the like, and all in digital format – leading to, for example, whole new fields of research opening in computational social science BIBREF26 , BIBREF27 , BIBREF28 . The prevailing assumption has been that this lower NER performance is due to domain differences arising from using newswire (NW) as training data, as well as from the irregular, noisy nature of new media (e.g. 
BIBREF21 ). Existing studies BIBREF11 further suggest that named entity diversity, discrepancy between named entities in the training set and the test set (entity drift over time in particular), and diverse context, are the likely reasons behind the significantly lower NER performance on social media corpora, as compared to newswire. No prior studies, however, have investigated these hypotheses quantitatively. For example, it is not yet established whether this performance drop is really due to a higher proportion of unseen NEs in the social media, or is it instead due to NEs being situated in different kinds of linguistic context. Accordingly, the contributions of this paper lie in investigating the following open research questions: In particular, the paper carries out a comparative analyses of the performance of several different approaches to statistical NER over multiple text genres, with varying NE and lexical diversity. In line with prior analyses of NER performance BIBREF2 , BIBREF11 , we carry out corpus analysis and introduce briefly the NER methods used for experimentation. Unlike prior efforts, however, our main objectives are to uncover the impact of NE diversity and context diversity on performance (measured primarily by F1 score), and also to study the relationship between OOV NEs and features and F1. See Section "Experiments" for details. To ensure representativeness and comprehensiveness, our experimental findings are based on key benchmark NER corpora spanning multiple genres, time periods, and corpus annotation methodologies and guidelines. As detailed in Section "Datasets" , the corpora studied are OntoNotes BIBREF29 , ACE BIBREF30 , MUC 7 BIBREF31 , the Ritter NER corpus BIBREF21 , the MSM 2013 corpus BIBREF32 , and the UMBC Twitter corpus BIBREF33 . To eliminate potential bias from the choice of statistical NER approach, experiments are carried out with three differently-principled NER approaches, namely Stanford NER BIBREF34 , SENNA BIBREF35 and CRFSuite BIBREF36 (see Section "NER Models and Features" for details). |
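One quantity the analysis above relies on is the proportion of test-set named entities never seen in training. The snippet below computes that rate for toy entity lists; the real analysis would use the gold NE surface forms of each corpus pair (for example, newswire training data versus a Twitter test set).

```python
def unseen_entity_rate(train_entities, test_entities):
    """Fraction of test-set NE surface forms that never occur in the training data."""
    seen = set(train_entities)
    unseen = [e for e in test_entities if e not in seen]
    return len(unseen) / max(len(test_entities), 1)

train = ["White House", "Barack Obama", "London"]
test = ["White House", "Szeged", "#Rio2016"]
print(unseen_entity_rate(train, test))  # 0.67, typically higher for diverse genres such as Twitter
```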
121 | Which unlabeled data do they pretrain with? | 1000 hours of WSJ audio data | We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available. Our approach achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the best reported character-based system in the literature while using three orders of magnitude less labeled training data. | Current state of the art models for speech recognition require large amounts of transcribed audio data to attain good performance BIBREF1 . Recently, pre-training of neural networks has emerged as an effective technique for settings where labeled data is scarce. The key idea is to learn general representations in a setup where substantial amounts of labeled or unlabeled data is available and to leverage the learned representations to improve performance on a downstream task for which the amount of data is limited. This is particularly interesting for tasks where substantial effort is required to obtain labeled data, such as speech recognition. In computer vision, representations for ImageNet BIBREF2 and COCO BIBREF3 have proven to be useful to initialize models for tasks such as image captioning BIBREF4 or pose estimation BIBREF5 . Unsupervised pre-training for computer vision has also shown promise BIBREF6 . In natural language processing (NLP), unsupervised pre-training of language models BIBREF7 , BIBREF8 , BIBREF9 improved many tasks such as text classification, phrase structure parsing and machine translation BIBREF10 , BIBREF11 . In speech processing, pre-training has focused on emotion recogniton BIBREF12 , speaker identification BIBREF13 , phoneme discrimination BIBREF14 , BIBREF15 as well as transferring ASR representations from one language to another BIBREF16 . There has been work on unsupervised learning for speech but the resulting representations have not been applied to improve supervised speech recognition BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . In this paper, we apply unsupervised pre-training to improve supervised speech recognition. This enables exploiting unlabeled audio data which is much easier to collect than labeled data. Our model, , is a convolutional neural network that takes raw audio as input and computes a general representation that can be input to a speech recognition system. The objective is a contrastive loss that requires distinguishing a true future audio sample from negatives BIBREF22 , BIBREF23 , BIBREF15 . Different to previous work BIBREF15 , we move beyond frame-wise phoneme classification and apply the learned representations to improve strong supervised ASR systems. relies on a fully convolutional architecture which can be easily parallelized over time on modern hardware compared to recurrent autoregressive models used in previous work (§ SECREF2 ). Our experimental results on the WSJ benchmark demonstrate that pre-trained representations estimated on about 1,000 hours of unlabeled speech can substantially improve a character-based ASR system and outperform the best character-based result in the literature, Deep Speech 2. 
On the TIMIT task, pre-training enables us to match the best reported result in the literature. In a simulated low-resource setup with only eight hours of transcriped audio data, reduces WER by up to 32% compared to a baseline model that relies on labeled data only (§ SECREF3 & § SECREF4 ). |
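The objective described above distinguishes a true future audio representation from negatives via binary classification. Below is a minimal, assumed formulation of one such noise-contrastive step for a single context vector; the dimensionality, number of negatives, and plain dot-product scorer are illustrative choices rather than the paper's configuration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, true_future, negatives):
    """Binary classification: score the true future latent against randomly drawn negatives."""
    # context, true_future: (dim,); negatives: (n_neg, dim)
    pos_logit = context @ true_future
    neg_logits = negatives @ context
    pos_loss = F.binary_cross_entropy_with_logits(pos_logit, torch.tensor(1.0))
    neg_loss = F.binary_cross_entropy_with_logits(neg_logits, torch.zeros(len(negatives)))
    return pos_loss + neg_loss

dim, n_neg = 16, 10
loss = contrastive_loss(torch.randn(dim), torch.randn(dim), torch.randn(n_neg, dim))
print(loss.item())
```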
123 | How big are the datasets? | In-house dataset consists of 3716 documents
ACE05 dataset consists of 1635 documents | Relation extraction (RE) seeks to detect and classify semantic relationships between entities, which provides useful information for many NLP applications. Since the state-of-the-art RE models require large amounts of manually annotated data and language-specific resources to achieve high accuracy, it is very challenging to transfer an RE model of a resource-rich language to a resource-poor language. In this paper, we propose a new approach for cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language, so that a well-trained source-language neural network RE model can be directly applied to the target language. Experiment results show that the proposed approach achieves very good performance for a number of target languages on both in-house and open datasets, using a small bilingual dictionary with only 1K word pairs. | Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like “What is the capital city of the United States?" Traditional RE models (e.g., BIBREF0, BIBREF1, BIBREF2) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., BIBREF3, BIBREF4, BIBREF5, BIBREF6) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction. All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, annotating RE data by human is expensive and time-consuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model of a resource-rich language to a resource-poor language. There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., BIBREF7, BIBREF8, BIBREF9, BIBREF10. However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice. In this paper, we make the following contributions to cross-lingual RE: We propose a new approach for direct cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language. We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. 
The English RE model achieves state-of-the-art performance without using language-specific resources. We conduct extensive experiments which show that the proposed approach achieves very good performance (up to $79\%$ of the accuracy of the supervised target-language RE model) for a number of target languages on both the in-house and ACE05 datasets BIBREF11, using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems. We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English) RE model. In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7. |
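The core of the transfer approach above is a linear projection of target-language word embeddings into the source-language (English) space, learned from a small bilingual dictionary. The sketch below illustrates this with synthetic vectors and ordinary least squares; real monolingual embeddings, a real 1K-pair dictionary, and possibly an orthogonality (Procrustes) constraint would replace these stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs = 8, 1000                        # e.g. a 1K-word bilingual dictionary
src = rng.normal(size=(n_pairs, dim))         # English embeddings of dictionary words
true_map = rng.normal(size=(dim, dim))
tgt = src @ np.linalg.inv(true_map)           # synthetic target-language embeddings

W, *_ = np.linalg.lstsq(tgt, src, rcond=None)  # learn the projection: target -> source space
projected = tgt @ W                            # now comparable to the English model's input space
print(np.allclose(projected, src, atol=1e-6))  # True: the projection recovers the source-space vectors
```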
125 | What baselines did they compare their model with? | the baseline where path generation uses a standard sequence-to-sequence model augmented with attention mechanism and path verification uses depth-first search | We propose an end-to-end deep learning model for translating free-form natural language instructions to a high-level plan for behavioral robot navigation. We use attention models to connect information from both the user instructions and a topological representation of the environment. We evaluate our model's performance on a new dataset containing 10,050 pairs of navigation instructions. Our model significantly outperforms baseline approaches. Furthermore, our results suggest that it is possible to leverage the environment map as a relevant knowledge base to facilitate the translation of free-form navigational instruction. | Enabling robots to follow navigation instructions in natural language can facilitate human-robot interaction across a variety of applications. For instance, within the service robotics domain, robots can follow navigation instructions to help with mobile manipulation BIBREF0 and delivery tasks BIBREF1 . Interpreting navigation instructions in natural language is difficult due to the high variability in the way people describe routes BIBREF2 . For example, there are a variety of ways to describe the route in Fig. FIGREF4 (a): Each fragment of a sentence within these instructions can be mapped to one or more than one navigation behaviors. For instance, assume that a robot counts with a number of primitive, navigation behaviors, such as “enter the room on the left (or on right)” , “follow the corridor”, “cross the intersection”, etc. Then, the fragment “advance forward” in a navigation instruction could be interpreted as a “follow the corridor” behavior, or as a sequence of “follow the corridor” interspersed with “cross the intersection” behaviors depending on the topology of the environment. Resolving such ambiguities often requires reasoning about “common-sense” concepts, as well as interpreting spatial information and landmarks, e.g., in sentences such as “the room on the left right before the end of the corridor” and “the room which is in the middle of two vases”. In this work, we pose the problem of interpreting navigation instructions as finding a mapping (or grounding) of the commands into an executable navigation plan. While the plan is typically modeled as a formal specification of low-level motions BIBREF2 or a grammar BIBREF3 , BIBREF4 , we focus specifically on translating instructions to a high-level navigation plan based on a topological representation of the environment. This representation is a behavioral navigation graph, as recently proposed by BIBREF5 , designed to take advantage of the semantic structure typical of human environments. The nodes of the graph correspond to semantically meaningful locations for the navigation task, such as kitchens or entrances to rooms in corridors. The edges are parameterized, visuo-motor behaviors that allow a robot to navigate between neighboring nodes, as illustrated in Fig. FIGREF4 (b). Under this framework, complex navigation routes can be achieved by sequencing behaviors without an explicit metric representation of the world. We formulate the problem of following instructions under the framework of BIBREF5 as finding a path in the behavioral navigation graph that follows the desired route, given a known starting location. 
The edges (behaviors) along this path serve to reach the – sometimes implicit – destination requested by the user. As in BIBREF6 , our focus is on the problem of interpreting navigation directions. We assume that a robot can realize valid navigation plans according to the graph. We contribute a new end-to-end model for following directions in natural language under the behavioral navigation framework. Inspired by the information retrieval and question answering literature BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , we propose to leverage the behavioral graph as a knowledge base to facilitate the interpretation of navigation commands. More specifically, the proposed model takes as input user directions in text form, the behavioral graph of the environment encoded as INLINEFORM0 node; edge; node INLINEFORM1 triplets, and the initial location of the robot in the graph. The model then predicts a set of behaviors to reach the desired destination according to the instructions and the map (Fig. FIGREF4 (c)). Our main insight is that using attention mechanisms to correlate navigation instructions with the topological map of the environment can facilitate predicting correct navigation plans. This work also contributes a new dataset of INLINEFORM0 pairs of free-form natural language instructions and high-level navigation plans. This dataset was collected through Mechanical Turk using 100 simulated environments with a corresponding topological map and, to the best of our knowledge, it is the first of its kind for behavioral navigation. The dataset opens up opportunities to explore data-driven methods for grounding navigation commands into high-level motion plans. We conduct extensive experiments to study the generalization capabilities of the proposed model for following natural language instructions. We investigate both generalization to new instructions in known and in new environments. We conclude this paper by discussing the benefits of the proposed approach as well as opportunities for future research based on our findings. |
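The model described above consumes the behavioral graph as <node; edge; node> triplets alongside the instruction text. The snippet below only shows that serialisation step on an invented three-edge graph; node names and behaviors are placeholders, and the attention-based translation model itself is not sketched here.

```python
# Toy behavioral navigation graph: (source node, destination node) -> behavior label.
graph = {
    ("room-1", "corridor-1"): "exit room on the right",
    ("corridor-1", "corridor-2"): "cross the intersection",
    ("corridor-2", "kitchen"): "enter room on the left",
}

# Serialise the graph as <node; edge; node> triplets for the encoder.
triplets = [f"<{src}; {behavior}; {dst}>" for (src, dst), behavior in graph.items()]
for t in triplets:
    print(t)
```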
126 | What additional features are proposed for future work? | distinguishing between clinically positive and negative phenomena within each risk factor domain and accounting for structured data collected on the target cohort | Readmission after discharge from a hospital is disruptive and costly, regardless of the reason. However, it can be particularly problematic for psychiatric patients, so predicting which patients may be readmitted is critically important but also very difficult. Clinical narratives in psychiatric electronic health records (EHRs) span a wide range of topics and vocabulary; therefore, a psychiatric readmission prediction model must begin with a robust and interpretable topic extraction component. We created a data pipeline for using document vector similarity metrics to perform topic extraction on psychiatric EHR data in service of our long-term goal of creating a readmission risk classifier. We show initial results for our topic extraction model and identify additional features we will be incorporating in the future. | Psychotic disorders typically emerge in late adolescence or early adulthood BIBREF0 , BIBREF1 and affect approximately 2.5-4% of the population BIBREF2 , BIBREF3 , making them one of the leading causes of disability worldwide BIBREF4 . A substantial proportion of psychiatric inpatients are readmitted after discharge BIBREF5 . Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF6 , BIBREF7 . Reducing readmission risk is therefore a major unmet need of psychiatric care. Developing clinically implementable machine learning tools to enable accurate assessment of risk factors associated with readmission offers opportunities to inform the selection of treatment interventions and implement appropriate preventive measures. In psychiatry, traditional strategies to study readmission risk factors rely on clinical observation and manual retrospective chart review BIBREF8 , BIBREF9 . This approach, although benefitting from clinical expertise, does not scale well for large data sets, is effort-intensive, and lacks automation. An efficient, more robust, and cheaper NLP-based alternative approach has been developed and met with some success in other medical fields BIBREF10 . However, this approach has seldom been applied in psychiatry because of the unique characteristics of psychiatric medical record content. There are several challenges for topic extraction when dealing with clinical narratives in psychiatric EHRs. First, the vocabulary used is highly varied and context-sensitive. A patient may report “feeling `really great and excited'" – symptoms of mania – without any explicit mention of keywords that differ from everyday vocabulary. Also, many technical terms in clinical narratives are multiword expressions (MWEs) such as `obsessive body image', `linear thinking', `short attention span', or `panic attack'. These phrasemes are comprised of words that in isolation do not impart much information in determining relatedness to a given topic but do in the context of the expression. Second, the narrative structure in psychiatric clinical narratives varies considerably in how the same phenomenon can be described. Hallucinations, for example, could be described as “the patient reports auditory hallucinations," or “the patient has been hearing voices for several months," amongst many other possibilities. Third, phenomena can be directly mentioned without necessarily being relevant to the patient specifically. 
Psychosis patient discharge summaries, for instance, can include future treatment plans (e.g. “Prevent relapse of a manic or major depressive episode.", “Prevent recurrence of psychosis.") containing vocabulary that at the word-level seem strongly correlated with readmission risk. Yet at the paragraph-level these do not indicate the presence of a readmission risk factor in the patient and in fact indicate the absence of a risk factor that was formerly present. Lastly, given the complexity of phenotypic assessment in psychiatric illnesses, patients with psychosis exhibit considerable differences in terms of illness and symptom presentation. The constellation of symptoms leads to various diagnoses and comorbidities that can change over time, including schizophrenia, schizoaffective disorder, bipolar disorder with psychosis, and substance use induced psychosis. Thus, the lexicon of words and phrases used in EHRs differs not only across diagnoses but also across patients and time. Taken together, these factors make topic extraction a difficult task that cannot be accomplished by keyword search or other simple text-mining techniques. To identify specific risk factors to focus on, we not only reviewed clinical literature of risk factors associated with readmission BIBREF11 , BIBREF12 , but also considered research related to functional remission BIBREF13 , forensic risk factors BIBREF14 , and consulted clinicians involved with this project. Seven risk factor domains – Appearance, Mood, Interpersonal, Occupation, Thought Content, Thought Process, and Substance – were chosen because they are clinically relevant, consistent with literature, replicable across data sets, explainable, and implementable in NLP algorithms. In our present study, we evaluate multiple approaches to automatically identify which risk factor domains are associated with which paragraphs in psychotic patient EHRs. We perform this study in support of our long-term goal of creating a readmission risk classifier that can aid clinicians in targeting individual treatment interventions and assessing patient risk of harm (e.g. suicide risk, homicidal risk). Unlike other contemporary approaches in machine learning, we intend to create a model that is clinically explainable and flexible across training data while maintaining consistent performance. To incorporate clinical expertise in the identification of risk factor domains, we undertake an annotation project, detailed in section 3.1. We identify a test set of over 1,600 EHR paragraphs which a team of three domain-expert clinicians annotate paragraph-by-paragraph for relevant risk factor domains. Section 3.2 describes the results of this annotation task. We then use the gold standard from the annotation project to assess the performance of multiple neural classification models trained exclusively on Term Frequency – Inverse Document Frequency (TF-IDF) vectorized EHR data, described in section 4. To further improve the performance of our model, we incorporate domain-relevant MWEs identified using all in-house data. |
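As a schematic of the kind of pipeline evaluated above, TF-IDF-vectorised paragraphs fed to a classifier over risk-factor domains, the snippet below uses scikit-learn with a handful of invented example paragraphs and labels. The paper's actual models are neural classifiers trained on clinician-annotated EHR paragraphs, so treat this only as a minimal stand-in; the bigram setting gestures at the multiword expressions discussed above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example paragraphs and risk-factor domain labels.
paragraphs = [
    "patient reports auditory hallucinations for several months",
    "patient is looking forward to returning to work next week",
    "has been hearing voices and expresses paranoid thought content",
    "denies substance use and is sleeping well",
]
labels = ["Thought Content", "Occupation", "Thought Content", "Substance"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(paragraphs, labels)
print(clf.predict(["the patient has been hearing voices"]))  # likely ['Thought Content']
```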
127 | How is morphology knowledge implemented in the method? | A BPE model is applied to the stem after morpheme segmentation. | Neural machine translation (NMT) has achieved impressive performance on machine translation task in recent years. However, in consideration of efficiency, a limited-size vocabulary that only contains the top-N highest frequency words are employed for model training, which leads to many rare and unknown words. It is rather difficult when translating from the low-resource and morphologically-rich agglutinative languages, which have complex morphology and large vocabulary. In this paper, we propose a morphological word segmentation method on the source-side for NMT that incorporates morphology knowledge to preserve the linguistic and semantic information in the word structure while reducing the vocabulary size at training time. It can be utilized as a preprocessing tool to segment the words in agglutinative languages for other natural language processing (NLP) tasks. Experimental results show that our morphologically motivated word segmentation method is better suitable for the NMT model, which achieves significant improvements on Turkish-English and Uyghur-Chinese machine translation tasks on account of reducing data sparseness and language complexity. | Neural machine translation (NMT) has achieved impressive performance on machine translation task in recent years for many language pairs BIBREF0, BIBREF1, BIBREF2. However, in consideration of time cost and space capacity, the NMT model generally employs a limited-size vocabulary that only contains the top-N highest frequency words (commonly in the range of 30K to 80K) BIBREF3, which leads to the Out-of-Vocabulary (OOV) problem following with inaccurate and terrible translation results. Research indicated that sentences with too many unknown words tend to be translated much more poorly than sentences with mainly frequent words. For the low-resource and source-side morphologically-rich machine translation tasks, such as Turkish-English and Uyghur-Chinese, all the above issues are more serious due to the fact that the NMT model cannot effectively identify the complex morpheme structure or capture the linguistic and semantic information with too many rare and unknown words in the training corpus. Both the Turkish and Uyghur are agglutinative and highly-inflected languages in which the word is formed by suffixes attaching to a stem BIBREF4. The word consists of smaller morpheme units without any splitter between them and its structure can be denoted as “stem + suffix1 + suffix2 + ... + suffixN”. A stem is attached in the rear by zero to many suffixes that have many inflected and morphological variants depending on case, number, gender, and so on. The complex morpheme structure and relatively free constituent order can produce very large vocabulary because of the derivational morphology, so when translating from the agglutinative languages, many words are unseen at training time. Moreover, due to the semantic context, the same word generally has different segmentation forms in the training corpus. For the purpose of incorporating morphology knowledge of agglutinative languages into word segmentation for NMT, we propose a morphological word segmentation method on the source-side of Turkish-English and Uyghur-Chinese machine translation tasks, which segments the complex words into simple and effective morpheme units while reducing the vocabulary size for model training. 
In this paper, we investigate and compare the following segmentation strategies: (1) stem with combined suffix; (2) stem with singular suffix; (3) Byte Pair Encoding (BPE); (4) BPE on stem with combined suffix; (5) BPE on stem with singular suffix. The latter two segmentation strategies are our newly proposed methods. Experimental results show that our morphologically motivated word segmentation method achieves significant improvements of up to 1.2 and 2.5 BLEU points on the Turkish-English and Uyghur-Chinese machine translation tasks respectively, over the strong baseline of the pure BPE method, indicating that it can provide better translation performance for the NMT model. |
129 | How is the performance on the task evaluated? | Comparison of test accuracies of neural network models on an inflection task and qualitative analysis of the errors | How does knowledge of one language's morphology influence learning of inflection rules in a second one? In order to investigate this question in artificial neural network models, we perform experiments with a sequence-to-sequence architecture, which we train on different combinations of eight source and three target languages. A detailed analysis of the model outputs suggests the following conclusions: (i) if source and target language are closely related, acquisition of the target language's inflectional morphology constitutes an easier task for the model; (ii) knowledge of a prefixing (resp. suffixing) language makes acquisition of a suffixing (resp. prefixing) language's morphology more challenging; and (iii) surprisingly, a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology, independent of their relatedness. | A widely agreed-on fact in language acquisition research is that learning of a second language (L2) is influenced by a learner's native language (L1) BIBREF0, BIBREF1. A language's morphosyntax seems to be no exception to this rule BIBREF2, but the exact nature of this influence remains unknown. For instance, it is unclear whether it is constraints imposed by the phonological or by the morphosyntactic attributes of the L1 that are more important during the process of learning an L2's morphosyntax. Within the area of natural language processing (NLP) research, experimenting on neural network models just as if they were human subjects has recently been gaining popularity BIBREF3, BIBREF4, BIBREF5. Often, so-called probing tasks are used, which require a specific subset of linguistic knowledge and can, thus, be leveraged for qualitative evaluation. The goal is to answer the question: What do neural networks learn that helps them to succeed in a given task? Neural network models, and specifically sequence-to-sequence models, have pushed the state of the art for morphological inflection – the task of learning a mapping from lemmata to their inflected forms – in the last years BIBREF6. Thus, in this work, we experiment on such models, asking not what they learn, but, motivated by the respective research on human subjects, the related question of how what they learn depends on their prior knowledge. We manually investigate the errors made by artificial neural networks for morphological inflection in a target language after pretraining on different source languages. We aim at finding answers to two main questions: (i) Do errors systematically differ between source languages? (ii) Do these differences seem explainable, given the properties of the source and target languages? In other words, we are interested in exploring if and how L2 acquisition of morphological inflection depends on the L1, i.e., the "native language", in neural network models. To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu. We pretrain a neural sequence-to-sequence architecture on each of the source languages and then fine-tune the resulting models on small datasets in each of the target languages. 
Analyzing the errors made by the systems, we find that (i) source and target language being closely related simplifies the successful learning of inflection in the target language, (ii) the task is harder to learn in a prefixing language if the source language is suffixing – as well as the other way around, and (iii) a source language which exhibits an agglutinative morphology simplifies learning of a second language's inflectional morphology. |
131 | what datasets were used? | IWSLT14 German-English, IWSLT14 Turkish-English, WMT14 English-German | Recently, neural machine translation has achieved remarkable progress by introducing well-designed deep neural networks into its encoder-decoder framework. From the optimization perspective, residual connections are adopted to improve learning performance for both encoder and decoder in most of these deep architectures, and advanced attention connections are applied as well. Inspired by the success of the DenseNet model in computer vision problems, in this paper, we propose a densely connected NMT architecture (DenseNMT) that is able to train more efficiently for NMT. The proposed DenseNMT not only allows dense connection in creating new features for both encoder and decoder, but also uses the dense attention structure to improve attention quality. Our experiments on multiple datasets show that DenseNMT structure is more competitive and efficient. | Neural machine translation (NMT) is a challenging task that attracts lots of attention in recent years. Starting from the encoder-decoder framework BIBREF0 , NMT starts to show promising results in many language pairs. The evolving structures of NMT models in recent years have made them achieve higher scores and become more favorable. The attention mechanism BIBREF1 added on top of encoder-decoder framework is shown to be very useful to automatically find alignment structure, and single-layer RNN-based structure has evolved into deeper models with more efficient transformation functions BIBREF2 , BIBREF3 , BIBREF4 . One major challenge of NMT is that its models are hard to train in general due to the complexity of both the deep models and languages. From the optimization perspective, deeper models are hard to efficiently back-propagate the gradients, and this phenomenon as well as its solution is better explored in the computer vision society. Residual networks (ResNet) BIBREF5 achieve great performance in a wide range of tasks, including image classification and image segmentation. Residual connections allow features from previous layers to be accumulated to the next layer easily, and make the optimization of the model efficiently focus on refining upper layer features. NMT is considered as a challenging problem due to its sequence-to-sequence generation framework, and the goal of comprehension and reorganizing from one language to the other. Apart from the encoder block that works as a feature generator, the decoder network combining with the attention mechanism bring new challenges to the optimization of the models. While nowadays best-performing NMT systems use residual connections, we question whether this is the most efficient way to propagate information through deep models. In this paper, inspired by the idea of using dense connections for training computer vision tasks BIBREF6 , we propose a densely connected NMT framework (DenseNMT) that efficiently propagates information from the encoder to the decoder through the attention component. Taking the CNN-based deep architecture as an example, we verify the efficiency of DenseNMT. 
Our contributions in this work include: (i) by comparing the loss curve, we show that DenseNMT allows the model to pass information more efficiently, and speeds up training; (ii) we show through ablation study that dense connections in all three blocks altogether help improve the performance, while not increasing the number of parameters; (iii) DenseNMT allows the models to achieve similar performance with much smaller embedding size; (iv) DenseNMT on IWSLT14 German-English and Turkish-English translation tasks achieves new benchmark BLEU scores, and the result on WMT14 English-German task is more competitive than the residual connections based baseline model. |
133 | How do they obtain human judgements? | Using crowdsourcing | Literary critics often attempt to uncover meaning in a single work of literature through careful reading and analysis. Applying natural language processing methods to aid in such literary analyses remains a challenge in digital humanities. While most previous work focuses on"distant reading"by algorithmically discovering high-level patterns from large collections of literary works, here we sharpen the focus of our methods to a single literary theory about Italo Calvino's postmodern novel Invisible Cities, which consists of 55 short descriptions of imaginary cities. Calvino has provided a classification of these cities into eleven thematic groups, but literary scholars disagree as to how trustworthy his categorization is. Due to the unique structure of this novel, we can computationally weigh in on this debate: we leverage pretrained contextualized representations to embed each city's description and use unsupervised methods to cluster these embeddings. Additionally, we compare results of our computational approach to similarity judgments generated by human readers. Our work is a first step towards incorporating natural language processing into literary criticism. | Literary critics form interpretations of meaning in works of literature. Building computational models that can help form and test these interpretations is a fundamental goal of digital humanities research BIBREF0 . Within natural language processing, most previous work that engages with literature relies on “distant reading” BIBREF1 , which involves discovering high-level patterns from large collections of stories BIBREF2 , BIBREF3 . We depart from this trend by showing that computational techniques can also engage with literary criticism at a closer distance: concretely, we use recent advances in text representation learning to test a single literary theory about the novel Invisible Cities by Italo Calvino. Framed as a dialogue between the traveler Marco Polo and the emperor Kublai Khan, Invisible Cities consists of 55 prose poems, each of which describes an imaginary city. Calvino categorizes these cities into eleven thematic groups that deal with human emotions (e.g., desires, memories), general objects (eyes, sky, signs), and unusual properties (continuous, hidden, thin). Many critics argue that Calvino's labels are not meaningful, while others believe that there is a distinct thematic separation between the groups, including the author himself BIBREF4 . The unique structure of this novel — each city's description is short and self-contained (Figure FIGREF1 ) — allows us to computationally examine this debate. As the book is too small to train any models, we leverage recent advances in large-scale language model-based representations BIBREF5 , BIBREF6 to compute a representation of each city. We feed these representations into a clustering algorithm that produces exactly eleven clusters of five cities each and evaluate them against both Calvino's original labels and crowdsourced human judgments. While the overall correlation with Calvino's labels is low, both computers and humans can reliably identify some thematic groups associated with concrete objects. While prior work has computationally analyzed a single book BIBREF7 , our work goes beyond simple word frequency or n-gram counts by leveraging the power of pretrained language models to engage with literary criticism. 
Admittedly, our approach and evaluations are specific to Invisible Cities, but we believe that similar analyses of more conventionally-structured novels could become possible as text representation methods improve. We also highlight two challenges of applying computational methods to literary criticisms: (1) text representation methods are imperfect, especially when given writing as complex as Calvino's; and (2) evaluation is difficult because there is no consensus among literary critics on a single “correct” interpretation. |
134 | Does this approach perform better in the multi-domain or single-domain setting? | single-domain setting | Existing approaches to dialogue state tracking rely on pre-defined ontologies consisting of a set of all possible slot types and values. Though such approaches exhibit promising performance on single-domain benchmarks, they suffer from computational complexity that increases proportionally to the number of pre-defined slots that need tracking. This issue becomes more severe when it comes to multi-domain dialogues which include larger numbers of slots. In this paper, we investigate how to approach DST using a generation framework without the pre-defined ontology list. Given each turn of user utterance and system response, we directly generate a sequence of belief states by applying a hierarchical encoder-decoder structure. In this way, the computational complexity of our model will be a constant regardless of the number of pre-defined slots. Experiments on both the multi-domain and the single-domain dialogue state tracking datasets show that our model not only scales easily with the increasing number of pre-defined domains and slots but also reaches state-of-the-art performance. | A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system BIBREF7 . For each dialogue turn, a DST module takes a user utterance and the dialogue history as input, and outputs a belief estimate of the dialogue state. Then a machine action is decided based on the dialogue state according to a dialogue policy module, after which a machine response is generated. Traditionally, a dialogue state consists of a set of requests and joint goals, both of which are represented by a set of slot-value pairs (e.g. (request, phone), (area, north), (food, Japanese)) BIBREF8 . In a recently proposed multi-domain dialogue state tracking dataset, MultiWoZ BIBREF9 , a representation of the dialogue state consisting of a hierarchical structure of domain, slot, and value is proposed. This is a more practical scenario since dialogues often include multiple domains simultaneously. Many recently proposed DSTs BIBREF2 , BIBREF10 are based on pre-defined ontology lists that specify all possible slot values in advance. To generate a distribution over the candidate set, previous works often take each of the slot-value pairs as input for scoring. However, in real-world scenarios, it is often not practical to enumerate all possible slot-value pairs and perform scoring from a large dynamically changing knowledge base BIBREF11 . To tackle this problem, a popular direction is to build a fixed-length candidate set that is dynamically updated throughout the dialogue development. A comparison table briefly summarizes the inference time complexity of multiple state-of-the-art DST models following this direction. Since the inference complexity of all of these previous models is at least proportional to the number of slots, these models will struggle to scale to multi-domain datasets with much larger numbers of pre-defined slots. In this work, we formulate the dialogue state tracking task as a sequence generation problem, instead of formulating the task as a pair-wise prediction problem as in existing work. We propose the COnditional MEmory Relation Network (COMER), a scalable and accurate dialogue state tracker that has a constant inference time complexity. 
Specifically, our model consists of an encoder-decoder network with a hierarchically stacked decoder to first generate the slot sequences in the belief state and then for each slot generate the corresponding value sequences. The parameters are shared among all of our decoders for the scalability of the depth of the hierarchical structure of the belief states. COMER applies BERT contextualized word embeddings BIBREF12 and BPE BIBREF13 for sequence encoding to ensure the uniqueness of the representations of the unseen words. The word embeddings for sequence generation are initialized and fixed with the static word embeddings generated from BERT to have the potential of generating unseen words. |
135 | How many samples did they generate for the artificial language? | 70,000 | Can neural nets learn logic? We approach this classic question with current methods, and demonstrate that recurrent neural networks can learn to recognize first order logical entailment relations between expressions. We define an artificial language in first-order predicate logic, generate a large dataset of sample 'sentences', and use an automatic theorem prover to infer the relation between random pairs of such sentences. We describe a Siamese neural architecture trained to predict the logical relation, and experiment with recurrent and recursive networks. Siamese Recurrent Networks are surprisingly successful at the entailment recognition task, reaching near perfect performance on novel sentences (consisting of known words), and even outperforming recursive networks. We report a series of experiments to test the ability of the models to perform compositional generalization. In particular, we study how they deal with sentences of unseen length, and sentences containing unseen words. We show that set-ups using LSTMs and GRUs obtain high scores on these tests, demonstrating a form of compositionality. | State-of-the-art models for almost all popular natural language processing tasks are based on deep neural networks, trained on massive amounts of data. A key question that has been raised in many different forms is to what extent these models have learned the compositional generalizations that characterize language, and to what extent they rely on storing massive amounts of exemplars and only make `local' generalizations BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . This question has led to (sometimes heated) debates between deep learning enthusiasts that are convinced neural networks can do almost anything, and skeptics that are convinced some types of generalization are fundamentally beyond reach for deep learning systems, pointing out that crucial tests distinguishing between generalization and memorization have not been applied. In this paper, we take a pragmatic perspective on these issues. As the target for learning we use entailment relations in an artificial language, defined using first order logic (FOL), that is unambiguously compositional. We ask whether popular deep learning methods are capable in principle of acquiring the compositional rules that characterize it, and focus in particular on recurrent neural networks that are unambiguously `connectionist': trained recurrent nets do not rely on symbolic data and control structures such as trees and global variable binding, and can straightforwardly be implemented in biological networks BIBREF8 or neuromorphic hardware BIBREF9 . We report positive results on this challenge, and in the process develop a series of tests for compositional generalization that address the concerns of deep learning skeptics. The paper makes three main contributions. First, we develop a protocol for automatically generating data that can be used in entailment recognition tasks. Second, we demonstrate that several deep learning architectures succeed at one such task. Third, we present and apply a number of experiments to test whether models are capable of compositional generalization. |
137 | Why does the approach developed for English not work on other languages? | Because, unlike other languages, English does not mark grammatical gender | Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. | One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases. This is because NLP systems depend on language corpora, which are inherently “not objective; they are creations of human design” BIBREF0 . One type of societal bias that has received considerable attention from the NLP community is gender stereotyping BIBREF1 , BIBREF2 , BIBREF3 . Gender stereotypes can manifest in language in overt ways. For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering. Consequently, any NLP system that is trained on such a corpus will likely learn to associate engineer with men, but not with women BIBREF4 . To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English BIBREF5 , BIBREF6 , BIBREF7 . Yet, gender stereotypes also exist in other languages because they are a function of society, not of grammar. Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement BIBREF8 . In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns. This means that if the gender of one word changes, the others have to be updated to match. As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped BIBREF9 , will yield ungrammatical sentences. Consider the Spanish phrase el ingeniero experto (the skilled engineer). Replacing ingeniero with ingeniera is insufficient: el must also be replaced with la and experto with experta. In this paper, we present a new approach to counterfactual data augmentation BIBREF10 for mitigating gender stereotypes associated with animate nouns (i.e., nouns that represent people) for morphologically rich languages. We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns. We use this model as part of a four-step process, depicted in the pipeline figure, to reinflect entire sentences following an intervention on the grammatical gender of one word. We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively. We also conduct an extrinsic evaluation using four languages. 
Following prior work (DBLP:journals/corr/abs-1807-11714), we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality. |
141 | How many layers does their system have? | 4 layers | For a news content distribution platform like Dailyhunt, Named Entity Recognition is a pivotal task for building better user recommendation and notification algorithms. Apart from identifying names, locations and organisations from the news for 13+ Indian languages and using them in algorithms, we also need to identify n-grams which do not necessarily fit the definition of Named-Entity, yet they are important. For example, "me too movement", "beef ban", "alwar mob lynching". In this exercise, given an English language text, we are trying to detect case-less n-grams which convey important information and can be used as topics and/or hashtags for a news item. The model is built using Wikipedia titles data, a private English news corpus and a BERT-Multilingual pre-trained model, with a Bi-GRU and CRF architecture. It shows promising results when compared with the industry-best Flair, Spacy and Stanford-caseless-NER in terms of F1 and especially Recall. | Named-Entity-Recognition (NER) approaches can be categorised broadly into three types: detecting named entities with predefined dictionaries and rules BIBREF2, with statistical approaches BIBREF3, and with deep learning approaches BIBREF4. Stanford CoreNLP NER is a widely used baseline for many applications BIBREF5. Authors have used approaches of Gibbs sampling and conditional random fields (CRF) for non-local information gathering and then the Viterbi algorithm to infer the most likely state in the CRF sequence output BIBREF6. Deep learning approaches in NLP use document, word or token representations instead of one-hot encoded vectors. With the rise of transfer learning, pretrained Word2Vec BIBREF7, GloVe BIBREF8 and fastText BIBREF9, which provide word embeddings, were used with recurrent neural networks (RNN) to detect named entities. Using LSTM layers followed by CRF layers with pretrained word embeddings as input has been explored in BIBREF10. Also, CNNs with character embeddings as inputs followed by bi-directional LSTM and CRF layers were explored in BIBREF11. With the introduction of attention and transformers BIBREF12, many deep architectures have emerged in the last few years. The approach of using these pretrained models like Elmo BIBREF13, Flair BIBREF14 and BERT BIBREF0 for word representations, followed by a variety of LSTM and CRF combinations, was tested by the authors in BIBREF15, and these approaches show state-of-the-art performance. There are very few approaches where the caseless NER task is explored. In a recent paper BIBREF16, the authors explored the effects of "Cased" entities and how a variety of networks perform, and they show that the most effective strategy is a concatenation of cased and lowercased training data, producing a single model with high performance on both cased and uncased text. In another paper BIBREF17, the authors proposed True-Case pre-training before using a BiLSTM+CRF approach to detect named entities effectively. Though it shows good results over previous approaches, it is not useful in the Indian-languages context as there is no concept of case. In our approach, we focus more on data preparation for our definition of topics, using some of the state-of-the-art architectures based on BERT, LSTM/GRU and CRF layers as they have been explored in the previous approaches mentioned above. Detecting caseless topics with higher recall and reasonable precision has been given priority over F1 score. And comparisons have been made with available and ready-to-use open-source libraries from the productionization perspective. |
143 | What context modelling methods are evaluated? | Concat
Turn
Gate
Action Copy
Tree Copy
SQL Attn
Concat + Action Copy
Concat + Tree Copy
Concat + SQL Attn
Turn + Action Copy
Turn + Tree Copy
Turn + SQL Attn
Turn + SQL Attn + Action Copy | Recently semantic parsing in context has received a considerable attention, which is challenging since there are complex contextual phenomena. Previous works verified their proposed methods in limited scenarios, which motivates us to conduct an exploratory study on context modeling methods under real-world semantic parsing in context. We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. We evaluate 13 context modeling methods on two large complex cross-domain datasets, and our best model achieves state-of-the-art performances on both datasets with significant improvements. Furthermore, we summarize the most frequent contextual phenomena, with a fine-grained analysis on representative models, which may shed light on potential research directions. | Semantic parsing, which translates a natural language sentence into its corresponding executable logic form (e.g. Structured Query Language, SQL), relieves users from the burden of learning techniques behind the logic form. The majority of previous studies on semantic parsing assume that queries are context-independent and analyze them in isolation. However, in reality, users prefer to interact with systems in a dialogue, where users are allowed to ask context-dependent incomplete questions BIBREF0. That arises the task of Semantic Parsing in Context (SPC), which is quite challenging as there are complex contextual phenomena. In general, there are two sorts of contextual phenomena in dialogues: Coreference and Ellipsis BIBREF1. Figure FIGREF1 shows a dialogue from the dataset SParC BIBREF2. After the question “What is id of the car with the max horsepower?”, the user poses an elliptical question “How about with the max mpg?”, and a question containing pronouns “Show its Make!”. Only when completely understanding the context, could a parser successfully parse the incomplete questions into their corresponding SQL queries. A number of context modeling methods have been suggested in the literature to address SPC BIBREF3, BIBREF4, BIBREF2, BIBREF5, BIBREF6. These methods proposed to leverage two categories of context: recent questions and precedent logic form. It is natural to leverage recent questions as context. Taking the example from Figure FIGREF1, when parsing $Q_3$, we also need to take $Q_1$ and $Q_2$ as input. We can either simply concatenate the input questions, or use a model to encode them hierarchically BIBREF4. As for the second category, instead of taking a bag of recent questions as input, it only considers the precedent logic form. For instance, when parsing $Q_3$, we only need to take $S_2$ as context. With such a context, the decoder can attend over it, or reuse it via a copy mechanism BIBREF4, BIBREF5. Intuitively, methods that fall into this category enjoy better generalizability, as they only rely on the last logic form as context, no matter at which turn. Notably, these two categories of context can be used simultaneously. However, it remains unclear how far we are from effective context modeling. First, there is a lack of thorough comparisons of typical context modeling methods on complex SPC (e.g. cross-domain). Second, none of previous works verified their proposed context modeling methods with the grammar-based decoding technique, which has been developed for years and proven to be highly effective in semantic parsing BIBREF7, BIBREF8, BIBREF9. 
To obtain better performance, it is worthwhile to study how context modeling methods collaborate with the grammar-based decoding. Last but not the least, there is limited understanding of how context modeling methods perform on various contextual phenomena. An in-depth analysis can shed light on potential research directions. In this paper, we try to fulfill the above insufficiency via an exploratory study on real-world semantic parsing in context. Concretely, we present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it. Through experiments on two large complex cross-domain datasets, SParC BIBREF2 and CoSQL BIBREF6, we carefully compare and analyze the performance of different context modeling methods. Our best model achieves state-of-the-art (SOTA) performances on both datasets with significant improvements. Furthermore, we summarize and generalize the most frequent contextual phenomena, with a fine-grained analysis on representative models. Through the analysis, we obtain some interesting findings, which may benefit the community on the potential research directions. We will open-source our code and materials to facilitate future work upon acceptance. |
145 | Is the baseline a non-hierarchical model like BERT? | There were hierarchical and non-hierarchical baselines; BERT was one of those baselines | Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these inaccurate labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders (Devlin et al., 2018), we propose Hibert (as shorthand for HIerarchical Bidirectional Encoder Representations from Transformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained Hibert to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of the New York Times dataset. We also achieve state-of-the-art performance on these two datasets. | Automatic document summarization is the task of rewriting a document into its shorter form while still retaining its important content. Over the years, many paradigms for document summarization have been explored (see Nenkova and McKeown (2011) for an overview). The two most popular among them are extractive approaches and abstractive approaches. As the name implies, extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases which are not in the original document. Extractive summarization is usually modeled as a sentence ranking problem with length constraints (e.g., max number of words or sentences). Top ranked sentences (under constraints) are selected as summaries. Early attempts mostly leverage manually engineered features BIBREF1 . Based on these sparse features, sentences are selected using a classifier or a regression model. Later, the feature engineering part in this paradigm was replaced with neural networks. Cheng and Lapata (2016) propose a hierarchical long short-term memory network (LSTM; BIBREF2 ) to encode a document and then use another LSTM to predict binary labels for each sentence in the document. This architecture has been widely adopted recently BIBREF3 , BIBREF4 , BIBREF5 . Our model also employs a hierarchical document encoder, but we adopt a hierarchical transformer BIBREF6 rather than a hierarchical LSTM, because recent studies BIBREF6 , BIBREF0 show that the transformer model performs better than the LSTM in many tasks. Abstractive models did not attract much attention until recently. They are mostly based on sequence-to-sequence (seq2seq) models BIBREF7 , where a document is viewed as a sequence and its summary is viewed as another sequence. Although seq2seq-based summarizers can be equipped with a copy mechanism BIBREF8 , BIBREF9 , a coverage model BIBREF9 and reinforcement learning BIBREF10 , there is still no guarantee that the generated summaries are grammatical and convey the same meaning as the original document does. It seems that extractive models are more reliable than their abstractive counterparts. However, extractive models require sentence-level labels, which are usually not included in most summarization datasets (most datasets only contain document-summary pairs). Sentence labels are usually obtained by rule-based methods (e.g., maximizing the ROUGE score between a set of sentences and reference summaries) and may not be accurate. 
Extractive models proposed recently BIBREF11 , BIBREF3 employ hierarchical document encoders and even have neural decoders, which are complex. Training such complex neural models with inaccurate binary labels is challenging. We observed in our initial experiments on one of our datasets that our extractive model (see Section "Extractive Summarization" for details) overfits to the training set quickly after the second epoch, which indicates the training set may not be fully utilized. Inspired by the recent pre-training work in natural language processing BIBREF12 , BIBREF13 , BIBREF0 , our solution to this problem is to first pre-train the “complex” part (i.e., the hierarchical encoder) of the extractive model on unlabeled data and then learn to classify sentences with our model initialized from the pre-trained encoder. In this paper, we propose Hibert, which stands for HIerarchical Bidirectional Encoder Representations from Transformers. We design an unsupervised method to pre-train Hibert for document modeling. We apply the pre-trained Hibert to the task of document summarization and achieve state-of-the-art performance on both the CNN/Dailymail and New York Times datasets. |
147 | How much better are the results compared to baseline models? | The F1 score of the authors' best model is 55.98, compared to BiLSTM and FastText baselines whose F1 scores are slightly higher than 46.61. | Research in the social sciences and psychology has shown that the persuasiveness of an argument depends not only on the language employed, but also on attributes of the source/communicator, the audience, and the appropriateness and strength of the argument's claims given the pragmatic and discourse context of the argument. Among these characteristics of persuasive arguments, prior work in NLP does not explicitly investigate the effect of the pragmatic and discourse context when determining argument quality. This paper presents a new dataset to initiate the study of this aspect of argumentation: it consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims. We further propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely only on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument. | Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depends not only on the language employed, but also on the credibility and character of the communicator (i.e. ethos) BIBREF0, BIBREF1, BIBREF2; the traits and prior beliefs of the audience BIBREF3, BIBREF4, BIBREF5, BIBREF6; and the pragmatic context in which the argument is presented (i.e. kairos) BIBREF7, BIBREF8. Research in Natural Language Processing (NLP) has only partially corroborated these findings. One very influential line of work, for example, develops computational methods to automatically determine the linguistic characteristics of persuasive arguments BIBREF9, BIBREF10, BIBREF11, but it does so without controlling for the audience, the communicator or the pragmatic context. Very recent work, on the other hand, shows that attributes of both the audience and the communicator constitute important cues for determining argument strength BIBREF12, BIBREF13. They further show that audience and communicator attributes can influence the relative importance of linguistic features for predicting the persuasiveness of an argument. These results confirm previous findings in the social sciences that show a person's perception of an argument can be influenced by his background and personality traits. To the best of our knowledge, however, no NLP studies explicitly investigate the role of kairos — a component of pragmatic context that refers to the context-dependent “timeliness” and “appropriateness” of an argument and its claims within an argumentative discourse — in argument quality prediction. Among the many social science studies of attitude change, the order in which argumentative claims are shared with the audience has been studied extensively: 10.1086/209393, for example, summarize studies showing that the argument-related claims a person is exposed to beforehand can affect his perception of an alternative argument in complex ways. article-3 similarly find that changes in an argument's context can have a big impact on the audience's perception of the argument. Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media BIBREF10, BIBREF14. 
However, in social media not all posts have to express arguments or stay on topic BIBREF15, and qualitative evaluation of the posts can be influenced by many other factors such as interactions between the individuals BIBREF16. Therefore, it is difficult to measure the effect of argumentative pragmatic context alone in argument quality prediction without the effect of these confounding factors using the datasets and models currently available in this line of research. In this paper, we study the role of kairos on argument quality prediction by examining the individual claims of an argument for their timeliness and appropriateness in the context of a particular line of argument. We define kairos as the sequence of argumentative text (e.g. claims) along a particular line of argumentative reasoning. To start, we present a dataset extracted from kialo.com of over 47,000 claims that are part of a diverse collection of arguments on 741 controversial topics. The structure of the website dictates that each argument must present a supporting or opposing claim for its parent claim, and stay within the topic of the main thesis. Rather than being posts on a social media platform, these are community-curated claims. Furthermore, for each presented claim, the audience votes on its impact within the given line of reasoning. Critically then, the dataset includes the argument context for each claim, allowing us to investigate the characteristics associated with impactful arguments. With the dataset in hand, we propose the task of studying the characteristics of impactful claims by (1) taking the argument context into account, (2) studying the extent to which this context is important, and (3) determining the representation of context that is more effective. To the best of our knowledge, ours is the first dataset that includes claims with both impact votes and the corresponding context of the argument. |
148 | How big is dataset used for training/testing? | 4,261 days for France and 4,748 for the UK | While ubiquitous, textual sources of information such as company reports, social media posts, etc. are hardly included in prediction algorithms for time series, despite the relevant information they may contain. In this work, openly accessible daily weather reports from France and the United-Kingdom are leveraged to predict time series of national electricity consumption, average temperature and wind-speed with a single pipeline. Two methods of numerical representation of text are considered, namely traditional Term Frequency - Inverse Document Frequency (TF-IDF) as well as our own neural word embedding. Using exclusively text, we are able to predict the aforementioned time series with sufficient accuracy to be used to replace missing data. Furthermore the proposed word embeddings display geometric properties relating to the behavior of the time series and context similarity between words. | Whether it is in the field of energy, finance or meteorology, accurately predicting the behavior of time series is nowadays of paramount importance for optimal decision making or profit. While the field of time series forecasting is extremely prolific from a research point-of-view, up to now it has narrowed its efforts on the exploitation of regular numerical features extracted from sensors, data bases or stock exchanges. Unstructured data such as text on the other hand remains underexploited for prediction tasks, despite its potentially valuable informative content. Empirical studies have already proven that textual sources such as news articles or blog entries can be correlated to stock exchange time series and have explanatory power for their variations BIBREF0, BIBREF1. This observation has motivated multiple extensive experiments to extract relevant features from textual documents in different ways and use them for prediction, notably in the field of finance. In Lavrenko et al. BIBREF2, language models (considering only the presence of a word) are used to estimate the probability of trends such as surges or falls of 127 different stock values using articles from Biz Yahoo!. Their results show that this text driven approach could be used to make profit on the market. One of the most conventional ways for text representation is the TF-IDF (Term Frequency - Inverse Document Frequency) approach. Authors have included such features derived from news pieces in multiple traditional machine learning algorithms such as support vector machines (SVM) BIBREF3 or logistic regression BIBREF4 to predict the variations of financial series again. An alternative way to encode the text is through latent Dirichlet allocation (LDA) BIBREF5. It assigns topic probabilities to a text, which can be used as inputs for subsequent tasks. This is for instance the case in Wang's aforementioned work (alongside TF-IDF). In BIBREF6, the authors used Reuters news encoded by LDA to predict if NASDAQ and Dow Jones closing prices increased or decreased compared to the opening ones. Their empirical results show that this approach was efficient to improve the prediction of stock volatility. More recently Kanungsukkasem et al. BIBREF7 introduced a variant of the LDA graphical model, named FinLDA, to craft probabilities that are specifically tailored for a financial time series prediction task (although their approach could be generalized to other ones). 
Their results showed that indeed performance was better when using probabilities from their alternative than those of the original LDA. Deep learning with its natural ability to work with text through word embeddings has also been used for time series prediction with text. Combined with traditional time series features, the authors of BIBREF8 derived sentiment features from a convolutional neural network (CNN) to reduce the prediction error of oil prices. Akita et al. BIBREF9 represented news articles through the use of paragraph vectors BIBREF10 in order to predict 10 closing stock values from the Nikkei 225. While in the case of financial time series the existence of specialized press makes it easy to decide which textual source to use, it is much more tedious in other fields. Recently in Rodrigues et al. BIBREF11, short description of events (such as concerts, sports matches, ...) are leveraged through a word embedding and neural networks in addition to more traditional features. Their experiments show that including the text can bring an improvement of up to 2% of root mean squared error compared to an approach without textual information. Although the presented studies conclude on the usefulness of text to improve predictions, they never thoroughly analyze which aspects of the text are of importance, keeping the models as black-boxes. The field of electricity consumption is one where expert knowledge is broad. It is known that the major phenomena driving the load demand are calendar (time of the year, day of the week, ...) and meteorological. For instance generalized additive models (GAM) BIBREF12 representing the consumption as a sum of functions of the time of the year, temperature and wind speed (among others) typically yield less than 1.5% of relative error for French national electricity demand and 8% for local one BIBREF13, BIBREF14. Neural networks and their variants, with their ability to extract patterns from heterogeneous types of data have also obtained state-of-the-art results BIBREF15, BIBREF16, BIBREF17. However to our knowledge no exploratory work using text has been conducted yet. Including such data in electricity demand forecasting models would not only contribute to close the gap with other domains, but also help to understand better which aspects of text are useful, how the encoding of the text influences forecasts and to which extend a prediction algorithm can extract relevant information from unstructured data. Moreover the major drawback of all the aforementioned approaches is that they require meteorological data that may be difficult to find, unavailable in real time or expensive. Textual sources such as weather reports on the other hand are easy to find, usually available on a daily basis and free. The main contribution of our paper is to suggest the use of a certain type of textual documents, namely daily weather report, to build forecasters of the daily national electricity load, average temperature and wind speed for both France and the United-Kingdom (UK). Consequently this work represents a significant break with traditional methods, and we do not intend to best state-of-the-art approaches. Textual information is naturally more fuzzy than numerical one, and as such the same accuracy is not expected from the presented approaches. With a single text, we were already able to predict the electricity consumption with a relative error of less than 5% for both data sets. 
Furthermore, the quality of our predictions of temperature and wind speed is satisfying enough to replace missing or unavailable data in traditional models. Two different approaches are considered to represent the text numerically, as well as multiple forecasting algorithms. Our empirical results are consistent across encoding, methods and language, thus proving the intrinsic value weather reports have for the prediction of the aforementioned time series. Moreover, a major distinction between previous works is our interpretation of the models. We quantify the impact of a word on the forecast and analyze the geometric properties of the word embedding we trained ourselves. Note that although multiple time series are discussed in our paper, the main focus of this paper remains electricity consumption. As such, emphasis is put on the predictive results on the load demand time series. The rest of this paper is organized as follows. The following section introduces the two data sets used to conduct our study. Section 3 presents the different machine learning approaches used and how they were tuned. Section 4 highlights the main results of our study, while section 5 concludes this paper and gives insight on future possible work. |
149 | What are the state-of-the-art systems? | For sentiment analysis: UWB, INF-UFRGS-OPINION-MINING, LitisMind, pkudblab and SVM + n-grams + sentiment; for emotion analysis: MaxEnt, SVM, LSTM, BiLSTM and CNN | In this paper, we propose a two-layered multi-task attention based neural network that performs sentiment analysis through emotion analysis. The proposed approach is based on Bidirectional Long Short-Term Memory and uses Distributional Thesaurus as a source of external knowledge to improve the sentiment and emotion prediction. The proposed system has two levels of attention to hierarchically build a meaningful representation. We evaluate our system on the benchmark dataset of SemEval 2016 Task 6 and also compare it with the state-of-the-art systems on the Stance Sentiment Emotion Corpus. Experimental results show that the proposed system improves the performance of sentiment analysis by 3.2 F-score points on the SemEval 2016 Task 6 dataset. Our network also boosts the performance of emotion analysis by 5 F-score points on the Stance Sentiment Emotion Corpus. | The emergence of social media sites with limited character constraints has ushered in a new style of communication. Within the 280-character limit per tweet, Twitter users share meaningful and informative messages. These short messages have a powerful impact on how we perceive and interact with other human beings. Their compact nature allows them to be transmitted efficiently and assimilated easily. These short messages can shape people's thoughts and opinions. This makes them an interesting and important area of study. Tweets are not only important for individuals but also for companies, political parties or any organization. Companies can use tweets to gauge the performance of their products and predict market trends BIBREF0. Public opinion is particularly interesting for political parties as it gives them an idea of voters' inclination and their support. Sentiment and emotion analysis can help to gauge product perception, predict stock prices and model public opinions BIBREF1. Sentiment analysis BIBREF2 is an important area of research in natural language processing (NLP) where we automatically determine the sentiment (positive, negative, neutral) of a text. Emotion analysis focuses on the extraction of predefined emotions from documents. Discrete emotions BIBREF3, BIBREF4 are often classified into anger, anticipation, disgust, fear, joy, sadness, surprise and trust. Sentiments and emotions are subjective and hence they are understood similarly and often used interchangeably. This is also mostly because both emotions and sentiments refer to experiences that result from the combined influences of the biological, the cognitive, and the social BIBREF5. However, emotions are brief episodes and are shorter in length BIBREF6, whereas sentiments are formed and retained for a longer period. Moreover, emotions are not always target-centric whereas sentiments are directed. Another difference between emotion and sentiment is that a sentence or a document may contain multiple emotions but a single overall sentiment. Prior studies show that sentiment and emotion are generally tackled as two separate problems. Although sentiment and emotion are not exactly the same, they are closely related. Emotions, like joy and trust, intrinsically have an association with a positive sentiment. Similarly, anger, disgust, fear and sadness have a negative tone. Moreover, sentiment analysis alone is insufficient at times in imparting complete information. 
A negative sentiment can arise due to anger, disgust, fear, sadness or a combination of these. Information about emotion along with sentiment helps to better understand the state of the person or object. The close association of emotion with sentiment motivates us to build a system for sentiment analysis using the information obtained from emotion analysis. In this paper, we put forward a robust two-layered multi-task attention based neural network which performs sentiment analysis and emotion analysis simultaneously. The model uses two levels of attention - the first primary attention builds the best representation for each word using Distributional Thesaurus and the secondary attention mechanism creates the final sentence level representation. The system builds the representation hierarchically which gives it a good intuitive working insight. We perform several experiments to evaluate the usefulness of primary attention mechanism. Experimental results show that the two-layered multi-task system for sentiment analysis which uses emotion analysis as an auxiliary task improves over the existing state-of-the-art system of SemEval 2016 Task 6 BIBREF7. The main contributions of the current work are two-fold: a) We propose a novel two-layered multi-task attention based system for joint sentiment and emotion analysis. This system has two levels of attention which builds a hierarchical representation. This provides an intuitive explanation of its working; b) We empirically show that emotion analysis is relevant and useful in sentiment analysis. The multi-task system utilizing fine-grained information of emotion analysis performs better than the single task system of sentiment analysis. |
151 | What is the size of their collected dataset? | 3347 unique utterances | Understanding passenger intents and extracting relevant slots are important building blocks towards developing a contextual dialogue system responsible for handling certain vehicle-passenger interactions in autonomous vehicles (AV). When the passengers give instructions to AMIE (Automated-vehicle Multimodal In-cabin Experience), the agent should parse such commands properly and trigger the appropriate functionality of the AV system. In our AMIE scenarios, we describe usages and support various natural commands for interacting with the vehicle. We collected a multimodal in-cabin data-set with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme. We explored various recent Recurrent Neural Networks (RNN) based techniques and built our own hierarchical models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results achieved F1-score of 0.91 on utterance-level intent recognition and 0.96 on slot extraction models. | Understanding passenger intents and extracting relevant slots are important building blocks towards developing a contextual dialogue system responsible for handling certain vehicle-passenger interactions in autonomous vehicles (AV). When the passengers give instructions to AMIE (Automated-vehicle Multimodal In-cabin Experience), the agent should parse such commands properly and trigger the appropriate functionality of the AV system. In our AMIE scenarios, we describe usages and support various natural commands for interacting with the vehicle. We collected a multimodal in-cabin data-set with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme. We explored various recent Recurrent Neural Networks (RNN) based techniques and built our own hierarchical models to recognize passenger intents along with relevant slots associated with the action to be performed in AV scenarios. Our experimental results achieved F1-score of 0.91 on utterance-level intent recognition and 0.96 on slot extraction models. |
152 | What kind of features are used by the HMM models, and how interpretable are those? | A continuous emission HMM uses the hidden states of a 2-layer LSTM as features and a discrete emission HMM uses data as features.
The interpretability of the model is shown in Figure 2. | As deep neural networks continue to revolutionize various application domains, there is increasing interest in making these powerful models more understandable and interpretable, and narrowing down the causes of good and bad predictions. We focus on recurrent neural networks (RNNs), state of the art models in speech recognition and translation. Our approach to increasing interpretability is by combining an RNN with a hidden Markov model (HMM), a simpler and more transparent model. We explore various combinations of RNNs and HMMs: an HMM trained on LSTM states; a hybrid model where an HMM is trained first, then a small LSTM is given HMM state distributions and trained to fill in gaps in the HMM's performance; and a jointly trained hybrid model. We find that the LSTM and HMM learn complementary information about the features in the text. | Following the recent progress in deep learning, researchers and practitioners of machine learning are recognizing the importance of understanding and interpreting what goes on inside these black box models. Recurrent neural networks have recently revolutionized speech recognition and translation, and these powerful models could be very useful in other applications involving sequential data. However, adoption has been slow in applications such as health care, where practitioners are reluctant to let an opaque expert system make crucial decisions. If we can make the inner workings of RNNs more interpretable, more applications can benefit from their power. There are several aspects of what makes a model or algorithm understandable to humans. One aspect is model complexity or parsimony. Another aspect is the ability to trace back from a prediction or model component to particularly influential features in the data BIBREF0 BIBREF1 . This could be useful for understanding mistakes made by neural networks, which have human-level performance most of the time, but can perform very poorly on seemingly easy cases. For instance, convolutional networks can misclassify adversarial examples with very high confidence BIBREF2 , and made headlines in 2015 when the image tagging algorithm in Google Photos mislabeled African Americans as gorillas. It's reasonable to expect recurrent networks to fail in similar ways as well. It would thus be useful to have more visibility into where these sorts of errors come from, i.e. which groups of features contribute to such flawed predictions. Several promising approaches to interpreting RNNs have been developed recently. BIBREF3 have approached this by using gradient boosting trees to predict LSTM output probabilities and explain which features played a part in the prediction. They do not model the internal structure of the LSTM, but instead approximate the entire architecture as a black box. BIBREF4 showed that in LSTM language models, around 10% of the memory state dimensions can be interpreted with the naked eye by color-coding the text data with the state values; some of them track quotes, brackets and other clearly identifiable aspects of the text. Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ). 
We explore several methods for building interpretable models by combining LSTMs and HMMs. The existing body of literature mostly focuses on methods that specifically train the RNN to predict HMM states BIBREF5 or posteriors BIBREF6 , referred to as hybrid or tandem methods respectively. We first investigate an approach that does not require the RNN to be modified in order to make it understandable, as the interpretation happens after the fact. Here, we model the big picture of the state changes in the LSTM, by extracting the hidden states and approximating them with a continuous emission hidden Markov model (HMM). We then take the reverse approach where the HMM state probabilities are added to the output layer of the LSTM (see Figure 1 ). The LSTM model can then make use of the information from the HMM, and fill in the gaps when the HMM is not performing well, resulting in an LSTM with a smaller number of hidden state dimensions that could be interpreted individually (Figures 3 , 3 ). |
153 | What was their system's F1 performance? | The proposed model achieves F1 scores of 0.86, 0.924, and 0.71 on the SR, HATE, and HAR datasets, respectively. | We present a neural-network based approach to classifying online hate speech in general, as well as racist and sexist speech in particular. Using pre-trained word embeddings and max/mean pooling from simple, fully-connected transformations of these embeddings, we are able to predict the occurrence of hate speech on three commonly used publicly available datasets. Our models match or outperform state of the art F1 performance on all three datasets using significantly fewer parameters and minimal feature preprocessing compared to previous methods. | The increasing popularity of social media platforms like Twitter for both personal and political communication BIBREF0 has seen a well-acknowledged rise in the presence of toxic and abusive speech on these platforms BIBREF1 , BIBREF2 . Although the terms of service on these platforms typically forbid hateful and harassing speech, enforcing these rules has proved challenging, as identifying hate speech at scale is still a largely unsolved problem in the NLP community. BIBREF3 , for example, identify many ambiguities in classifying abusive communications, and highlight the difficulty of clearly defining the parameters of such speech. This problem is compounded by the fact that identifying abusive or harassing speech is a challenge for humans as well as automated systems. Despite the lack of consensus around what constitutes abusive speech, some definition of hate speech must be used to build automated systems to address it. We rely on BIBREF4 's definition of hate speech, specifically: “language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group.” In this paper, we present a neural classification system that uses minimal preprocessing to take advantage of a modified Simple Word Embeddings-based Model BIBREF5 to predict the occurrence of hate speech. Our classifier features: In the following sections, we discuss related work on hate speech classification, followed by a description of the datasets, methods and results of our study. |
155 | How was quality measured? | Inter-annotator agreement, comparison against expert annotation, agreement with PropBank Data annotations. | Question-answer driven Semantic Role Labeling (QA-SRL) has been proposed as an attractive open and natural form of SRL, easily crowdsourceable for new corpora. Recently, a large-scale QA-SRL corpus and a trained parser were released, accompanied by a densely annotated dataset for evaluation. Trying to replicate the QA-SRL annotation and evaluation scheme for new texts, we observed that the resulting annotations were lacking in quality and coverage, particularly insufficient for creating gold standards for evaluation. In this paper, we present an improved QA-SRL annotation protocol, involving crowd-worker selection and training, followed by data consolidation. Applying this process, we release a new gold evaluation dataset for QA-SRL, yielding more consistent annotations and greater coverage. We believe that our new annotation protocol and gold standard will facilitate future replicable research of natural semantic annotations. | Semantic Role Labeling (SRL) provides explicit annotation of predicate-argument relations, which have been found useful in various downstream tasks BIBREF0, BIBREF1, BIBREF2, BIBREF3. Question-Answer driven Semantic Role Labeling (QA-SRL) BIBREF4 is an SRL scheme in which roles are captured by natural language questions, while arguments represent their answers, making the annotations intuitive, semantically rich, and easily attainable by laymen. For example, in Table TABREF4, the question Who cut something captures the traditional “agent” role. Previous attempts to annotate QA-SRL initially involved trained annotators BIBREF4 but later resorted to crowdsourcing BIBREF5 to achieve scalability. Naturally, employing crowd workers raises challenges when annotating semantic structures like SRL. As BIBREF5 acknowledged, the main shortage of the large-scale 2018 dataset is the lack of recall, estimated by experts to be in the lower 70s. In light of this and other annotation inconsistencies, we propose an improved QA-SRL crowdsourcing protocol for high-quality annotation, allowing for substantially more reliable performance evaluation of QA-SRL parsers. To address worker quality, we systematically screen workers, provide concise yet effective guidelines, and perform a short training procedure, all within a crowd-sourcing platform. To address coverage, we employ two independent workers plus an additional one for consolidation — similar to conventional expert-annotation practices. In addition to yielding 25% more roles, our coverage gain is demonstrated by evaluating against expertly annotated data and comparison with PropBank (Section SECREF4). To foster future research, we release an assessed high-quality gold dataset along with our reproducible protocol and evaluation scheme, and report the performance of the existing parser BIBREF5 as a baseline. |
156 | What data were they used to train the multilingual encoder? | WMT 2014 En-Fr parallel corpus | Transferring representations from large supervised tasks to downstream tasks has shown promising results in AI fields such as Computer Vision and Natural Language Processing (NLP). In parallel, the recent progress in Machine Translation (MT) has enabled one to train multilingual Neural MT (NMT) systems that can translate between multiple languages and are also capable of performing zero-shot translation. However, little attention has been paid to leveraging representations learned by a multilingual NMT system to enable zero-shot multilinguality in other NLP tasks. In this paper, we demonstrate a simple framework, a multilingual Encoder-Classifier, for cross-lingual transfer learning by reusing the encoder from a multilingual NMT system and stitching it with a task-specific classifier component. Our proposed model achieves significant improvements in the English setup on three benchmark tasks - Amazon Reviews, SST and SNLI. Further, our system can perform classification in a new language for which no classification data was seen during training, showing that zero-shot classification is possible and remarkably competitive. In order to understand the underlying factors contributing to this finding, we conducted a series of analyses on the effect of the shared vocabulary, the training data type for NMT, classifier complexity, encoder representation power, and model generalization on zero-shot performance. Our results provide strong evidence that the representations learned from multilingual NMT systems are widely applicable across languages and tasks. | Transfer learning has been shown to work well in Computer Vision where pre-trained components from a model trained on ImageNet BIBREF0 are used to initialize models for other tasks BIBREF1 . In most cases, the other tasks are related to and share architectural components with the ImageNet task, enabling the use of such pre-trained models for feature extraction. With this transfer capability, improvements have been obtained on other image classification datasets, and on other tasks such as object detection, action recognition, image segmentation, etc BIBREF2 . Analogously, we propose a method to transfer a pre-trained component - the multilingual encoder from an NMT system - to other NLP tasks. In NLP, initializing word embeddings with pre-trained word representations obtained from Word2Vec BIBREF3 or GloVe BIBREF4 has become a common way of transferring information from large unlabeled data to downstream tasks. Recent work has further shown that we can improve over this approach significantly by considering representations in context, i.e. modeled depending on the sentences that contain them, either by taking the outputs of an encoder in MT BIBREF5 or by obtaining representations from the internal states of a bi-directional Language Model (LM) BIBREF6 . There has also been successful recent work in transferring sentence representations from resource-rich tasks to improve resource-poor tasks BIBREF7 , however, most of the above transfer learning examples have focused on transferring knowledge across tasks for a single language, in English. Cross-lingual or multilingual NLP, the task of transferring knowledge from one language to another, serves as a good test bed for evaluating various transfer learning approaches. For cross-lingual NLP, the most widely studied approach is to use multilingual embeddings as features in neural network models. 
However, research has shown that representations learned in context are more effective BIBREF5 , BIBREF6 ; therefore, we aim to do better than just using multilingual embeddings in cross-lingual tasks. Recent progress in multilingual NMT provides a compelling opportunity for obtaining contextualized multilingual representations, as multilingual NMT systems are capable of generalizing to an unseen language direction, i.e. zero-shot translation. There is also evidence that the encoder of a multilingual NMT system learns language-agnostic, universal interlingua representations, which can be further exploited BIBREF8 . In this paper, we focus on using the representations obtained from a multilingual NMT system to enable cross-lingual transfer learning on downstream NLP tasks. Our contributions are three-fold: |
157 | From when are many VQA datasets collected? | late 2014 | In visual question answering (VQA), an algorithm must answer text-based questions about images. While multiple datasets for VQA have been created since late 2014, they all have flaws in both their content and the way algorithms are evaluated on them. As a result, evaluation scores are inflated and predominantly determined by answering easier questions, making it difficult to compare different methods. In this paper, we analyze existing VQA algorithms using a new dataset. It contains over 1.6 million questions organized into 12 different categories. We also introduce questions that are meaningless for a given image to force a VQA system to reason about image content. We propose new evaluation schemes that compensate for over-represented question-types and make it easier to study the strengths and weaknesses of algorithms. We analyze the performance of both baseline and state-of-the-art VQA models, including multi-modal compact bilinear pooling (MCB), neural module networks, and recurrent answering units. Our experiments establish how attention helps certain categories more than others, determine which models work better than others, and explain how simple models (e.g. MLP) can surpass more complex models (MCB) by simply learning to answer large, easy question categories. | In open-ended visual question answering (VQA) an algorithm must produce answers to arbitrary text-based questions about images BIBREF0 , BIBREF1 . VQA is an exciting computer vision problem that requires a system to be capable of many tasks. Truly solving VQA would be a milestone in artificial intelligence, and would significantly advance human computer interaction. However, VQA datasets must test a wide range of abilities for progress to be adequately measured. VQA research began in earnest in late 2014 when the DAQUAR dataset was released BIBREF0 . Including DAQUAR, six major VQA datasets have been released, and algorithms have rapidly improved. On the most popular dataset, `The VQA Dataset' BIBREF1 , the best algorithms are now approaching 70% accuracy BIBREF2 (human performance is 83%). While these results are promising, there are critical problems with existing datasets in terms of multiple kinds of biases. Moreover, because existing datasets do not group instances into meaningful categories, it is not easy to compare the abilities of individual algorithms. For example, one method may excel at color questions compared to answering questions requiring spatial reasoning. Because color questions are far more common in the dataset, an algorithm that performs well at spatial reasoning will not be appropriately rewarded for that feat due to the evaluation metrics that are used. Contributions: Our paper has four major contributions aimed at better analyzing and comparing VQA algorithms: 1) We create a new VQA benchmark dataset where questions are divided into 12 different categories based on the task they solve; 2) We propose two new evaluation metrics that compensate for forms of dataset bias; 3) We balance the number of yes/no object presence detection questions to assess whether a balanced distribution can help algorithms learn better; and 4) We introduce absurd questions that force an algorithm to determine if a question is valid for a given image. We then use the new dataset to re-train and evaluate both baseline and state-of-the-art VQA algorithms. 
We found that our proposed approach enables more nuanced comparisons of VQA algorithms and helps us better understand the benefits of specific techniques. In addition, it allows us to answer several key questions about VQA algorithms, such as: 'Is the generalization capacity of the algorithms hindered by the bias in the dataset?', 'Does the use of spatial attention help answer specific question-types?', 'How successful are the VQA algorithms in answering less-common questions?', and 'Can the VQA algorithms differentiate between real and absurd questions?' |
158 | Does proposed end-to-end approach learn in reinforcement or supervised learning manner? | supervised learning | In this work we propose a novel end-to-end imitation learning approach which combines natural language, vision, and motion information to produce an abstract representation of a task, which in turn is used to synthesize specific motion controllers at run-time. This multimodal approach enables generalization to a wide variety of environmental conditions and allows an end-user to direct a robot policy through verbal communication. We empirically validate our approach with an extensive set of simulations and show that it achieves a high task success rate over a variety of conditions while remaining amenable to probabilistic interpretability. | A significant challenge when designing robots to operate in the real world lies in the generation of control policies that can adapt to changing environments. Programming such policies is a labor and time-consuming process which requires substantial technical expertise. Imitation learning BIBREF0, is an appealing methodology that aims at overcoming this challenge – instead of complex programming, the user only provides a set of demonstrations of the intended behavior. These demonstrations are consequently distilled into a robot control policy by learning appropriate parameter settings of the controller. Popular approaches to imitation, such as Dynamic Motor Primitives (DMPs) BIBREF1 or Gaussian Mixture Regression (GMR) BIBREF2 largely focus on motion as the sole input and output modality, i.e., joint angles, forces or positions. Critical semantic and visual information regarding the task, such as the appearance of the target object or the type of task performed, is not taken into account during training and reproduction. The result is often a limited generalization capability which largely revolves around adaptation to changes in the object position. While imitation learning has been successfully applied to a wide range of tasks including table-tennis BIBREF3, locomotion BIBREF4, and human-robot interaction BIBREF5 an important question is how to incorporate language and vision into a differentiable end-to-end system for complex robot control. In this paper, we present an imitation learning approach that combines language, vision, and motion in order to synthesize natural language-conditioned control policies that have strong generalization capabilities while also capturing the semantics of the task. We argue that such a multi-modal teaching approach enables robots to acquire complex policies that generalize to a wide variety of environmental conditions based on descriptions of the intended task. In turn, the network produces control parameters for a lower-level control policy that can be run on a robot to synthesize the corresponding motion. The hierarchical nature of our approach, i.e., a high-level policy generating the parameters of a lower-level policy, allows for generalization of the trained task to a variety of spatial, visual and contextual changes. |
163 | How big is the dataset used for training this system? | For the question generation model, 15,000 images with 75,000 questions. For the chatbot model, around 460k utterances over 230k dialogues. | With people living longer than ever, the number of cases of dementia such as Alzheimer's disease increases steadily. It affects more than 46 million people worldwide, and it is estimated that in 2050 more than 100 million will be affected. While there are no effective treatments for these terminal diseases, therapies such as reminiscence, which stimulate memories from the past, are recommended. Currently, reminiscence therapy takes place in care homes and is guided by a therapist or a carer. In this work, we present an AI-based solution to automatize the reminiscence therapy, which consists of a dialogue system that uses photos as input to generate questions. We run a usability case study with patients diagnosed with mild cognitive impairment that shows they found the system very entertaining and challenging. Overall, this paper presents how reminiscence therapy can be automatized by using machine learning, and deployed to smartphones and laptops, making the therapy more accessible to every person affected by dementia. | Increases in life expectancy in the last century have resulted in a large number of people living to old ages and will result in a doubling of dementia cases by the middle of the century BIBREF0, BIBREF1. The most common form of dementia is Alzheimer's disease, which contributes to 60–70% of cases BIBREF2. Research focused on identifying treatments to slow down the evolution of Alzheimer's disease is a very active pursuit, but it has only been successful in developing therapies that ease the symptoms without addressing the cause BIBREF3, BIBREF4. Besides, people with dementia might face barriers to accessing the therapies, such as cost, availability and displacement to the care home or hospital, where the therapy takes place. We believe that Artificial Intelligence (AI) can contribute innovative systems that improve accessibility and offer new solutions to the patients' needs, as well as help relatives and caregivers to understand the illness of their family member or patient and monitor the progress of the dementia. Therapies such as reminiscence, which stimulate memories of the patient's past, have well-documented benefits on social, mental and emotional well-being BIBREF5, BIBREF6, making them a very desirable practice, especially for older adults. Reminiscence therapy in particular involves the discussion of events and past experiences using tangible prompts such as pictures or music to evoke memories and stimulate conversation BIBREF7. With this aim, we explore multi-modal deep learning architectures to develop an intuitive, easy-to-use, and robust dialogue system that automatizes reminiscence therapy for people affected by mild cognitive impairment or at early stages of Alzheimer's disease. We propose a conversational agent that simulates a reminiscence therapist by asking questions about the patient's experiences. Questions are generated from pictures provided by the patient, which contain significant moments or important people in the user's life. Moreover, to engage the user in the conversation we propose a second model which generates comments on the user's answers. This is a chatbot model trained with a dataset containing simple conversations between different people. 
The activity is intended to be challenging for the patient, as the questions may require the user to exercise their memory. Our contributions include: Automation of reminiscence therapy by using a multi-modal approach that generates questions from pictures, without using a reminiscence therapy dataset. An end-to-end deep learning approach which does not require hand-crafted rules and is ready to be used by mild cognitive impairment patients. The system is designed to be intuitive and easy for users to use, and can be accessed from any smartphone with an internet connection. |
164 | How do they obtain word lattices from words? | By considering words as vertices and generating directed edges between neighboring words within a sentence | Short text matching often faces the challenges that there are great word mismatch and expression diversity between the two texts, which would be further aggravated in languages like Chinese where there is no natural space to segment words explicitly. In this paper, we propose a novel lattice based CNN model (LCNs) to utilize multi-granularity information inherent in the word lattice while maintaining strong ability to deal with the introduced noisy information for matching based question answering in Chinese. We conduct extensive experiments on both document based question answering and knowledge based question answering tasks, and experimental results show that the LCNs models can significantly outperform the state-of-the-art matching models and strong baselines by taking advantages of better ability to distill rich but discriminative information from the word lattice input. | Short text matching plays a critical role in many natural language processing tasks, such as question answering, information retrieval, and so on. However, matching text sequences for Chinese or similar languages often suffers from word segmentation, where there are often no perfect Chinese word segmentation tools that suit every scenario. Text matching usually requires to capture the relatedness between two sequences in multiple granularities. For example, in Figure FIGREF4 , the example phrase is generally tokenized as “China – citizen – life – quality – high”, but when we plan to match it with “Chinese – live – well”, it would be more helpful to have the example segmented into “Chinese – livelihood – live” than its common segmentation. Existing efforts use neural network models to improve the matching based on the fact that distributed representations can generalize discrete word features in traditional bag-of-words methods. And there are also works fusing word level and character level information, which, to some extent, could relieve the mismatch between different segmentations, but these solutions still suffer from the original word sequential structures. They usually depend on an existing word tokenization, which has to make segmentation choices at one time, e.g., “ZhongGuo”(China) and “ZhongGuoRen”(Chinese) when processing “ZhongGuoRenMin”(Chinese people). And the blending just conducts at one position in their frameworks. Specific tasks such as question answering (QA) could pose further challenges for short text matching. In document based question answering (DBQA), the matching degree is expected to reflect how likely a sentence can answer a given question, where questions and candidate answer sentences usually come from different sources, and may exhibit significantly different styles or syntactic structures, e.g. queries in web search and sentences in web pages. This could further aggravate the mismatch problems. In knowledge based question answering (KBQA), one of the key tasks is to match relational expressions in questions with knowledge base (KB) predicate phrases, such as “ZhuCeDi”(place of incorporation). Here the diversity between the two kinds of expressions is even more significant, where there may be dozens of different verbal expressions in natural language questions corresponding to only one KB predicate phrase. Those expression problems make KBQA a further tough task. 
Previous works BIBREF0 , BIBREF1 adopt letter-trigrams for the diverse expressions, which is similar to the character level of Chinese. And the lattices are combinations of words and characters, so with lattices, we can utilize word information at the same time. Recent advances have put effort into modeling multi-granularity information for matching. BIBREF2 , BIBREF3 blend words and characters into a simple sequence (at the word level), and BIBREF4 utilize multiple convolutional kernel sizes to capture different n-grams. But most characters in Chinese can be seen as words on their own, so combining characters with corresponding words directly may lose the meanings that those characters can express alone. Because of the sequential inputs, they will either lose word-level information when operating on character sequences or have to make segmentation choices. In this paper, we propose a multi-granularity method for short text matching in Chinese question answering which utilizes lattice-based CNNs to extract sentence-level features over a word lattice. Specifically, instead of relying on character or word level sequences, LCNs take word lattices as input, where every possible word and character will be treated equally and have their own context so that they can interact at every layer. For each word in each layer, LCNs can capture different context words at different granularities via pooling methods. To the best of our knowledge, we are the first to introduce word lattices into text matching tasks. Because of the similar IO structures to original CNNs and the high efficiency, LCNs can be easily adapted to more scenarios where flexible sentence representation modeling is required. We evaluate our LCNs models on two question answering tasks, document based question answering and knowledge based question answering, both in Chinese. Experimental results show that LCNs significantly outperform the state-of-the-art matching methods and other competitive CNNs baselines in both scenarios. We also find that LCNs can better capture the multi-granularity information from plain sentences, and, meanwhile, maintain better de-noising capability than vanilla graphic convolutional neural networks thanks to their dynamic convolutional kernels and gated pooling mechanism. |
167 | How much better is the proposed method than the baselines perplexity-wise? | The perplexity of the proposed MEED model is 19.795 vs. 19.913 for the next best result on the test set. | Open-domain dialog systems (also known as chatbots) have increasingly drawn attention in natural language processing. Some of the recent work aims at incorporating affect information into sequence-to-sequence neural dialog modeling, making the response emotionally richer, while others use hand-crafted rules to determine the desired emotion response. However, they do not explicitly learn the subtle emotional interactions captured in human dialogs. In this paper, we propose a multi-turn dialog system aimed at learning and generating emotional responses that so far only humans know how to do. Compared with two baseline models, offline experiments show that our method performs the best in perplexity scores. Further human evaluations confirm that our chatbot can keep track of the conversation context and generate emotionally more appropriate responses while performing equally well on grammar. | Recent development in neural language modeling has generated significant excitement in the open-domain dialog generation community. The success of sequence-to-sequence learning BIBREF0, BIBREF1 in the field of neural machine translation has inspired researchers to apply the recurrent neural network (RNN) encoder-decoder structure to response generation BIBREF2. Specifically, the encoder RNN reads the input message, encodes it into a fixed context vector, and the decoder RNN uses it to generate the response. Shang et al. BIBREF3 applied the same structure combined with attention mechanism BIBREF4 on Twitter-style microblogging data. Following the vanilla sequence-to-sequence structure, various improvements have been made on the neural conversation model—for example, increasing the diversity of the response BIBREF5, BIBREF6, modeling personalities of the speakers BIBREF7, and developing topic aware dialog systems BIBREF8. Some of the recent work aims at incorporating affect information into neural conversational models. While making the responses emotionally richer, existing approaches either explicitly require an emotion label as input BIBREF9, or rely on hand-crafted rules to determine the desired emotion responses BIBREF10, BIBREF11, ignoring the subtle emotional interactions captured in multi-turn conversations, which we believe to be an important aspect of human dialogs. For example, Gottman BIBREF12 found that couples are likely to practice so-called emotional reciprocity. When an argument starts, one partner's angry and aggressive utterance is often met with an equally furious and negative utterance, resulting in more heated exchanges. On the other hand, responding with complementary emotions (such as reassurance and sympathy) is more likely to lead to a successful relationship. However, to the best of our knowledge, the psychology and social science literature does not offer clear rules for emotional interaction. It seems such social and emotional intelligence is captured in our conversations. This is why we believe that the data-driven approach will have an advantage. In this paper, we propose an end-to-end data-driven multi-turn dialog system capable of learning and generating emotionally appropriate and human-like responses with the ultimate goal of reproducing social behaviors that are habitual in human-human conversations. We chose the multi-turn setting because only in such cases is the emotion appropriateness most necessary. 
To this end, we employ the latest multi-turn dialog model by Xing et al. BIBREF13, but we add an additional emotion RNN to process the emotional information in each history utterance. By leveraging an external text analysis program, we encode the emotion aspects of each utterance into a fixed-sized one-zero vector. This emotion RNN reads and encodes the input affect information, and then uses the final hidden state as the emotion representation vector for the context. When decoding, at each time step, this emotion vector is concatenated with the hidden state of the decoder and passed to the softmax layer to produce the probability distribution over the vocabulary. Thereby, our contributions are threefold. (1) We propose a novel emotion-tracking dialog generation model that learns the emotional interactions directly from the data. This approach is free of human-defined heuristic rules, and hence, is more robust and fundamental than those described in existing work BIBREF9, BIBREF10, BIBREF11. (2) We apply the emotion-tracking mechanism to multi-turn dialogs, which has never been attempted before. Human evaluation shows that our model produces responses that are emotionally more appropriate than the baselines, while slightly improving the language fluency. (3) We illustrate a human-evaluation approach for judging machine-produced emotional dialogs. We consider factors such as the balance of positive and negative sentiments in test dialogs, a well-chosen range of topics, and dialogs that our human evaluators can relate. It is the first time such an approach is designed with consideration for the human judges. Our main goal is to increase the objectivity of the results and reduce judges' mistakes due to out-of-context dialogs they have to evaluate. The rest of the paper unfolds as follows. Section SECREF2 discusses some related work. In Section SECREF3, we give detailed description of the methodology. We present experimental results and some analysis in Section SECREF4. The paper is concluded in Section SECREF5, followed by some future work we plan to do. |
169 | Who manually annotated the semantic roles for the set of learner texts? | Authors | This paper studies semantic parsing for interlanguage (L2), taking semantic role labeling (SRL) as a case task and learner Chinese as a case language. We first manually annotate the semantic roles for a set of learner texts to derive a gold standard for automatic SRL. Based on the new data, we then evaluate three off-the-shelf SRL systems, i.e., the PCFGLA-parser-based, neural-parser-based and neural-syntax-agnostic systems, to gauge how successful SRL for learner Chinese can be. We find two non-obvious facts: 1) the L1-sentence-trained systems performs rather badly on the L2 data; 2) the performance drop from the L1 data to the L2 data of the two parser-based systems is much smaller, indicating the importance of syntactic parsing in SRL for interlanguages. Finally, the paper introduces a new agreement-based model to explore the semantic coherency information in the large-scale L2-L1 parallel data. We then show such information is very effective to enhance SRL for learner texts. Our model achieves an F-score of 72.06, which is a 2.02 point improvement over the best baseline. | A learner language (interlanguage) is an idiolect developed by a learner of a second or foreign language which may preserve some features of his/her first language. Previously, encouraging results of automatically building the syntactic analysis of learner languages were reported BIBREF0 , but it is still unknown how semantic processing performs, while parsing a learner language (L2) into semantic representations is the foundation of a variety of deeper analysis of learner languages, e.g., automatic essay scoring. In this paper, we study semantic parsing for interlanguage, taking semantic role labeling (SRL) as a case task and learner Chinese as a case language. Before discussing a computation system, we first consider the linguistic competence and performance. Can human robustly understand learner texts? Or to be more precise, to what extent, a native speaker can understand the meaning of a sentence written by a language learner? Intuitively, the answer is towards the positive side. To validate this, we ask two senior students majoring in Applied Linguistics to carefully annotate some L2-L1 parallel sentences with predicate–argument structures according to the specification of Chinese PropBank BIBREF1 , which is developed for L1. A high inter-annotator agreement is achieved, suggesting the robustness of language comprehension for L2. During the course of semantic annotation, we find a non-obvious fact that we can re-use the semantic annotation specification, Chinese PropBank in our case, which is developed for L1. Only modest rules are needed to handle some tricky phenomena. This is quite different from syntactic treebanking for learner sentences, where defining a rich set of new annotation heuristics seems necessary BIBREF2 , BIBREF0 , BIBREF3 . Our second concern is to mimic the human's robust semantic processing ability by computer programs. The feasibility of reusing the annotation specification for L1 implies that we can reuse standard CPB data to train an SRL system to process learner texts. To test the robustness of the state-of-the-art SRL algorithms, we evaluate two types of SRL frameworks. The first one is a traditional SRL system that leverages a syntactic parser and heavy feature engineering to obtain explicit information of semantic roles BIBREF4 . 
Furthermore, we employ two different parsers for comparison: 1) the PCFGLA-based parser, viz. Berkeley parser BIBREF5 , and 2) a minimal span-based neural parser BIBREF6 . The other SRL system uses a stacked BiLSTM to implicitly capture local and non-local information BIBREF7 . and we call it the neural syntax-agnostic system. All systems can achieve state-of-the-art performance on L1 texts but show a significant degradation on L2 texts. This highlights the weakness of applying an L1-sentence-trained system to process learner texts. While the neural syntax-agnostic system obtains superior performance on the L1 data, the two syntax-based systems both produce better analyses on the L2 data. Furthermore, as illustrated in the comparison between different parsers, the better the parsing results we get, the better the performance on L2 we achieve. This shows that syntactic parsing is important in semantic construction for learner Chinese. The main reason, according to our analysis, is that the syntax-based system may generate correct syntactic analyses for partial grammatical fragments in L2 texts, which provides crucial information for SRL. Therefore, syntactic parsing helps build more generalizable SRL models that transfer better to new languages, and enhancing syntactic parsing can improve SRL to some extent. Our last concern is to explore the potential of a large-scale set of L2-L1 parallel sentences to enhance SRL systems. We find that semantic structures of the L2-L1 parallel sentences are highly consistent. This inspires us to design a novel agreement-based model to explore such semantic coherency information. In particular, we define a metric for comparing predicate–argument structures and searching for relatively good automatic syntactic and semantic annotations to extend the training data for SRL systems. Experiments demonstrate the value of the L2-L1 parallel sentences as well as the effectiveness of our method. We achieve an F-score of 72.06, which is a 2.02 percentage point improvement over the best neural-parser-based baseline. To the best of our knowledge, this is the first time that the L2-L1 parallel data is utilized to enhance NLP systems for learner texts. For research purpose, we have released our SRL annotations on 600 sentence pairs and the L2-L1 parallel dataset . |
170 | How do they obtain region descriptions and object annotations? | they are available in the Visual Genome dataset | A key aspect of VQA models that are interpretable is their ability to ground their answers to relevant regions in the image. Current approaches with this capability rely on supervised learning and human annotated groundings to train attention mechanisms inside the VQA architecture. Unfortunately, obtaining human annotations specific for visual grounding is difficult and expensive. In this work, we demonstrate that we can effectively train a VQA architecture with grounding supervision that can be automatically obtained from available region descriptions and object annotations. We also show that our model trained with this mined supervision generates visual groundings that achieve a higher correlation with respect to manually-annotated groundings, meanwhile achieving state-of-the-art VQA accuracy. | We are interested in the problem of visual question answering (VQA), where an algorithm is presented with an image and a question that is formulated in natural language and relates to the contents of the image. The goal of this task is to get the algorithm to correctly answer the question. The VQA task has recently received significant attention from the computer vision community, in particular because obtaining high accuracies would presumably require precise understanding of both natural language as well as visual stimuli. In addition to serving as a milestone towards visual intelligence, there are practical applications such as development of tools for the visually impaired. The problem of VQA is challenging due to the complex interplay between the language and visual modalities. On one hand, VQA algorithms must be able to parse and interpret the input question, which is provided in natural language BIBREF0 , BIBREF1 , BIBREF2 . This may potentially involve understanding of nouns, verbs and other linguistic elements, as well as their visual significance. On the other hand, the algorithms must analyze the image to identify and recognize the visual elements relevant to the question. Furthermore, some questions may refer directly to the contents of the image, but may require external, common sense knowledge to be answered correctly. Finally, the algorithms should generate a textual output in natural language that correctly answers the input visual question. In spite of the recent research efforts to address these challenges, the problem remains largely unsolved BIBREF3 . We are particularly interested in giving VQA algorithms the ability to identify the visual elements that are relevant to the question. In the VQA literature, such ability has been implemented by attention mechanisms. Such attention mechanisms generate a heatmap over the input image, which highlights the regions of the image that lead to the answer. These heatmaps are interpreted as groundings of the answer to the most relevant areas of the image. Generally, these mechanisms have either been considered as latent variables for which there is no supervision, or have been treated as output variables that receive direct supervision from human annotations. Unfortunately, both of these approaches have disadvantages. First, unsupervised training of attention tends to lead to models that cannot ground their decision in the image in a human interpretable manner. 
Second, supervised training of attention is difficult and expensive: human annotators may consider different regions to be relevant for the question at hand, which entails ambiguity and increased annotation cost. Our goal is to leverage the best of both worlds by providing VQA algorithms with interpretable grounding of their answers, without the need of direct and explicit manual annotation of attention. From a practical point of view, as autonomous machines are increasingly finding real world applications, there is an increasing need to provide them with suitable capabilities to explain their decisions. However, in most applications, including VQA, current state-of-the-art techniques operate as black-box models that are usually trained using a discriminative approach. Similarly to BIBREF4 , in this work we show that, in the context of VQA, such approaches lead to internal representations that do not capture the underlying semantic relations between textual questions and visual information. Consequently, as we show in this work, current state-of-the-art approaches for VQA are not able to support their answers with a suitable interpretable representation. In this work, we introduce a methodology that provides VQA algorithms with the ability to generate human interpretable attention maps which effectively ground the answer to the relevant image regions. We accomplish this by leveraging region descriptions and object annotations available in the Visual Genome dataset, and using these to automatically construct attention maps that can be used for attention supervision, instead of requiring human annotators to manually provide grounding labels. Our framework achieves competitive state-of-the-art VQA performance, while generating visual groundings that outperform other algorithms that use human annotated attention during training. The contributions of this paper are: (1) we introduce a mechanism to automatically obtain meaningful attention supervision from both region descriptions and object annotations in the Visual Genome dataset; (2) we show that by using the prediction of region and object label attention maps as auxiliary tasks in a VQA application, it is possible to obtain more interpretable intermediate representations. (3) we experimentally demonstrate state-of-the-art performances in VQA benchmarks as well as visual grounding that closely matches human attention annotations. |
171 | Which training dataset allowed for the best generalization to benchmark sets? | MultiNLI | Neural network models have been very successful in natural language inference, with the best models reaching 90% accuracy in some benchmarks. However, the success of these models turns out to be largely benchmark specific. We show that models trained on a natural language inference dataset drawn from one benchmark fail to perform well in others, even if the notion of inference assumed in these benchmarks is the same or similar. We train six high performing neural network models on different datasets and show that each one of these has problems of generalizing when we replace the original test set with a test set taken from another corpus designed for the same task. In light of these results, we argue that most of the current neural network models are not able to generalize well in the task of natural language inference. We find that using large pre-trained language models helps with transfer learning when the datasets are similar enough. Our results also highlight that the current NLI datasets do not cover the different nuances of inference extensively enough. | Natural Language Inference (NLI) has attracted considerable interest in the NLP community and, recently, a large number of neural network-based systems have been proposed to deal with the task. One can attempt a rough categorization of these systems into: a) sentence encoding systems, and b) other neural network systems. Both of them have been very successful, with the state of the art on the SNLI and MultiNLI datasets being 90.4%, which is our baseline with BERT BIBREF0 , and 86.7% BIBREF0 respectively. However, a big question with respect to these systems is their ability to generalize outside the specific datasets they are trained and tested on. Recently, BIBREF1 have shown that state-of-the-art NLI systems break considerably easily when, instead of tested on the original SNLI test set, they are tested on a test set which is constructed by taking premises from the training set and creating several hypotheses from them by changing at most one word within the premise. The results show a very significant drop in accuracy for three of the four systems. The system that was more difficult to break and had the least loss in accuracy was the system by BIBREF2 which utilizes external knowledge taken from WordNet BIBREF3 . In this paper we show that NLI systems that have been very successful in specific NLI benchmarks, fail to generalize when trained on a specific NLI dataset and then these trained models are tested across test sets taken from different NLI benchmarks. The results we get are in line with BIBREF1 , showing that the generalization capability of the individual NLI systems is very limited, but, what is more, they further show the only system that was less prone to breaking in BIBREF1 , breaks too in the experiments we have conducted. We train six different state-of-the-art models on three different NLI datasets and test these trained models on an NLI test set taken from another dataset designed for the same NLI task, namely for the task to identify for sentence pairs in the dataset if one sentence entails the other one, if they are in contradiction with each other or if they are neutral with respect to inferential relationship. One would expect that if a model learns to correctly identify inferential relationships in one dataset, then it would also be able to do so in another dataset designed for the same task. 
Furthermore, two of the datasets, SNLI BIBREF4 and MultiNLI BIBREF5 , have been constructed using the same crowdsourcing approach and annotation instructions BIBREF5 , leading to datasets with the same or at least very similar definition of entailment. It is therefore reasonable to expect that transfer learning between these datasets is possible. As SICK BIBREF6 dataset has been machine-constructed, a bigger difference in performance is expected. In this paper we show that, contrary to our expectations, most models fail to generalize across the different datasets. However, our experiments also show that BERT BIBREF0 performs much better than the other models in experiments between SNLI and MultiNLI. Nevertheless, even BERT fails when testing on SICK. In addition to the negative results, our experiments further highlight the power of pre-trained language models, like BERT, in NLI. The negative results of this paper are significant for the NLP research community as well as to NLP practice as we would like our best models to not only to be able to perform well in a specific benchmark dataset, but rather capture the more general phenomenon this dataset is designed for. The main contribution of this paper is that it shows that most of the best performing neural network models for NLI fail in this regard. The second, and equally important, contribution is that our results highlight that the current NLI datasets do not capture the nuances of NLI extensively enough. |
172 | What dataset do they use? | They used the Vietnamese Wikipedia and Vietnamese newspapers to pretrain embeddings, and the dataset provided in the HSD task to train the model (details not mentioned in the paper). | Nowadays, social network sites (SNSs) such as Facebook and Twitter are common places where people show their opinions and sentiments and share information with others. However, some people use SNSs to post abuse and harassment threats in order to prevent other SNS users from expressing themselves as well as seeking different opinions. To deal with this problem, SNSs have to use a lot of resources, including people, to clean the aforementioned content. In this paper, we propose a supervised learning model based on the ensemble method to solve the problem of detecting hate content on SNSs in order to make conversations on SNSs more effective. Our proposed model took first place on the public dashboard with a 0.730 F1 macro-score and third place on the private dashboard with a 0.584 F1 macro-score at the sixth international workshop on Vietnamese Language and Speech Processing 2019. | Currently, social networks are extremely popular. Some of the biggest ones, including Facebook, Twitter, and YouTube, have extremely large numbers of users. Thus, controlling the content on those platforms is essential. For years, social media companies such as Twitter, Facebook, and YouTube have been investing hundreds of millions of euros in this task BIBREF0, BIBREF1. However, their effort is not enough since such efforts are primarily based on manual moderation to identify and delete offensive materials. The process is labour-intensive, time-consuming, and not sustainable or scalable in reality BIBREF2, BIBREF0, BIBREF3. In the sixth international workshop on Vietnamese Language and Speech Processing (VLSP 2019), the Hate Speech Detection (HSD) task was proposed as one of the shared tasks to handle the problem of controlling content on SNSs. HSD requires building a multi-class classification model that is capable of classifying an item into one of 3 classes (hate, offensive, clean). Hate speech (hate): an item is identified as hate speech if it (1) targets individuals or groups on the basis of their characteristics; (2) demonstrates a clear intention to incite harm, or to promote hatred; (3) may or may not use offensive or profane words. Offensive but not hate speech (offensive): an item (posts/comments) may contain offensive words but it does not target individuals or groups on the basis of their characteristics. Neither offensive nor hate speech (clean): a normal item, which does not contain offensive language or hate speech. The term `hate speech' was formally defined as `any communication that disparages a person or a group on the basis of some characteristics (to be referred to as types of hate or hate classes) such as race, colour, ethnicity, gender, sexual orientation, nationality, religion, or other characteristics' BIBREF4. Much research has been conducted in recent years to develop automatic methods for hate speech detection in the social media domain. These typically employ semantic content analysis techniques built on Natural Language Processing (NLP) and Machine Learning (ML) methods. The task typically involves classifying textual content into non-hate or hateful. This HSD task is much more difficult as it requires classifying text into three classes, with the hate and offensive classes being quite hard to classify even for humans. In this paper, we propose a method to handle this HSD problem. 
Our system combines multiple text representations and model architectures in order to make diverse predictions. The system is heavily based on the ensemble method. The next section presents the details of our system, including data preparation (how we clean text and build text representations), the architecture of the models used in the system, and how we combine them together. The third section reports our experiments and results in the HSD shared task at VLSP 2019. The final section is our conclusion, with the advantages and disadvantages of the system followed by our perspectives. |
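The ensemble described in this row can be illustrated with a small soft-voting sketch. This is not the authors' actual system; it simply averages the class-probability outputs of independently trained classifiers built on different text representations (the word- and character-level TF-IDF components and the toy data below are assumptions for illustration only).

```python
# Sketch: soft-voting ensemble over different text representations for a
# three-class (clean / offensive / hate) comment classifier.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["example clean comment", "example offensive comment",
               "example hateful comment"]  # toy placeholders
train_labels = [0, 1, 2]  # 0 = clean, 1 = offensive, 2 = hate

components = [
    make_pipeline(TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
                  LogisticRegression(max_iter=1000)),
    make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                  LogisticRegression(max_iter=1000)),
]
for component in components:
    component.fit(train_texts, train_labels)

def ensemble_predict(texts):
    # Average predicted class probabilities across components, then arg-max.
    probs = np.mean([c.predict_proba(texts) for c in components], axis=0)
    return probs.argmax(axis=1)

print(ensemble_predict(["another comment to classify"]))
```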
178 | Do they use word embeddings alone, or do they replace some previous features of the model with word embeddings? | They use them in addition to the previous model: they add a new edge between two words if their word embeddings are similar. | Word co-occurrence networks have been employed to analyze texts in both practical and theoretical scenarios. Despite the relative success in several applications, traditional co-occurrence networks fail to establish links between similar words whenever they appear distant in the text. Here we investigate whether the use of word embeddings as a tool to create virtual links in co-occurrence networks may improve the quality of classification systems. Our results revealed that the discriminability in the stylometry task is improved when using GloVe, Word2Vec and FastText. In addition, we found that optimized results are obtained when stopwords are not disregarded and a simple global thresholding strategy is used to establish virtual links. Because the proposed approach is able to improve the representation of texts as complex networks, we believe that it could be extended to study other natural language processing tasks. Likewise, theoretical language studies could benefit from the adopted enriched representation of word co-occurrence networks. | The ability to construct complex and diverse linguistic structures is one of the main features that set us apart from all other species. Despite its ubiquity, some language aspects remain unknown. Topics such as language origin and evolution have been studied by researchers from diverse disciplines, including Linguistics, Computer Science, Physics and Mathematics BIBREF0, BIBREF1, BIBREF2. In order to better understand the underlying language mechanisms and universal linguistic properties, several models have been developed BIBREF3, BIBREF4. A particular language representation regards texts as complex systems BIBREF5. Written texts can be considered as complex networks (or graphs), where nodes could represent syllables, words, sentences, paragraphs or even larger chunks BIBREF5. In such models, network edges represent the proximity between nodes, e.g. the frequency of the co-occurrence of words. Several interesting results have been obtained from networked models, such as the explanation of Zipf's Law as a consequence of the least effort principle and theories on the nature of syntactical relationships BIBREF6, BIBREF7. In a more practical scenario, text networks have been used in text classification tasks BIBREF8, BIBREF9, BIBREF10. The main advantage of the model is that it does not rely on deep semantic information to obtain competitive results. Another advantage of graph-based approaches is that, when combined with other approaches, they yield competitive results BIBREF11. A simple, yet recurrent text model is the well-known word co-occurrence network. After optional textual pre-processing steps, in a co-occurrence network each different word becomes a node and edges are established via co-occurrence in a desired window. A common strategy connects only adjacent words in the so-called word adjacency networks. While the co-occurrence representation yields good results in classification scenarios, some important features are not considered in the model. For example, long-range syntactical links, though less frequent than adjacent syntactical relationships, might be disregarded in a simple word adjacency approach BIBREF12. In addition, semantically similar words not sharing the same lemma are mapped into distinct nodes.
In order to address these issues, here we introduce a modification of the traditional network representation by establishing additional edges, referred to as “virtual” edges. In the proposed model, in addition to the co-occurrence edges, we link two nodes (words) if their word embedding representations are similar. While this approach still does not merge similar nodes into the same concept, similar nodes are explicitly linked via virtual edges. Our main objective here is to evaluate whether such an approach is able to improve the discriminability of word co-occurrence networks in a typical text network classification task. We evaluate the methodology for different embedding techniques, including GloVe, Word2Vec and FastText. We also investigate different thresholding strategies to establish virtual links. Our results revealed, as a proof of principle, that the proposed approach is able to improve the discriminability of the classification when compared to the traditional co-occurrence network. While the gain in performance depended upon the text length being considered, we found relevant gains for intermediate text lengths. Additional results also revealed that a simple thresholding strategy combined with the use of stopwords tends to yield the best results. We believe that the proposed representation could be applied in other text classification tasks, which could lead to potential gains in performance. Because the inclusion of virtual edges is a simple technique to make the network denser, such an approach can benefit networked representations with a limited number of nodes and edges. This representation could also shed light on language mechanisms in theoretical studies relying on the representation of text as complex networks. Potential novel research lines leveraging the adopted approach to improve the characterization of texts in other applications are presented in the conclusion. |
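The enrichment step described in this row is concrete enough to sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes pre-trained word vectors are available through a simple `vectors[word]` lookup (standing in for GloVe, Word2Vec or FastText) and uses one global cosine-similarity threshold to add virtual edges on top of a plain word adjacency network.

```python
# Sketch: word adjacency network densified with "virtual" edges between words
# whose embeddings are similar above a global threshold.
import itertools
import networkx as nx
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_enriched_network(tokens, vectors, threshold=0.8):
    graph = nx.Graph()
    # Standard co-occurrence edges: link adjacent words.
    for w1, w2 in zip(tokens, tokens[1:]):
        graph.add_edge(w1, w2, kind="cooccurrence")
    # Virtual edges: link any two vocabulary words with similar embeddings.
    vocab = [w for w in set(tokens) if w in vectors]
    for w1, w2 in itertools.combinations(vocab, 2):
        if not graph.has_edge(w1, w2) and cosine(vectors[w1], vectors[w2]) >= threshold:
            graph.add_edge(w1, w2, kind="virtual")
    return graph

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
toy_vectors = {w: rng.normal(size=50) for w in ["cat", "feline", "sat", "mat"]}
network = build_enriched_network(["cat", "sat", "mat", "feline"], toy_vectors)
print(network.number_of_nodes(), network.number_of_edges())
```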
179 | How many natural language explanations are human-written? | Totally 6980 validation and test image-sentence pairs have been corrected. | The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning. However, the automatic way in which SNLI-VE has been assembled (via combining parts of two related datasets) gives rise to a large number of errors in the labels of this corpus. In this paper, we first present a data collection effort to correct the class with the highest error rate in SNLI-VE. Secondly, we re-evaluate an existing model on the corrected corpus, which we call SNLI-VE-2.0, and provide a quantitative comparison with its performance on the non-corrected corpus. Thirdly, we introduce e-SNLI-VE-2.0, which appends human-written natural language explanations to SNLI-VE-2.0. Finally, we train models that learn from these explanations at training time, and output such explanations at testing time. | Inspired by textual entailment BIBREF0, Xie BIBREF1 introduced the visual-textual entailment (VTE) task, which considers semantic entailment between a premise image and a textual hypothesis. Semantic entailment consists in determining if the hypothesis can be concluded from the premise, and assigning to each pair of (premise image, textual hypothesis) a label among entailment, neutral, and contradiction. In Figure FIGREF3, the label for the first image-sentence pair is entailment, because the hypothesis states that “a bunch of people display different flags”, which can be clearly derived from the image. On the contrary, the second image-sentence pair is labelled as contradiction, because the hypothesis stating that “people [are] running a marathon” contradicts the image with static people. Xie also propose the SNLI-VE dataset as the first dataset for VTE. SNLI-VE is built from the textual entailment SNLI dataset BIBREF0 by replacing textual premises with the Flickr30k images that they originally described BIBREF2. However, images contain more information than their descriptions, which may entail or contradict the textual hypotheses (see Figure FIGREF3). As a result, the neutral class in SNLI-VE has substantial labelling errors. Vu BIBREF3 estimated ${\sim }31\%$ errors in this class, and ${\sim }1\%$ for the contradiction and entailment classes. Xie BIBREF1 introduced the VTE task under the name of “visual entailment”, which could imply recognizing entailment between images only. This paper prefers to follow Suzuki BIBREF4 and call it “visual-textual entailment” instead, as it involves reasoning on image-sentence pairs. In this work, we first focus on decreasing the error in the neutral class by collecting new labels for the neutral pairs in the validation and test sets of SNLI-VE, using Amazon Mechanical Turk (MTurk). To ensure high quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. Secondly, we re-evaluate current image-text understanding systems, such as the bottom-up top-down attention network (BUTD) BIBREF5 on VTE using our corrected dataset, which we call SNLI-VE-2.0. Thirdly, we introduce the e-SNLI-VE-2.0 corpus, which we form by appending human-written natural language explanations to SNLI-VE-2.0. These explanations were collected in e-SNLI BIBREF6 to support textual entailment for SNLI. 
For the same reasons as above, we re-annotate the explanations for the neutral pairs in the validation and test sets, while keeping the explanations from e-SNLI for all the rest. Finally, we extend a current VTE model with the capacity of learning from these explanations at training time and outputting an explanation for each predicted label at testing time. |
180 | What is the dataset used as input to the Word2Vec algorithm? | An extraction of Italian Wikipedia and Google News, producing a final vocabulary of 618,224 words | Word representation is fundamental in NLP tasks, because it is precisely from the coding of semantic closeness between words that it is possible to think of teaching a machine to understand text. Despite the spread of word embedding concepts, there are still few achievements in linguistic contexts other than English. In this work, analysing the semantic capacity of the Word2Vec algorithm, an embedding for the Italian language is produced. Parameter settings such as the number of epochs, the size of the context window and the number of negatively backpropagated samples are explored. | In order to make human language comprehensible to a computer, it is obviously essential to provide some word encoding. The simplest approach is the one-hot encoding, where each word is represented by a sparse vector with dimension equal to the vocabulary size. In addition to the storage need, the main problem of this representation is that any concept of word similarity is completely ignored (each vector is orthogonal and equidistant from each other). On the contrary, the understanding of natural language cannot be separated from the semantic knowledge of words, which conditions a different closeness between them. Indeed, the semantic representation of words is the basic problem of Natural Language Processing (NLP). Therefore, there is a clear need to code words in a space that is linked to their meaning, in order to facilitate a machine in the potential task of “understanding" it. In particular, starting from the seminal work BIBREF0, words are usually represented as dense distributed vectors that preserve their uniqueness but, at the same time, are able to encode the similarities. These word representations are called Word Embeddings since the words (points in a space of vocabulary size) are mapped into an embedding space of lower dimension. Supported by the distributional hypothesis BIBREF1 BIBREF2, which states that a word can be semantically characterized based on its context (i.e. the words that surround it in the sentence), in recent years many word embedding representations have been proposed (a fairly complete and updated review can be found in BIBREF3 and BIBREF4). These methods can be roughly categorized into two main classes: prediction-based models and count-based models. The former are generally linked to work on Neural Network Language Models (NNLM) and use a training algorithm that predicts a word given its local context, while the latter leverage word-context statistics and co-occurrence counts over an entire corpus. The main prediction-based and count-based models are respectively Word2Vec BIBREF5 (W2V) and GloVe BIBREF6. Despite the widespread use of these concepts BIBREF7 BIBREF8, few contributions exist regarding the development of a W2V that is not in English. In particular, no detailed analysis of an Italian W2V seems to be present in the literature, except for BIBREF9 and BIBREF10. However, both seem to leave out some elements of fundamental interest in the learning of the neural network, in particular relating to the number of epochs performed during learning, reducing the importance that it may have on the final result. In BIBREF9, for example, this leads to the simplistic conclusion that (being able to organize with more freedom in space) the more space is given to the vectors, the better the results may be.
However, the problem in complex structures is that large embedding spaces can make training too difficult. In this work, with the size of the embedding set to a commonly used average value, various parameter settings are analysed as the number of training epochs changes, depending on the window size and the number of negatively backpropagated samples. |
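The exploration described above maps naturally onto the standard Word2Vec API. The following is a minimal sketch using gensim; the toy corpus and the specific hyper-parameter grid are placeholders for illustration, not the settings reported in the paper.

```python
# Sketch: train skip-gram Word2Vec while varying epochs, window size and the
# number of negative samples, at a fixed embedding dimensionality.
from gensim.models import Word2Vec

# Placeholder corpus: one tokenized sentence per list entry (a real run would
# stream tokenized sentences from an Italian Wikipedia dump).
sentences = [
    ["la", "rete", "impara", "le", "parole"],
    ["parole", "simili", "hanno", "vettori", "vicini"],
]

for window in (5, 10):
    for negative in (5, 15):
        for epochs in (5, 20):
            model = Word2Vec(
                sentences,
                vector_size=300,   # fixed, commonly used dimensionality
                sg=1,              # skip-gram
                window=window,
                negative=negative,
                epochs=epochs,
                min_count=1,
                workers=4,
            )
            # A real experiment would score each model on analogy/similarity tests.
            print(window, negative, epochs, len(model.wv))
```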
182 | What methodology is used to compensate for limited labelled data? | Influential tweeters (whom they define as individuals certain to have a classifiable sentiment regarding the topic at hand) are used to label tweets in bulk in the absence of manually-labeled tweets. | While it is well-documented that climate change accepters and deniers have become increasingly polarized in the United States over time, there has been no large-scale examination of whether these individuals are prone to changing their opinions as a result of natural external occurrences. On the sub-population of Twitter users, we examine whether climate change sentiment changes in response to five separate natural disasters occurring in the U.S. in 2018. We begin by showing that relevant tweets can be classified with over 75% accuracy as either accepting or denying climate change when using our methodology to compensate for limited labeled data; results are robust across several machine learning models and yield geographic-level results in line with prior research. We then apply RNNs to conduct a cohort-level analysis showing that the 2018 hurricanes yielded a statistically significant increase in average tweet sentiment affirming climate change. However, this effect does not hold for the 2018 blizzard and wildfires studied, implying that Twitter users' opinions on climate change are fairly ingrained on this subset of natural disasters. | Much prior work has been done at the intersection of climate change and Twitter, such as tracking climate change sentiment over time BIBREF2 , finding correlations between Twitter climate change sentiment and seasonal effects BIBREF3 , and clustering Twitter users based on climate mentalities using network analysis BIBREF4 . Throughout, Twitter has been accepted as a powerful tool given the magnitude and reach of samples unattainable from standard surveys. However, the aforementioned studies are not scalable with regard to training data, do not use more recent sentiment analysis tools (such as neural nets), and do not consider unbiased comparisons pre- and post- various climate events (which would allow for a more concrete evaluation of shocks to climate change sentiment). This paper aims to address these three concerns as follows. First, we show that machine learning models formed using our labeling technique can accurately predict tweet sentiment (see Section SECREF2 ). We introduce a novel method to intuit binary sentiments of large numbers of tweets for training purposes. Second, we quantify unbiased outcomes from these predicted sentiments (see Section SECREF4 ). We do this by comparing sentiments within the same cohort of Twitter users tweeting both before and after specific natural disasters; this removes bias from over-weighting Twitter users who are only compelled to compose tweets after a disaster. |
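The bulk-labeling trick summarized in the answer can be read as weak supervision: each tweet inherits the known stance of the influential account it comes from, and a classifier is then trained on those proxy labels. The sketch below is an assumed illustration only; the account names, stances and model choice are invented and are not taken from the paper.

```python
# Sketch: weakly label tweets via the known stance of influential authors,
# then train a text classifier on the resulting proxy labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical influencer stances: 1 = accepts climate change, 0 = denies it.
influencer_stance = {"@climate_scientist": 1, "@denial_pundit": 0}

tweets = [
    {"author": "@climate_scientist", "text": "warming trends keep accelerating"},
    {"author": "@denial_pundit", "text": "the climate scare is wildly exaggerated"},
]

texts = [t["text"] for t in tweets if t["author"] in influencer_stance]
labels = [influencer_stance[t["author"]] for t in tweets
          if t["author"] in influencer_stance]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)  # proxy labels stand in for manual annotation
print(classifier.predict(["sea levels are rising faster than expected"]))
```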
183 | What are the baseline state-of-the-art models? | Stanford NER, BiLSTM+CRF, LSTM+CNN+CRF, T-NER and BiLSTM+CNN+Co-Attention | Named Entity Recognition (NER) from social media posts is a challenging task. User-generated content, which forms the nature of social media, is noisy and contains grammatical and linguistic errors. This noisy content makes it much harder for tasks such as named entity recognition. However, some applications like automatic journalism or information retrieval from social media require more information about entities mentioned in groups of social media posts. Conventional methods applied to structured and well-typed documents provide acceptable results, but compared to new user-generated media these methods are not satisfactory. One valuable piece of information about an entity is the image related to the text. Combining this multimodal data reduces ambiguity and provides wider information about the entities mentioned. In order to address this issue, we propose a novel deep learning approach utilizing multimodal deep learning. Our solution is able to provide more accurate results on the named entity recognition task. Experimental results, namely the precision, recall and F1-score metrics, show the superiority of our work compared to other state-of-the-art NER solutions. | A common social media delivery system such as Twitter supports various media types like video, image and text. This media allows users to share their short posts called Tweets. Users are able to share their tweets with other users that are usually following the source user. However, there are rules to protect the privacy of users from unauthorized access to their timeline BIBREF0. The very nature of user interactions in Twitter micro-blogging social media is oriented towards their daily life, first-hand news reporting and engaging in various events (sports, political stands etc.). According to studies, news on Twitter is propagated and reported faster than in conventional news media BIBREF1. Thus, extracting first-hand news and entities occurring in this fast and versatile online media gives valuable information. However, the abridged and noisy content of Tweets makes tasks such as named entity recognition and information retrieval even more difficult and challenging BIBREF2. The task of tracking and recovering information from social media posts is a concise definition of information retrieval in social media BIBREF3, BIBREF4. However, many challenges are blocking useful solutions to this issue, namely the noisy nature of user-generated content and the perplexity of words used in short posts. Sometimes different entities are called exactly the same; for example, "Michael Jordan" refers to a basketball player and also a computer scientist in the field of artificial intelligence. The only thing that divides both of these is the context in which the entity appeared. If the context refers to something related to AI, the reader can conclude "Michael Jordan" is the scientist, and if the context refers to sports and basketball, then he is the basketball player. The task of distinguishing between different named entities that appear to have the same textual appearance is called named entity disambiguation. There is more useful data available about an entity than plain text alone. For example, images and visual data are more descriptive than just text for tasks such as named entity recognition and disambiguation BIBREF5, while some methods only use the textual data BIBREF6.
The provided extra information is closely related to the textual data. As a clear example, figure FIGREF1 shows a tweet containing an image. Combining these multimodal data in order to achieve better performance in NLP-related tasks is a promising alternative that has been explored recently. An NLP task such as named entity recognition in social media is a particularly challenging task because users tend to invent, mistype and epitomize words. Sometimes these words correspond to named entities, which makes the recognition task even more difficult BIBREF7. In some cases, the context that carries the entity (surrounding words and related image) is more descriptive than the entity word presentation BIBREF8. To find a solution to the issues at hand, and keeping multimodal data in mind, recognition of named entities from social media with the help of images has become a research interest, in contrast to the NER task on conventional text. Researchers in this field have tried to propose multimodal architectures based on deep neural networks with multimodal input that are capable of combining text and image BIBREF9, BIBREF8, BIBREF10. In this paper we provide a better solution in terms of performance by proposing a novel method called CWI (Character-Word-Image model). We use a multimodal deep neural network to tackle the NER task in micro-blogging social media. The rest of the paper is organized as follows: section SECREF2 provides an overview of previous methods; section SECREF3 describes the method we propose; section SECREF4 shows experimental evaluation and test results; finally, section SECREF5 concludes the whole article. |
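The character-word-image idea behind CWI can be illustrated with a compact fusion module. This is only a schematic sketch under assumptions, not the authors' architecture: it fuses word embeddings with a single pre-extracted image feature vector before a BiLSTM tagger, and omits character-level features, attention and the CRF layer used by the stronger baselines.

```python
# Sketch: fuse word embeddings with a global image feature for token tagging.
import torch
import torch.nn as nn

class TinyMultimodalTagger(nn.Module):
    def __init__(self, vocab_size, img_dim=512, word_dim=100, hidden=128, n_tags=9):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.img_proj = nn.Linear(img_dim, word_dim)
        self.lstm = nn.LSTM(word_dim * 2, hidden, batch_first=True,
                            bidirectional=True)
        self.tagger = nn.Linear(hidden * 2, n_tags)

    def forward(self, token_ids, image_feat):
        # token_ids: (batch, seq_len); image_feat: (batch, img_dim)
        words = self.word_emb(token_ids)
        img = self.img_proj(image_feat).unsqueeze(1).expand_as(words)
        fused = torch.cat([words, img], dim=-1)   # concat text and image per token
        out, _ = self.lstm(fused)
        return self.tagger(out)                   # per-token tag logits

model = TinyMultimodalTagger(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 12)), torch.randn(2, 512))
print(logits.shape)  # (2, 12, 9)
```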
187 | How do they extract causality from text? | They identify documents that contain the unigrams 'caused', 'causing', or 'causes' | Identifying and communicating relationships between causes and effects is important for understanding our world, but is affected by language structure, cognitive and emotional biases, and the properties of the communication medium. Despite the increasing importance of social media, much remains unknown about causal statements made online. To study real-world causal attribution, we extract a large-scale corpus of causal statements made on the Twitter social network platform as well as a comparable random control corpus. We compare causal and control statements using statistical language and sentiment analysis tools. We find that causal statements have a number of significant lexical and grammatical differences compared with controls and tend to be more negative in sentiment than controls. Causal statements made online tend to focus on news and current events, medicine and health, or interpersonal relationships, as shown by topic models. By quantifying the features and potential biases of causality communication, this study improves our understanding of the accuracy of information and opinions found online. | Social media and online social networks now provide vast amounts of data on human online discourse and other activities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . With so much communication taking place online and with social media being capable of hosting powerful misinformation campaigns BIBREF7 such as those claiming vaccines cause autism BIBREF8 , BIBREF9 , it is more important than ever to better understand the discourse of causality and the interplay between online communication and the statement of cause and effect. Causal inference is a crucial way that humans comprehend the world, and it has been a major focus of philosophy, statistics, mathematics, psychology, and the cognitive sciences. Philosophers such as Hume and Kant have long argued whether causality is a human-centric illusion or the discovery of a priori truth BIBREF10 , BIBREF11 . Causal inference in science is incredibly important, and researchers have developed statistical measures such as Granger causality BIBREF12 , mathematical and probabilistic frameworks BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , and text mining procedures BIBREF17 , BIBREF18 , BIBREF19 to better infer causal influence from data. In the cognitive sciences, the famous perception experiments of Michotte et al. led to a long line of research exploring the cognitive biases that humans possess when attempting to link cause and effect BIBREF20 , BIBREF21 , BIBREF22 . How humans understand and communicate cause and effect relationships is complicated, and is influenced by language structure BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 and sentiment or valence BIBREF27 . A key finding is that the perceived emphasis or causal weight changes between the agent (the grammatical construct responsible for a cause) and the patient (the construct effected by the cause) depending on the types of verbs used to describe the cause and effect. Researchers have hypothesized BIBREF28 that this is because of the innate weighting property of the verbs in the English language that humans use to attribute causes and effects. Another finding is the role of a valence bias: the volume and intensity of causal reasoning may increase due to negative feedback or negative events BIBREF27 . 
Despite these long lines of research, causal attributions made via social media or online social networks have not been well studied. The goal of this paper is to explore the language and topics of causal statements in a large corpus of social media taken from Twitter. We hypothesize that language and sentiment biases play a significant role in these statements, and that tools from natural language processing and computational linguistics can be used to study them. We do not attempt to study the factual correctness of these statements or offer any degree of verification, nor do we exhaustively identify and extract all causal statements from these data. Instead, here we focus on statements that are, with high certainty, causal statements, with the goal of better understanding key characteristics of causal statements that differ from everyday online communication. The rest of this paper is organized as follows: In Sec. "Materials and Methods" we discuss our materials and methods, including the dataset we studied, how we preprocessed that data and extracted a `causal' corpus and a corresponding `control' corpus, and the details of the statistical and language analysis tools we studied these corpora with. In Sec. "Results" we present results using these tools to compare the causal statements to control statements. We conclude with a discussion in Sec. "Discussion". |
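The extraction rule quoted in the answer (keep documents containing the unigrams 'caused', 'causing' or 'causes') is simple enough to sketch, together with drawing a comparable random control sample. The snippet below is an illustration with toy data; the paper's actual pipeline involves additional preprocessing and matching that is not shown here.

```python
# Sketch: split a tweet collection into a 'causal' corpus (contains a causal
# unigram) and a randomly sampled 'control' corpus of comparable size.
import random

CAUSAL_UNIGRAMS = {"caused", "causing", "causes"}

def is_causal(text):
    return any(token in CAUSAL_UNIGRAMS for token in text.lower().split())

tweets = [
    "the storm caused massive flooding downtown",
    "coffee before bed causes insomnia apparently",
    "what a beautiful sunrise this morning",
    "traffic was terrible on the way home",
]

causal_corpus = [t for t in tweets if is_causal(t)]
control_pool = [t for t in tweets if not is_causal(t)]
control_corpus = random.sample(control_pool,
                               k=min(len(causal_corpus), len(control_pool)))

print(len(causal_corpus), len(control_corpus))
```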