Dataset columns:
Unnamed: 0: int64, values 0 to 886
question: string, lengths 19 to 151
answer: string, lengths 1 to 1.08k
abstract: string, lengths 279 to 2.02k
introduction: string, lengths 52 to 9.04k
382
What is the size of the second dataset?
1,000 labeled dialogues for training and 240 unlabeled dialogues for evaluation
We present an overview of the EmotionX 2019 Challenge, held at the 7th International Workshop on Natural Language Processing for Social Media (SocialNLP), in conjunction with IJCAI 2019. The challenge entailed predicting emotions in spoken and chat-based dialogues using augmented EmotionLines datasets. EmotionLines contains two distinct datasets: the first includes excerpts from the episode scripts of a US-based TV sitcom (Friends) and the second contains online chats (EmotionPush). A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their predictions for performance evaluation. The top-scoring team achieved a micro-F1 score of 81.5% for the spoken-based dialogues (Friends) and 79.5% for the chat-based dialogues (EmotionPush).
Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0. Detecting and recognizing emotions is a difficult task for machines. Nevertheless, following the successful use of computational linguistics to analyze sentiment in texts, there is growing interest in the more difficult task of the automatic detection and classification of emotions in texts. The detection of emotions in text is a complicated challenge for multiple reasons: first, emotions are complex entities, and no universally-agreed upon psychological model of emotions exists. Second, isolated texts convey less information compared to a complete human interaction in which emotions can be detected from the other person's facial expressions, listening to their tone of voice, etc. However, due to important applications in fields such as psychology, marketing, and political science, research in this topic is now expanding rapidly BIBREF1. In particular, dialogue systems such as those available on social media or instant messaging services are rich sources of textual data and have become the focus of much attention. Emotions of utterances within dialogues can be detected more precisely due to the presence of more context. For example, a single utterance (“OK!”) might convey different emotions (happiness, anger, surprise), depending on its context. Taking all this into consideration, in 2018 the EmotionX Challenge asked participants to detect emotions in complete dialogues BIBREF2. Participants were challenged to classify utterances using Ekman's well-known theory of six basic emotions (sadness, happiness, anger, fear, disgust, and surprise) BIBREF3. For the 2019 challenge, we built and expanded upon the 2018 challenge. We provided an additional 20% of data for training, as well as augmenting the dataset using two-way translation. The metric used was micro-F1 score, and we also report the macro-F1 score. A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their data for performance evaluation, and seven of them submitted technical papers for the workshop. Approaches used by the teams included deep neural networks and SVM classifiers. In the following sections we expand on the challenge and the data. We then briefly describe the various approaches used by the teams, and conclude with a summary and some notes. Detailed descriptions of the various submissions are available in the teams' technical reports.
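The challenge metric was micro-F1, with macro-F1 also reported. As a minimal sketch of how the two aggregate differently over Ekman's six emotion classes, the snippet below uses scikit-learn's f1_score; the gold labels and predictions are invented for illustration and are not challenge data.

```python
# Minimal sketch (not the official scorer): micro- vs macro-F1 over
# Ekman-style emotion labels, using scikit-learn. The example labels
# and predictions below are invented for illustration only.
from sklearn.metrics import f1_score

emotions = ["sadness", "happiness", "anger", "fear", "disgust", "surprise"]

gold = ["happiness", "anger", "happiness", "surprise", "sadness"]
pred = ["happiness", "anger", "surprise", "surprise", "happiness"]

# Micro-F1 aggregates TP/FP/FN over all utterances (frequent classes weigh more);
# macro-F1 averages per-class F1 scores, so rare emotions count equally.
micro = f1_score(gold, pred, labels=emotions, average="micro")
macro = f1_score(gold, pred, labels=emotions, average="macro")
print(f"micro-F1={micro:.3f}  macro-F1={macro:.3f}")
```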
383
Why is big data not appropriate for this task?
Training embeddings on small corpora can improve the performance of some tasks
Word embeddings improve the performance of NLP systems by revealing the hidden structural relationships between words. Despite their success in many applications, word embeddings have seen very little use in computational social science NLP tasks, presumably due to their reliance on big data, and to a lack of interpretability. I propose a probabilistic model-based word embedding method which can recover interpretable embeddings, without big data. The key insight is to leverage mixed membership modeling, in which global representations are shared, but individual entities (i.e. dictionary words) are free to use these representations to uniquely differing degrees. I show how to train the model using a combination of state-of-the-art training techniques for word embeddings and topic models. The experimental results show an improvement in predictive language modeling of up to 63% in MRR over the skip-gram, and demonstrate that the representations are beneficial for supervised learning. I illustrate the interpretability of the models with computational social science case studies on State of the Union addresses and NIPS articles.
Word embedding models, which learn to encode dictionary words with vector space representations, have been shown to be valuable for a variety of natural language processing (NLP) tasks such as statistical machine translation BIBREF2 , part-of-speech tagging, chunking, and named entity recogition BIBREF3 , as they provide a more nuanced representation of words than a simple indicator vector into a dictionary. These models follow a long line of research in data-driven semantic representations of text, including latent semantic analysis BIBREF4 and its probabilistic extensions BIBREF5 , BIBREF6 . In particular, topic models BIBREF7 have found broad applications in computational social science BIBREF8 , BIBREF9 and the digital humanities BIBREF10 , where interpretable representations reveal meaningful insights. Despite widespread success at NLP tasks, word embeddings have not yet supplanted topic models as the method of choice in computational social science applications. I speculate that this is due to two primary factors: 1) a perceived reliance on big data, and 2) a lack of interpretability. In this work, I develop new models to address both of these limitations. Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important. Even more concerningly, BIBREF18 show that word embeddings can encode implicit sexist assumptions. 
This suggests that when trained on large generic corpora they could also encode the hegemonic worldview, which is inappropriate for studying, e.g., black female hip-hop artists' lyrics, or poetry by Syrian refugees, and could potentially lead to systematic bias against minorities, women, and people of color in NLP applications with real-world consequences, such as automatic essay grading and college admissions. In order to proactively combat these kinds of biases in large generic datasets, and to address computational social science tasks, there is a need for effective word embeddings for small datasets, so that the most relevant datasets can be used for training, even when they are small. To make word embeddings a viable alternative to topic models for applications in the social sciences, we further desire that the embeddings are semantically meaningful to human analysts. In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest. The primary insight is to use a data-efficient parameter sharing scheme via mixed membership modeling, with inspiration from topic models. Mixed membership models provide a flexible yet efficient latent representation, in which entities are associated with shared, global representations, but to uniquely varying degrees. I identify the skip-gram word2vec model of BIBREF0 , BIBREF1 as corresponding to a certain naive Bayes topic model, which leads to mixed membership extensions, allowing the use of fewer vectors than words. I show that this leads to better modeling performance without big data, as measured by predictive performance (when the context is leveraged for prediction), as well as to interpretable latent representations that are highly valuable for computational social science applications. The interpretability of the representations arises from defining embeddings for words (and hence, documents) in terms of embeddings for topics. My experiments also shed light on the relative merits of training embeddings on generic big data corpora versus domain-specific data.
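As a point of reference for the skip-gram baseline that the paper contrasts with its mixed membership model, the sketch below trains word2vec directly on a tiny stand-in corpus and inspects the nearest neighbours of "learning", mirroring the Table TABREF1 comparison; the corpus, hyperparameters, and gensim usage are illustrative assumptions, not the paper's setup.

```python
# Sketch of the skip-gram baseline discussed above: train word2vec directly on a
# small domain-specific corpus and inspect nearest neighbours. The two-document
# "corpus" here is a stand-in; real use would tokenize e.g. NIPS articles.
from gensim.models import Word2Vec

corpus = [
    ["we", "study", "reinforcement", "learning", "with", "function", "approximation"],
    ["deep", "learning", "models", "require", "careful", "regularization"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # low-dimensional, since the corpus is tiny
    window=5,
    min_count=1,
    sg=1,             # skip-gram, as in the big-data word2vec setting
    epochs=50,
)
print(model.wv.most_similar("learning", topn=5))
```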
386
Which major geographical regions are studied?
Northeast U.S, South U.S., West U.S. and Midwest U.S.
Recently, the emergence of the #MeToo trend on social media has empowered thousands of people to share their own sexual harassment experiences. This viral trend, in conjunction with the massive personal information and content available on Twitter, presents a promising opportunity to extract data driven insights to complement the ongoing survey based studies about sexual harassment in college. In this paper, we analyze the influence of the #MeToo trend on a pool of college followers. The results show that the majority of topics embedded in those #MeToo tweets detail sexual harassment stories, and there exists a significant correlation between the prevalence of this trend and official reports on several major geographical regions. Furthermore, we discover the outstanding sentiments of the #MeToo tweets using deep semantic meaning representations and their implications on the affected users experiencing different types of sexual harassment. We hope this study can raise further awareness regarding sexual misconduct in academia.
Sexual harassment is defined as "bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors." It is an ongoing problem in the U.S., especially within the higher education community. According to the National Sexual Violence Resource Center (NSRVC), one in five women and one in sixteen men are sexually assaulted while attending college. Beyond its prevalence, campus sexual harassment has been shown to have detrimental effects on students' well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, studies on college sexual misconduct usually collect data through questionnaires from a small sample of the college population, which might not be substantial enough to capture the big picture of sexual harassment risk across the entire student body. Alternatively, social media opens up new opportunities to gather a larger and more comprehensive amount of data and to mitigate the risk of false or inaccurate narratives from the studied subjects. On October 15, 2017, prominent Hollywood actress Alyssa Milano ignited the "MeToo" trend on social media by accusing Oscar-winning film producer Harvey Weinstein of multiple acts of sexual impropriety against herself and many other women in the film industry, calling for women and men to share their own sexual harassment experiences. According to CNN, over 1.7 million users had used the hashtag in 85 countries. Benefiting from the tremendous amount of data supplied by this trend and from existing state-of-the-art semantic parsers and generative statistical models, we propose a new approach to characterizing sexual harassment by mining tweets from college users with the hashtag #metoo on Twitter. Our main contributions are as follows. We investigate campus sexual harassment using a big-data approach by collecting data from Twitter. We employ traditional topic modeling and linear regression methods on a new dataset to highlight patterns of the ongoing troubling social behaviors at both institutional and individual levels. We propose a novel approach that combines domain-general deep semantic parsing and sentiment analysis to dissect personal narratives.
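One of the stated contributions is applying traditional topic modeling to the collected #MeToo tweets. The sketch below shows a generic LDA pass with scikit-learn over a handful of invented placeholder tweets; it is not the authors' pipeline or data.

```python
# Minimal LDA sketch in the spirit of the "traditional topic modeling" step:
# fit topics over a bag-of-words matrix of tweets. The tweets are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "sharing my story because metoo happened on campus",
    "support survivors reporting harassment at college",
    "the metoo movement changed how we talk about assault",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")
```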
387
What two components are included in their proposed framework?
evidence extraction and answer synthesis
In this paper, we present a novel approach to machine reading comprehension for the MS-MARCO dataset. Unlike the SQuAD dataset that aims to answer a question with exact text spans in a passage, the MS-MARCO dataset defines the task as answering a question from multiple passages and the words in the answer are not necessary in the passages. We therefore develop an extraction-then-synthesis framework to synthesize answers from extraction results. Specifically, the answer extraction model is first employed to predict the most important sub-spans from the passage as evidence, and the answer synthesis model takes the evidence as additional features along with the question and passage to further elaborate the final answers. We build the answer extraction model with state-of-the-art neural networks for single passage reading comprehension, and propose an additional task of passage ranking to help answer extraction in multiple passages. The answer synthesis model is based on the sequence-to-sequence neural networks with extracted evidences as features. Experiments show that our extraction-then-synthesis method outperforms state-of-the-art methods.
Machine reading comprehension BIBREF0 , BIBREF1 , which attempts to enable machines to answer questions after reading a passage or a set of passages, attracts great attentions from both research and industry communities in recent years. The release of the Stanford Question Answering Dataset (SQuAD) BIBREF0 and the Microsoft MAchine Reading COmprehension Dataset (MS-MARCO) BIBREF1 provides the large-scale manually created datasets for model training and testing of machine learning (especially deep learning) algorithms for this task. There are two main differences in existing machine reading comprehension datasets. First, the SQuAD dataset constrains the answer to be an exact sub-span in the passage, while words in the answer are not necessary in the passages in the MS-MARCO dataset. Second, the SQuAD dataset only has one passage for a question, while the MS-MARCO dataset contains multiple passages. Existing methods for the MS-MARCO dataset usually follow the extraction based approach for single passage in the SQuAD dataset. It formulates the task as predicting the start and end positions of the answer in the passage. However, as defined in the MS-MARCO dataset, the answer may come from multiple spans, and the system needs to elaborate the answer using words in the passages and words from the questions as well as words that cannot be found in the passages or questions. Table 1 shows several examples from the MS-MARCO dataset. Except in the first example the answer is an exact text span in the passage, in other examples the answers need to be synthesized or generated from the question and passage. In the second example the answer consists of multiple text spans (hereafter evidence snippets) from the passage. In the third example, the answer contains words from the question. In the fourth example, the answer has words that cannot be found in the passages or question. In the last example, all words are not in the passages or questions. In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers. Specifically, we develop the answer extraction model with state-of-the-art attention based neural networks which predict the start and end positions of evidence snippets. As multiple passages are provided for each question in the MS-MARCO dataset, we propose incorporating passage ranking as an additional task to improve the results of evidence extraction under a multi-task learning framework. We use the bidirectional recurrent neural networks (RNN) for the word-level representation, and then apply the attention mechanism BIBREF2 to incorporate matching information from question to passage at the word level. Next, we predict start and end positions of the evidence snippet by pointer networks BIBREF3 . Moreover, we aggregate the word-level matching information of each passage using the attention pooling, and use the passage-level representation to rank all candidate passages as an additional task. For the answer synthesis, we apply the sequence-to-sequence model to synthesize the final answer based on the extracted evidence. 
The question and passage are encoded by a bi-directional RNN in which the start and end positions of extracted snippet are labeled as features. We combine the question and passage information in the encoding part to initialize the attention-equipped decoder to generate the answer. We conduct experiments on the MS-MARCO dataset. The results show our extraction-then-synthesis framework outperforms our baselines and all other existing methods in terms of ROUGE-L and BLEU-1. Our contributions can be summarized as follows:
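The evidence-extraction step scores candidate start and end positions of evidence snippets over encoded passage states. The PyTorch sketch below strips that idea down to two linear scorers over per-token states; it is a simplification for illustration, not the attention-based pointer network or multi-task passage-ranking model described above.

```python
# Simplified sketch of the evidence-extraction idea: score every passage position
# as a candidate start/end of an evidence span. This is a stripped-down stand-in
# for the attention-based pointer networks described above, not the exact model.
import torch
import torch.nn as nn

class SpanBoundaryScorer(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.start_scorer = nn.Linear(hidden_size, 1)
        self.end_scorer = nn.Linear(hidden_size, 1)

    def forward(self, passage_states: torch.Tensor):
        # passage_states: (batch, passage_len, hidden_size), e.g. from a BiRNN
        start_logits = self.start_scorer(passage_states).squeeze(-1)
        end_logits = self.end_scorer(passage_states).squeeze(-1)
        return start_logits, end_logits

batch, plen, hidden = 2, 30, 64
states = torch.randn(batch, plen, hidden)
scorer = SpanBoundaryScorer(hidden)
start_logits, end_logits = scorer(states)
# Training would apply cross-entropy against gold start/end positions.
print(start_logits.argmax(dim=-1), end_logits.argmax(dim=-1))
```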
390
Which modifications do they make to well-established Seq2seq architectures?
Replacing attention mechanism to query-key attention, and adding a loss to make the attention mask as diagonal as possible
Recent trends in neural-network-based text-to-speech/speech synthesis pipelines have employed recurrent Seq2seq architectures that can synthesize realistic-sounding speech directly from text characters. These systems, however, have complex architectures and take a substantial amount of time to train. We introduce several modifications to these Seq2seq architectures that allow for faster training while also reducing the complexity of the model architecture. We show that our proposed model can achieve attention alignment much faster than previous architectures and that good audio quality can be achieved with a much smaller model. Sample audio is available at https://soundcloud.com/gary-wang-23/sets/tts-samples-for-cmpt-419.
Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0; these often include acoustic frontends, a duration model, an acoustic prediction model, and vocoder models. The complexity of the TTS problem, coupled with the requirement for deep domain expertise, means these systems are often brittle in design and produce unnatural synthesized speech. The recent push to utilize deep, end-to-end TTS architectures BIBREF1 BIBREF2 that can be trained on <text,audio> pairs shows that deep neural networks can indeed be used to synthesize realistic-sounding speech, while at the same time eliminating the need for complex sub-systems that need to be developed and trained separately. The problem of TTS can be summed up as a signal-inversion problem: given a highly compressed source signal (text), we need to invert or "decompress" it into audio. This is a difficult problem, as there are multiple ways for the same text to be spoken. In addition, unlike end-to-end translation or speech recognition, TTS outputs are continuous, and output sequences are much longer than input sequences. Recent work on neural TTS can be split into two camps: in one camp, Seq2Seq models with recurrent architectures are used BIBREF1 BIBREF3; in the other camp, fully convolutional Seq2Seq models are used BIBREF2. Our model belongs to the first of these classes, using recurrent architectures. Specifically, we make the following contributions:
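One of the modifications summarized in the answer above is a loss that pushes the text-to-audio attention mask toward the diagonal. The sketch below shows a common guided-attention-style penalty that implements this idea; the exact form and weighting used in the paper may differ.

```python
# Sketch of a "keep the alignment diagonal" penalty for TTS attention weights.
# This follows the common guided-attention formulation; the paper's exact loss
# may differ in its weighting or schedule.
import torch

def diagonal_attention_loss(attn: torch.Tensor, g: float = 0.2) -> torch.Tensor:
    """attn: (batch, decoder_steps, encoder_steps) attention weights."""
    batch, T, N = attn.shape
    t = torch.arange(T, dtype=torch.float32).unsqueeze(1) / max(T, 1)
    n = torch.arange(N, dtype=torch.float32).unsqueeze(0) / max(N, 1)
    # Penalty grows with distance from the diagonal t/T == n/N.
    weight = 1.0 - torch.exp(-((n - t) ** 2) / (2 * g * g))
    return (attn * weight.unsqueeze(0)).mean()

attn = torch.softmax(torch.randn(4, 120, 60), dim=-1)  # fake alignments
print(diagonal_attention_loss(attn))
```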
392
How was speed measured?
how long it takes the system to lemmatize a set number of words
In this paper we describe the complexity of building a lemmatizer for Arabic, which has a rich and complex derivational morphology, and we discuss the need for fast and accurate lemmatization to enhance Arabic Information Retrieval (IR) results. We also introduce a new data set that can be used to test lemmatization accuracy, and an efficient lemmatization algorithm that outperforms state-of-the-art Arabic lemmatization in terms of accuracy and speed. We make the data set and the code publicly available.
Lemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. The lemma is also called the dictionary form, or citation form, and it represents all inflected forms carrying the same meaning. Lemmatization is an important preprocessing step for many applications of text mining and question-answering systems, and research on Arabic Information Retrieval (IR) systems shows the need for representing Arabic words at the lemma level for many applications, including keyphrase extraction BIBREF0 and machine translation BIBREF1. In addition, lemmatization provides a productive way to generate generic keywords for search engines (SE) or labels for concept maps BIBREF2. A word stem is the core part of the word that never changes even with morphological inflections; the part that remains after prefix and suffix removal. Sometimes the stem of a word is different from its lemma; for example, the words believe, believed, believing, and unbelievable share the stem (believ-), and have the normalized word form (believe) standing for the infinitive of the verb (believe). While stemming tries to remove prefixes and suffixes from words that appear with inflections in free text, lemmatization tries to replace a word's suffixes with a (typically) different suffix to obtain its lemma. This extended abstract is organized as follows: Section SECREF2 shows some complexities of building an Arabic lemmatizer and surveys prior work on Arabic stemming and lemmatization; Section SECREF3 introduces the dataset that we created to test lemmatization accuracy; Section SECREF4 describes the algorithm of the system that we built, and results and error analysis are reported in Section SECREF5; and Section SECREF6 discusses the results and concludes the abstract.
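Per the question and answer above, lemmatization speed is measured as the time taken to lemmatize a fixed number of words. A minimal timing harness of that kind is sketched below; lemmatize is a hypothetical stand-in for whichever system is being benchmarked.

```python
# Minimal timing harness for the "time to lemmatize a set number of words"
# style of measurement described above. `lemmatize` is a hypothetical
# stand-in for the lemmatizer being benchmarked.
import time

def lemmatize(word: str) -> str:
    # Placeholder: a real Arabic lemmatizer would go here.
    return word

def words_per_second(words, n_runs: int = 3) -> float:
    best = float("inf")
    for _ in range(n_runs):
        start = time.perf_counter()
        for w in words:
            lemmatize(w)
        best = min(best, time.perf_counter() - start)
    return len(words) / best

sample = ["كتاب", "يكتبون", "مكتبة"] * 10000  # fixed-size word list
print(f"{words_per_second(sample):,.0f} words/second")
```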
397
For which languages most of the existing MRC datasets are created?
English
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention. However, existing reading comprehension datasets are mostly in English. To add diversity to reading comprehension datasets, in this paper we propose a new Chinese reading comprehension dataset to accelerate related research in the community. The proposed dataset contains two different types: cloze-style reading comprehension and user-query reading comprehension, associated with large-scale training data as well as human-annotated validation and hidden test sets. Along with this dataset, we also hosted the first Evaluation on Chinese Machine Reading Comprehension (CMRC-2017), which successfully attracted tens of participants, suggesting the potential impact of this dataset.
Machine Reading Comprehension (MRC), which aims to teach machines to comprehend human languages and answer questions based on reading materials, has become enormously popular in recent research. Among various reading comprehension tasks, cloze-style reading comprehension is relatively easy to follow due to its simple definition: the model is required to fill an exact word into the query to form a coherent sentence according to the document material. Several cloze-style reading comprehension datasets are publicly available, such as CNN/Daily Mail BIBREF0, Children's Book Test BIBREF1, and People Daily and Children's Fairy Tale BIBREF2. In this paper, we provide a new Chinese reading comprehension dataset, which has the following features. We also host the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017), which attracted over 30 participants; 17 of them submitted their evaluation systems for testing their reading comprehension models on our newly developed dataset, suggesting its potential impact. We hope that releasing the dataset to the public will accelerate the progress of the Chinese research community in the machine reading comprehension field. We also provide four official baselines for the evaluations, including two traditional baselines and two neural baselines. In this paper, we adopt two widely used neural reading comprehension models: AS Reader BIBREF3 and AoA Reader BIBREF4. The rest of the paper is organized as follows. In Section 2, we introduce related work on reading comprehension datasets; the proposed dataset as well as our competition are illustrated in Section 3. The baseline and participant system results are given in Section 4, and we conclude at the end of this paper.
400
Which sentiment analysis tasks are addressed?
12 binary-class classification tasks and multi-class classification of reviews based on rating
Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we find out that applying DIRL may harm domain adaptation when the label distribution $\rm{P}(\rm{Y})$ changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that it is easy to transfer existing SOTA DIRL models to WDIRL. Empirical studies on extensive cross-domain sentiment analysis tasks verified our statements and showed the effectiveness of our proposed solution.
Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponentially increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using source rich labeled data. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variable, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative to $\rm {Y}$. This will, in turn, harm the generation of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worthy of studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data clearing method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in the real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot previously align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of source domain labeled data $\rm {P}_S(\mathbf {Y})$, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulted from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. Based on $\mathbf {w}$, it resolves domain shift in two steps. 
In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as an example, respectively. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We proposed a novel method to address the problem and show how to incorporate it with existent DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts.
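The mechanical core of the proposed WDIRL modification is reweighting source examples by class so that the weighted source label distribution matches the target's. The sketch below shows that reweighting arithmetic with a fixed weight vector; in the actual framework the weight w is estimated jointly with training, since target labels are unavailable.

```python
# Sketch of the class-reweighting mechanics behind WDIRL: weight each source
# example by w[y] so the weighted source label distribution matches the target's.
# Here w is given explicitly; in the actual framework it is estimated jointly
# with the model, since target labels are unavailable.
import torch
import torch.nn.functional as F

def weighted_source_loss(logits, labels, class_weight):
    # Per-example cross-entropy, scaled by the weight of the example's class.
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (class_weight[labels] * per_example).mean()

# Toy numbers: source is 70% positive, target is assumed 50/50.
p_source = torch.tensor([0.3, 0.7])
p_target = torch.tensor([0.5, 0.5])
class_weight = p_target / p_source          # w[y] = P_T(y) / P_S(y)

logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
print(weighted_source_loss(logits, labels, class_weight))
```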
401
Which 5 languages appear most frequently in AA paper titles?
English, Chinese, French, Japanese and Arabic
The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP). This paper examines the literature as a whole to identify broad trends in productivity, focus, and impact. It presents the analyses in a sequence of questions and answers. The goal is to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations. Special emphasis is laid on the demographics and inclusiveness of NLP publishing. Notably, we find that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also show that, on average, female first authors are cited less than male first authors, even when controlling for experience. We hope that recording citation and participation gaps across demographic groups will encourage more inclusiveness and fairness in research.
The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP. This project, which we call NLP Scholar, examines the literature as a whole to identify broad trends in productivity, focus, and impact. We will present the analyses in a sequence of questions and answers. The questions range from fairly mundane to oh-that-will-be-good-to-know. Our broader goal here is simply to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations. We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender). Target Audience: The analyses presented here are likely to be of interest to any NLP researcher. This might be particularly the case for those that are new to the field and wish to get a broad overview of the NLP publishing landscape. On the other hand, even seasoned NLP'ers have likely wondered about the questions raised here and might be interested in the empirical evidence. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). Thus, all subsequent papers and citations are not included in the analysis. A fresh data collection is planned for January 2020. Interactive Visualizations: The visualizations we are developing for this work (using Tableau) are interactive—so one can hover, click to select and filter, move sliders, etc. Since this work is high in the number of visualizations, the main visualizations are presented as figures in the paper and some sets of visualizations are pointed to online. The interactive visualizations and data will be made available through the first author's website after peer review. Related Work: This work builds on past research, including that on Google Scholar BIBREF0, BIBREF1, BIBREF2, BIBREF3, on the analysis of NLP papers BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, on citation intent BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, and on measuring scholarly impact BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Caveats and Ethical Considerations: We list several caveats and limitations throughout the paper. A compilation of these is also available online in the About NLP Scholar page. The analyses presented here are also available as a series of blog posts.
402
How much F1 was improved after adding skip connections?
Simple Skip improves F1 from 74.34 to 74.81; Transformer Skip improves F1 from 74.34 to 74.95
In this work, we extend the Bidirectional Encoder Representations from Transformers (BERT) with an emphasis on directed coattention to obtain an improved F1 performance on the SQUAD2.0 dataset. The Transformer architecture on which BERT is based places hierarchical global attention on the concatenation of the context and query. Our additions to the BERT architecture augment this attention with a more focused context to query (C2Q) and query to context (Q2C) attention via a set of modified Transformer encoder units. In addition, we explore adding convolution-based feature extraction within the coattention architecture to add localized information to self-attention. We found that coattention significantly improves the no answer F1 by 4 points in the base and 1 point in the large architecture. After adding skip connections the no answer F1 improved further without causing an additional loss in has answer F1. The addition of localized feature extraction added to attention produced an overall dev F1 of 77.03 in the base architecture. We applied our findings to the large BERT model which contains twice as many layers and further used our own augmented version of the SQUAD 2.0 dataset created by back translation, which we have named SQUAD 2.Q. Finally, we performed hyperparameter tuning and ensembled our best models for a final F1/EM of 82.317/79.442 (Attention on Steroids, PCE Test Leaderboard).
Through this CS224N Pre-trained Contextual Embeddings (PCE) project, we tackle the question answering problem, which is one of the most popular in NLP and has been brought to the forefront by datasets such as SQUAD 2.0. This problem's popularity stems from both the challenge it presents and the recent successes in approaching human-level performance. As most, if not all, of the problems humans solve every day can be posed as a question, creating a deep-learning-based solution that has access to the entire internet is a critical milestone for NLP. Through our project, our group tested the limits of applying attention in BERT BIBREF0 to improve the network's performance on the SQUAD2.0 dataset BIBREF1. BERT applies attention to the concatenation of the query and context vectors and thus attends these vectors in a global fashion. We propose BERTQA BIBREF2, which adds Context-to-Query (C2Q) and Query-to-Context (Q2C) attention in addition to localized feature extraction via 1D convolutions. We implemented the additions ourselves, while the Pytorch baseline BERT code was obtained from BIBREF3. SQUAD2.0 answers range in length from zero words to multiple words, and this additional attention provides hierarchical information that allows the network to better learn to detect answer spans of varying sizes. We applied the empirical findings from this part of our project to the large BERT model, which has twice as many layers as the base BERT model. We also augmented the SQUAD2.0 dataset with additional backtranslated examples. This augmented dataset will be publicly available on our github BIBREF4 on the completion of this course. After performing hyperparameter tuning, we ensembled our two best networks to get F1 and EM scores of 82.317 and 79.442 respectively. The experiments took around 300 GPU hours to train.
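The F1 gains quoted in the answer above come from adding skip connections around the extra coattention layers. The sketch below shows a generic residual connection wrapped around an attention sub-layer in PyTorch; it illustrates the idea only and is not the exact BERTQA wiring.

```python
# Generic sketch of the "skip connection around an added attention layer" idea:
# the block's output is attention(output) + its input, so the extra layers can
# fall back to the identity. This is not the exact BERTQA wiring.
import torch
import torch.nn as nn

class ResidualCoattention(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, query_states, context_states):
        # Query-to-context attention with a residual path back to the query input.
        attended, _ = self.attn(query_states, context_states, context_states)
        return self.norm(query_states + attended)   # the skip connection

q = torch.randn(2, 16, 768)   # e.g. query token states
c = torch.randn(2, 128, 768)  # e.g. context token states
print(ResidualCoattention(768)(q, c).shape)
```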
404
How much gain does the model achieve with pretraining MVCNN?
0.8 points on Binary; 0.7 points on Fine-Grained; 0.6 points on Senti140; 0.7 points on Subj
We propose MVCNN, a convolution neural network (CNN) architecture for sentence classification. It (i) combines diverse versions of pretrained word embeddings and (ii) extracts features of multigranular phrases with variable-size convolution filters. We also show that pretraining MVCNN is critical for good performance. MVCNN achieves state-of-the-art performance on four tasks: small-scale binary, small-scale multi-class, and large-scale Twitter sentiment prediction, and subjectivity classification.
Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the features of component units, including single words and variable-size phrases, is central to the sentence classification problem. In recent years, deep learning models have achieved remarkable results in computer vision BIBREF0 , speech recognition BIBREF1 and NLP BIBREF2 . A problem largely specific to NLP is how to detect features of linguistic units, how to conduct composition over variable-size sequences and how to use them for NLP tasks BIBREF3 , BIBREF4 , BIBREF5 . socher2011dynamic proposed recursive neural networks to form phrases based on parsing trees. This approach depends on the availability of a well performing parser; for many languages and domains, especially noisy domains, reliable parsing is difficult. Hence, convolution neural networks (CNN) are getting increasing attention, for they are able to model long-range dependencies in sentences via hierarchical structures BIBREF6 , BIBREF5 , BIBREF7 . Current CNN systems usually implement a convolution layer with fixed-size filters (i.e., feature detectors), in which the concrete filter size is a hyperparameter. They essentially split a sentence into multiple sub-sentences by a sliding window, then determine the sentence label by using the dominant label across all sub-sentences. The underlying assumption is that the sub-sentence with that granularity is potentially good enough to represent the whole sentence. However, it is hard to find the granularity of a “good sub-sentence” that works well across sentences. This motivates us to implement variable-size filters in a convolution layer in order to extract features of multigranular phrases. Breakthroughs of deep learning in NLP are also based on learning distributed word representations – also called “word embeddings” – by neural language models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Word embeddings are derived by projecting words from a sparse, 1-of- $V$ encoding ( $V$ : vocabulary size) onto a lower dimensional and dense vector space via hidden layers and can be interpreted as feature extractors that encode semantic and syntactic features of words. Many papers study the comparative performance of different versions of word embeddings, usually learned by different neural network (NN) architectures. For example, chen2013expressive compared HLBL BIBREF9 , SENNA BIBREF2 , Turian BIBREF13 and Huang BIBREF14 , showing great variance in quality and characteristics of the semantics captured by the tested embedding versions. hill2014not showed that embeddings learned by neural machine translation models outperform three representative monolingual embedding versions: skip-gram BIBREF15 , GloVe BIBREF16 and C&W BIBREF3 in some cases. These prior studies motivate us to explore combining multiple versions of word embeddings, treating each of them as a distinct description of words. Our expectation is that the combination of these embedding versions, trained by different NNs on different corpora, should contain more information than each version individually. We want to leverage this diversity of different embedding versions to extract higher quality sentence features and thereby improve sentence classification performance. 
The letters “M” and “V” in the name “MVCNN” of our architecture denote the multichannel and variable-size convolution filters, respectively. “Multichannel” employs language from computer vision where a color image has red, green and blue channels. Here, a channel is a description by an embedding version. For many sentence classification tasks, only relatively small training sets are available. MVCNN has a large number of parameters, so that overfitting is a danger when they are trained on small training sets. We address this problem by pretraining MVCNN on unlabeled data. These pretrained weights can then be fine-tuned for the specific classification task. In sum, we attribute the success of MVCNN to: (i) designing variable-size convolution filters to extract variable-range features of sentences and (ii) exploring the combination of multiple public embedding versions to initialize words in sentences. We also employ two “tricks” to further enhance system performance: mutual learning and pretraining. In remaining parts, Section "Related Work" presents related work. Section "Model Description" gives details of our classification model. Section "Model Enhancements" introduces two tricks that enhance system performance: mutual-learning and pretraining. Section "Experiments" reports experimental results. Section "Conclusion" concludes this work.
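The variable-size convolution idea can be pictured as parallel convolution branches with different filter widths whose pooled outputs are concatenated. The PyTorch sketch below shows that skeleton for a single embedding channel; the full MVCNN additionally stacks multiple embedding versions as channels and uses mutual learning and pretraining, which are omitted here.

```python
# Simplified sketch of the variable-size convolution idea: parallel Conv1d
# branches with different filter widths, each max-pooled over time and then
# concatenated. Multiple embedding "channels" and pretraining are omitted.
import torch
import torch.nn as nn

class VariableSizeConv(nn.Module):
    def __init__(self, embed_dim=300, n_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(embed_dim, n_filters, k, padding=k - 1) for k in kernel_sizes]
        )

    def forward(self, embedded):            # (batch, seq_len, embed_dim)
        x = embedded.transpose(1, 2)        # Conv1d expects (batch, dim, seq_len)
        pooled = [branch(x).max(dim=-1).values for branch in self.branches]
        return torch.cat(pooled, dim=-1)    # (batch, n_filters * len(kernel_sizes))

sentences = torch.randn(4, 20, 300)          # fake embedded sentences
print(VariableSizeConv()(sentences).shape)   # torch.Size([4, 300])
```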
405
What is the highest accuracy score achieved?
82.0%
Natural Language Inference is an important task for Natural Language Understanding. It is concerned with classifying the logical relation between two sentences. In this paper, we propose several text generative neural networks for generating text hypothesis, which allows construction of new Natural Language Inference datasets. To evaluate the models, we propose a new metric -- the accuracy of the classifier trained on the generated dataset. The accuracy obtained by our best generative model is only 2.7% lower than the accuracy of the classifier trained on the original, human crafted dataset. Furthermore, the best generated dataset combined with the original dataset achieves the highest accuracy. The best model learns a mapping embedding for each training example. By comparing various metrics we show that datasets that obtain higher ROUGE or METEOR scores do not necessarily yield higher classification accuracies. We also provide analysis of what are the characteristics of a good dataset including the distinguishability of the generated datasets from the original one.
The challenge in Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is to correctly decide whether a sentence (referred to as a premise) entails or contradicts or is neutral in respect to another sentence (a hypothesis). This classification task requires various natural language comprehension skills. In this paper, we are focused on the following natural language generation task based on NLI. Given the premise the goal is to generate a stream of hypotheses that comply with the label (entailment, contradiction or neutral). In addition to reading capabilities this task also requires language generation capabilities. The Stanford Natural Language Inference (SNLI) Corpus BIBREF0 is a NLI dataset that contains over a half a million examples. The size of the dataset is sufficient to train powerful neural networks. Several successful classification neural networks have already been proposed BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . In this paper, we utilize SNLI to train generative neural networks. Each example in the dataset consist of two human-written sentences, a premise and a hypothesis, and a corresponding label that describes the relationship between them. Few examples are presented in Table TABREF1 . The proposed generative networks are trained to generate a hypothesis given a premise and a label, which allow us to construct new, unseen examples. Some generative models are build to generate a single optimal response given the input. Such models have been applied to machine translation BIBREF5 , image caption generation BIBREF6 , or dialogue systems BIBREF7 . Another type of generative models are autoencoders that generate a stream of random samples from the original distribution. For instance, autoencoders have been used to generate text BIBREF8 , BIBREF9 , and images BIBREF10 . In our setting we combine both approaches to generate a stream of random responses (hypotheses) that comply with the input (premise, label). But what is a good stream of hypotheses? We argue that a good stream contains diverse, comprehensible, accurate and non-trivial hypotheses. A hypothesis is comprehensible if it is grammatical and semantically makes sense. It is accurate if it clearly expresses the relationship (signified by the label) with the premise. Finally, it is non-trivial if it is not trivial to determine the relationship (label) between the hypothesis and premise. For instance, given a premise ”A man drives a red car” and label entailment, the hypothesis ”A man drives a car” is more trivial than ”A person is sitting in a red vehicle”. The next question is how to automatically measure the quality of generated hypotheses. One way is to use metrics that are standard in text generation tasks, for instance ROUGE BIBREF11 , BLEU BIBREF12 , METEOR BIBREF13 . These metrics estimate the similarity between the generated text and the original reference text. In our task they can be used by comparing the generated and reference hypotheses with the same premise and label. The main issue of these metrics is that they penalize the diversity since they penalize the generated hypotheses that are dissimilar to the reference hypothesis. An alternative metric is to use a NLI classifier to test the generated hypothesis if the input label is correct in respect to the premise. A perfect classifier would not penalize diverse hypotheses and would reward accurate and (arguably to some degree) comprehensible hypotheses. However, it would not reward non-trivial hypotheses. 
Non-trivial examples are essential in a dataset for training a capable machine learning model. Furthermore, we make the following hypothesis. A good dataset for training a NLI classifier consists of a variety of accurate, non-trivial and comprehensible examples. Based on this hypothesis, we propose the following approach for evaluation of generative models, which is also presented in Figure FIGREF2 . First, the generative model is trained on the original training dataset. Then, the premise and label from an example in the original dataset are taken as the input to the generative model to generate a new random hypothesis. The generated hypothesis is combined with the premise and the label to form a new unseen example. This is done for every example in the original dataset to construct a new dataset. Next, a classifier is trained on the new dataset. Finally, the classifier is evaluated on the original test set. The accuracy of the classifier is the proposed quality metric for the generative model. It can be compared to the accuracy of the classifier trained on the original training set and tested on the original test set. The generative models learn solely from the original training set to regenerate the dataset. Thus, the model learns the distribution of the original dataset. Furthermore, the generated dataset is just a random sample from the estimated distribution. To determine how well did the generative model learn the distribution, we observe how close does the accuracy of the classifier trained on the generated dataset approach the accuracy of classifier trained on the original dataset. Our flagship generative network EmbedDecoder works in a similar fashion as the encoder-decoder networks, where the encoder is used to transform the input into a low-dimensional latent representation, from which the decoder reconstructs the input. The difference is that EmbedDecoder consists only of the decoder, and the latent representation is learned as an embedding for each training example separately. In our models, the latent representation represents the mapping between the premise and the label on one side and the hypothesis on the other side. Our main contributions are i) a novel generative neural network, which consist of the decoder that learns a mapping embedding for each training example separately, ii) a procedure for generating NLI datasets automatically, iii) and a novel evaluation metric for NLI generative models – the accuracy of the classifier trained on the generated dataset. In Section SECREF2 we present the related work. In Section SECREF3 the considered neural networks are presented. Besides the main generative networks, we also present classification and discriminative networks, which are used for evaluation. The results are presented in Section SECREF5 , where the generative models are evaluated and compared. From the experiments we can see that the best dataset was generated by the attention-based model EmbedDecoder. The classifier on this dataset achieved accuracy of INLINEFORM0 , which is INLINEFORM1 less than the accuracy achieved on the original dataset. We also investigate the influence of latent dimensionality on the performance, compare different evaluation metrics, and provide deeper insights of the generated datasets. The conclusion is presented in Section SECREF6 .
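The proposed evaluation metric trains a classifier on the regenerated dataset and reports its accuracy on the original test set. The sketch below lays that loop out at a high level; gen_model, train_classifier_fn, and accuracy_fn are hypothetical stand-ins supplied by the caller, not part of the paper's code.

```python
# High-level sketch of the proposed evaluation metric: regenerate the dataset
# with the generative model, train a classifier on it, and report that
# classifier's accuracy on the original test set. All callables passed in
# (gen_model, train_classifier_fn, accuracy_fn) are hypothetical stand-ins.
def evaluate_generative_model(gen_model, train_set, test_set,
                              train_classifier_fn, accuracy_fn):
    # 1) Regenerate a hypothesis for every (premise, label) pair in the training set.
    generated_set = [
        {"premise": ex["premise"],
         "label": ex["label"],
         "hypothesis": gen_model.generate(ex["premise"], ex["label"])}
        for ex in train_set
    ]
    # 2) Train a standard NLI classifier on the generated dataset.
    classifier = train_classifier_fn(generated_set)
    # 3) The metric is its accuracy on the *original* test set.
    return accuracy_fn(classifier, test_set)
```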
406
What are the three datasets used in the paper?
The data released for the APDA shared task contains three datasets.
We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model on each of the three datasets with shared task released data. Then we augment shared task data with in-house data for gender and dialect, showing the utility of augmenting training data. Our best models on the shared task test data are acquired with a majority voting of various BERT models trained under different data conditions. We acquire 54.72% accuracy for age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across the three tasks.
The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). The availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers. In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude.
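The best submissions were obtained by majority voting over several BERT models trained under different data conditions. The sketch below shows per-example majority voting over prediction lists; the labels are placeholders, and ties fall back to the label encountered first.

```python
# Sketch of the majority-voting step over predictions from several fine-tuned
# BERT models. Labels below are placeholders; ties fall back to the label that
# reaches the top count first.
from collections import Counter

def majority_vote(predictions_per_model):
    """predictions_per_model: list of lists, one label list per model."""
    voted = []
    for labels in zip(*predictions_per_model):
        voted.append(Counter(labels).most_common(1)[0][0])
    return voted

model_a = ["male", "female", "female"]
model_b = ["male", "male", "female"]
model_c = ["female", "male", "female"]
print(majority_vote([model_a, model_b, model_c]))  # ['male', 'male', 'female']
```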
409
What is improvement in accuracy for short Jokes in relation other types of jokes?
It had the highest accuracy across all datasets (98.6%) and the largest improvement over previous methods on the same dataset (8%)
Much previous work has been done in attempting to identify humor in text. In this paper we extend that capability by proposing a new task: assessing whether or not a joke is humorous. We present a novel way of approaching this problem by building a model that learns to identify humorous jokes based on ratings gleaned from Reddit pages, consisting of almost 16,000 labeled instances. Using these ratings to determine the level of humor, we then employ a Transformer architecture for its advantages in learning from sentence context. We demonstrate the effectiveness of this approach and show results that are comparable to human performance. We further demonstrate our model's increased capabilities on humor identification problems, such as the previously created datasets for short jokes and puns. These experiments show that this method outperforms all previous work done on these tasks, with an F-measure of 93.1% for the Puns dataset and 98.6% on the Short Jokes dataset.
Recent advances in natural language processing and neural network architecture have allowed for widespread application of these methods in Text Summarization BIBREF0, Natural Language Generation BIBREF1, and Text Classification BIBREF2. Such advances have enabled scientists to study common language practices. One such area, humor, has garnered focus in classification BIBREF3, BIBREF4, generation BIBREF5, BIBREF6, and in social media BIBREF7. The next question then is, what makes a joke humorous? Although humor is a universal construct, there is a wide variety between what each individual may find humorous. We attempt to focus on a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread. This forum is highly popular - with tens of thousands of jokes being posted monthly and over 16 million members. Although larger joke datasets exist, the r/Jokes thread is unparalleled in the amount of rated jokes it contains. To the best of our knowledge there is no comparable source of rated jokes in any other language. These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. Although this type of humor may only be most enjoyable to a subset of the population, it is an effective way to measure responses to jokes in a large group setting. What enables us to perform such an analysis are the recent improvements in neural network architecture for natural language processing. These breakthroughs started with the Convolutional Neural Network BIBREF8 and have recently included the inception BIBREF9 and progress of the Attention mechanism BIBREF10, BIBREF11, and the Transformer architecture BIBREF12.
411
How did they detect entity mentions?
Exact matches to the entity string and predictions from a coreference resolution system
Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. We introduce a neural model which integrates and reasons relying on information spread within documents and across multiple documents. We frame it as an inference problem on a graph. Mentions of entities are nodes of this graph while edges encode relations between different mentions (e.g., within-and crossdocument coreference). Graph convolutional networks (GCNs) are applied to these graphs and trained to perform multi-step reasoning. Our Entity-GCN method is scalable and compact, and it achieves state-of-the-art results on a multi-document question answering dataset, WIKIHOP (Welbl et al., 2018).
The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as a NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , have been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance. Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries. Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning. Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 . The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders. 
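The following is a minimal, generic sketch of one graph-convolution step over a mention graph, intended only to illustrate the kind of message passing described above; it is not the exact Entity-GCN parameterisation, which among other things distinguishes edge types. Sizes and the random graph are illustrative.

```python
# One graph-convolution step over a graph of entity mentions: each node
# averages its neighbours' representations, applies a learned projection,
# and adds a transformed self-loop.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_neigh = nn.Linear(dim, dim)
        self.w_self = nn.Linear(dim, dim)

    def forward(self, node_states, adjacency):
        # adjacency: (n_nodes, n_nodes) 0/1 matrix of mention-mention edges.
        degree = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = adjacency @ node_states / degree        # mean over neighbours
        return torch.relu(self.w_neigh(neigh) + self.w_self(node_states))

# Toy usage: 5 mention nodes with 64-dim contextual embeddings (e.g., ELMo).
nodes = torch.randn(5, 64)
adj = (torch.rand(5, 5) > 0.5).float()
layer = SimpleGCNLayer(64)
print(layer(nodes, adj).shape)  # torch.Size([5, 64])
```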
In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders. Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously published results. As our model is efficient, we also report results for an ensemble, which brings a further 3.6% improvement and is only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:
414
What document context was added?
Preceding and following sentence of each metaphor and paraphrase are added as document context
We conduct two experiments to study the effect of context on metaphor paraphrase aptness judgments. The first is an AMT crowd source task in which speakers rank metaphor paraphrase candidate sentence pairs in short document contexts for paraphrase aptness. In the second we train a composite DNN to predict these human judgments, first in binary classifier mode, and then as gradient ratings. We found that for both mean human judgments and our DNN's predictions, adding document context compresses the aptness scores towards the center of the scale, raising low out of context ratings and decreasing high out of context scores. We offer a provisional explanation for this compression effect.
A metaphor is a way of forcing the normal boundaries of a word's meaning in order to better express an experience, a concept or an idea. To a native speaker's ear some metaphors sound more conventional (like the usage of the words ear and sound in this sentence), others more original. This is not the only dimension along which to judge a metaphor. One of the most important qualities of a metaphor is its appropriateness, its aptness: how good is a metaphor for conveying a given experience or concept. While a metaphor's degree of conventionality can be measured through probabilistic methods, like language models, it is harder to represent its aptness. BIBREF0 define aptness as “the extent to which a comparison captures important features of the topic". It is possible to express an opinion about some metaphors' and similes' aptness (at least to a degree) without previously knowing what they are trying to convey, or the context in which they appear. For example, we don't need a particular context or frame of reference to construe the simile She was screaming like a turtle as strange, and less apt for expressing the quality of a scream than She was screaming like a banshee. In this case, the reason why the simile in the second sentence works best is intuitive. A salient characteristic of a banshee is a powerful scream. Turtles are not known for screaming, and so it is harder to define the quality of a scream through such a comparison, except as a form of irony. Other cases are more complicated to decide upon. The simile crying like a fire in the sun (It's All Over Now, Baby Blue, Bob Dylan) is powerfully apt for many readers, but simply odd for others. Fire and sun are not known to cry in any way. But at the same time the simile can capture the association we draw between something strong and intense in other senses - vision, touch, etc. - and a loud cry. Nonetheless, most metaphors and similes need some kind of context, or external reference point to be interpreted. The sentence The old lady had a heart of stone is apt if the old lady is cruel or indifferent, but it is inappropriate as a description of a situation in which the old lady is kind and caring. We assume that, to an average reader's sensibility, the sentence models the situation in a satisfactory way only in the first case. This is the approach to metaphor aptness that we assume in this paper. Following BIBREF3 , we treat a metaphor as apt in relation to a literal expression that it paraphrases. If the metaphor is judged to be a good paraphrase, then it closely expresses the core information of the literal sentence through its metaphorical shift. We refer to the prediction of readers' judgments on the aptness candidates for the literal paraphrase of a metaphor as the metaphor paraphrase aptness task (MPAT). BIBREF3 address the MPAT by using Amazon Mechanical Turk (AMT) to obtain crowd sourced annotations of metaphor-paraphrase candidate pairs. They train a composite Deep Neural Network (DNN) on a portion of their annotated corpus, and test it on the remaining part. Testing involves using the DNN as a binary classifier on paraphrase candidates. They derive predictions of gradient paraphrase aptness for their test set, and assess them by Pearson coefficient correlation to the mean judgments of their crowd sourced annotation of this set. Both training and testing are done independently of any document context for the metaphorical sentence and its literal paraphrase candidates. 
In this paper we study the role of context on readers' judgments concerning the aptness of metaphor paraphrase candidates. We look at the accuracy of BIBREF3 's DNN when trained and tested on contextually embedded metaphor-paraphrase pairs for the MPAT. In Section SECREF2 we describe an AMT experiment in which annotators judge metaphors and paraphrases embodied in small document contexts, and in Section SECREF3 we discuss the results of this experiment. In Section SECREF4 we describe our MPAT modeling experiment, and in Section SECREF5 we discuss the results of this experiment. Section SECREF6 briefly surveys some related work. In Section SECREF7 we draw conclusions from our study, and we indicate directions for future work in this area.
415
What is the performance of their model?
Answer with content missing: (Table II) Proposed model has F1 score of 0.7220.
Drug-drug interaction (DDI) is vital information when physicians and pharmacists intend to co-administer two or more drugs. Thus, several DDI databases have been constructed to avoid mistakenly combined use. In recent years, automatically extracting DDIs from biomedical text has drawn researchers' attention. However, the existing work utilizes either complex feature engineering or NLP tools, both of which are insufficient for sentence comprehension. Inspired by the deep learning approaches in natural language processing, we propose a recurrent neural network model with multiple attention layers for DDI classification. We evaluate our model on the 2013 SemEval DDIExtraction dataset. The experiments show that our model classifies most of the drug pairs into correct DDI categories, which outperforms the existing NLP or deep learning methods.
Drug-drug interaction (DDI) is a situation in which one drug increases or decreases the effect of another drug BIBREF0 . Adverse drug reactions may cause severe side effects if two or more medicines are taken and their DDIs are not investigated in detail. DDI is a common cause of illness, and even a cause of death BIBREF1 . Thus, DDI databases for clinical medication decisions have been proposed by some researchers. These databases, such as SFINX BIBREF2 , KEGG BIBREF3 , and CredibleMeds BIBREF4 , help physicians and pharmacists avoid most adverse drug reactions. Traditional DDI databases are manually constructed according to clinical records, scientific research and drug specifications. For instance, the sentence “With combined use, clinicians should be aware, when phenytoin is added, of the potential for reexacerbation of pulmonary symptomatology due to lowered serum theophylline concentrations BIBREF5 ”, which is from a pharmacotherapy report, describes the side effect of phenytoin and theophylline's combined use. This information on specific medicines is then added to DDI databases. As drug-drug interactions are being found at an increasing rate, manually constructing DDI databases would consume a lot of manpower and resources. There have been many efforts to automatically extract DDIs from natural language BIBREF0 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , mainly medical literature and clinical records. These works can be divided into the following categories: To avoid complex feature engineering and the use of NLP toolkits, we employ deep learning approaches for sentence comprehension as a whole. Our model takes in a sentence from the biomedical literature which contains a drug pair and outputs which DDI category this drug pair belongs to. This helps physicians refrain from improper combined use of drugs. In addition, word- and sentence-level attentions are introduced to our model for better DDI predictions. We train our language comprehension model with labeled instances. Figure FIGREF5 shows partial records in the DDI corpus BIBREF16 . We extract the sentence and drug pairs in the records. There are 3 drug pairs in this example, thus we have 3 instances. The DDI corpus annotates each drug pair in the sentence with a DDI type. The DDI type, which is the information of most concern, is described in Table TABREF4 . The details of how we train our model and extract the DDI type from text are described in the remaining sections.
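To make the word-level attention concrete, here is a hedged sketch of attention over a BiLSTM sentence encoding followed by a softmax over DDI types; the dimensions, the number of DDI categories, and the omission of sentence-level attention are simplifications, not the paper's exact architecture.

```python
# Word-level attention over a BiLSTM encoding of a sentence containing a drug
# pair; the attended sentence vector feeds a classifier over DDI types.
import torch
import torch.nn as nn

class AttentiveDDIClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_types=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)      # scores each word
        self.out = nn.Linear(2 * hidden, n_types)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))                   # (B, T, 2H)
        weights = torch.softmax(self.att(h).squeeze(-1), dim=1)   # (B, T)
        sentence = (weights.unsqueeze(-1) * h).sum(dim=1)         # (B, 2H)
        return self.out(sentence)                                 # DDI logits

model = AttentiveDDIClassifier(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 20)))  # batch of 2 sentences
print(logits.shape)  # torch.Size([2, 5])
```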
416
How do they damage different neural modules?
Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information.
The meaning of a natural language utterance is largely determined from its syntax and words. Additionally, there is evidence that humans process an utterance by separating knowledge about the lexicon from syntax knowledge. Theories from semantics and neuroscience claim that complete word meanings are not encoded in the representation of syntax. In this paper, we propose neural units that can enforce this constraint over an LSTM encoder and decoder. We demonstrate that our model achieves competitive performance across a variety of domains including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. In these cases, our model outperforms the standard LSTM encoder and decoder architecture on many or all of our metrics. To demonstrate that our model achieves the desired separation between the lexicon and syntax, we analyze its weights and explore its behavior when different neural modules are damaged. When damaged, we find that the model displays the knowledge distortions that aphasics are evidenced to have.
Studies of Broca's and Wernicke's aphasia provide evidence that our brains understand an utterance by creating separate representations for word meaning and word arrangement BIBREF0. There is a related thesis about human language, present across many theories of semantics, which is that syntactic categories are partially agnostic to the identity of words BIBREF1. This regularity in how humans derive meaning from an utterance is applicable to the task of natural language translation. This is because, by definition, translation necessitates the creation of a meaning representation for an input. According to the cognitive and neural imperative, we introduce new units to regularize an artificial neural encoder and decoder BIBREF2. These are called the Lexicon and Lexicon-Adversary units (collectively, LLA). Tests are done on a diagnostic task, and naturalistic tasks including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We evaluate a Long Short-Term Memory (LSTM) BIBREF3 encoder and decoder, with and without the LLA units, and show that the LLA version achieves superior translation performance. In addition, we examine our model's weights, and its performance when some of its neurons are damaged. We find that the model exhibits the knowledge and the lack thereof expected of a Broca's aphasic BIBREF0 when one module's weights are corrupted. It also exhibits that expected of a Wernicke's aphasic BIBREF0 when another module's weights are corrupted.
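A minimal sketch of the "damage" operation described above, assuming a PyTorch implementation: the weights of one sub-module are randomly re-initialised while the rest of the network is left untouched. The module names are illustrative stand-ins, not the paper's actual components.

```python
# Simulate damage to one neural module by randomly re-initialising its
# weights, which discards everything that module has learned.
import torch
import torch.nn as nn

def damage_module(module: nn.Module, std: float = 0.1) -> None:
    # Overwrite every parameter of the chosen sub-module in place.
    with torch.no_grad():
        for param in module.parameters():
            param.normal_(mean=0.0, std=std)

# Toy model with two named sub-modules standing in for separate components.
model = nn.ModuleDict({
    "lexicon": nn.Linear(32, 32),
    "syntax": nn.LSTM(32, 32, batch_first=True),
})
damage_module(model["lexicon"])   # damage one component, leave the other intact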
418
What are the sources of the data?
User reviews written in Chinese collected online for hotel, mobile phone, and travel domains
Sentiment analysis is a key component in various text mining applications. Numerous sentiment classification techniques, including conventional and deep learning-based methods, have been proposed in the literature. In most existing methods, a high-quality training set is assumed to be given. Nevertheless, constructing a high-quality training set that consists of highly accurate labels is challenging in real applications. This difficulty stems from the fact that text samples usually contain complex sentiment representations, and their annotation is subjective. We address this challenge in this study by leveraging a new labeling strategy and utilizing a two-level long short-term memory network to construct a sentiment classifier. Lexical cues are useful for sentiment analysis, and they have been utilized in conventional studies. For example, polar and privative words play important roles in sentiment analysis. A new encoding strategy, that is, $\rho$-hot encoding, is proposed to alleviate the drawbacks of one-hot encoding and thus effectively incorporate useful lexical cues. We compile three Chinese data sets on the basis of our label strategy and proposed methodology. Experiments on the three data sets demonstrate that the proposed method outperforms state-of-the-art algorithms.
Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to input texts, namely, sentence, paragraph, and document levels. This study focuses on sentence and paragraph levels. Text sentiment analysis is usually considered a text classification problem. Almost all existing text classification techniques are applied to text sentiment analysis BIBREF0 . Typical techniques include bag-of-words (BOW)-based BIBREF1 , deep learning-based BIBREF2 , and lexicon-based (or rule-based) methods BIBREF3 . Although many achievements have been made and sentiment analysis has been successfully used in various commercial applications, its accuracy can be further improved. The construction of a high-accuracy sentiment classification model usually entails the challenging compilation of training sets with numerous samples and sufficiently accurate labels. The reason behind this difficulty is two-fold. First, sentiment is somewhat subjective, and a sample may receive different labels from different users. Second, some texts contain complex sentiment representations, and a single label is difficult to provide. We conduct a statistical analysis of public Chinese sentiment text sets in GitHub. The results show that the average label error is larger than 10%. This error value reflects the degree of difficulty of sentiment labeling. Privative and interrogative sentences are difficult to classify when deep learning-based methods are applied. Although lexicon-based methods can deal with particular types of privative sentences, their generalization capability is poor. We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods. Three Chinese sentiment data sets are compiled to investigate the performance of the proposed methodology. The experimental results demonstrate the effectiveness of the proposed methods. Our work is new in the following aspects. The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes our methodology. Section 4 reports the experimental results, and Section 5 concludes the study.
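A minimal sketch of the two-level LSTM idea, assuming a PyTorch implementation: a word-level LSTM produces one vector per sentence, and a sentence-level LSTM composes those vectors for classification. It omits the lexicon (ρ-hot) embeddings and all other details of the proposed model; sizes are illustrative.

```python
# Two-level LSTM: word-level encoding per sentence, then sentence-level
# composition into a paragraph representation for sentiment classification.
import torch
import torch.nn as nn

class TwoLevelLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.sent_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):
        # token_ids: (batch, n_sentences, n_words)
        b, s, w = token_ids.shape
        words = self.embed(token_ids.view(b * s, w))
        _, (h_word, _) = self.word_lstm(words)           # (1, b*s, hidden)
        sent_vecs = h_word[-1].view(b, s, -1)            # one vector per sentence
        _, (h_sent, _) = self.sent_lstm(sent_vecs)
        return self.out(h_sent[-1])                      # sentiment logits

model = TwoLevelLSTM(vocab_size=5000)
print(model(torch.randint(0, 5000, (3, 4, 12))).shape)  # torch.Size([3, 2])
```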
420
How are their changes evaluated?
The changes are evaluated based on the accuracy of intent and entity recognition on the SNIPS dataset
As spoken dialogue systems and chatbots are gaining more widespread adoption, commercial and open-sourced services for natural language understanding are emerging. In this paper, we explain how we altered the open-source RASA natural language understanding pipeline to process incrementally (i.e., word-by-word), following the incremental unit framework proposed by Schlangen and Skantze. To do so, we altered existing RASA components to process incrementally, and added an update-incremental intent recognition model as a component to RASA. Our evaluations on the Snips dataset show that our changes allow RASA to function as an effective incremental natural language understanding service.
There is no shortage of services that are marketed as natural language understanding (nlu) solutions for use in chatbots, digital personal assistants, or spoken dialogue systems (sds). Recently, Braun2017 systematically evaluated several such services, including Microsoft LUIS, IBM Watson Conversation, API.ai, wit.ai, Amazon Lex, and RASA BIBREF0 . More recently, Liu2019b evaluated LUIS, Watson, RASA, and DialogFlow using some established benchmarks. Some nlu services work better than others in certain tasks and domains with a perhaps surprising pattern: RASA, the only fully open-source nlu service among those evaluated, consistently performs on par with the commercial services. Though these services yield state-of-the-art performance on a handful of nlu tasks, one drawback to sds and robotics researchers is the fact that all of these nlu solutions process input at the utterance level; none of them process incrementally at the word-level. Yet, research has shown that humans comprehend utterances as they unfold BIBREF1 . Moreover, when a listener feels they are missing some crucial information mid-utterance, they can interject with a clarification request, so as to ensure they and the speaker are maintaining common ground BIBREF2 . Users who interact with sdss perceive incremental systems as being more natural than traditional, turn-based systems BIBREF3 , BIBREF4 , BIBREF5 , offer a more human-like experience BIBREF6 and are more satisfying to interact with than non-incremental systems BIBREF7 . Users even prefer interacting with an incremental sds when the system is less accurate or requires filled pauses while replying BIBREF8 or operates in a limited domain as long as there is incremental feedback BIBREF9 . In this paper, we report our recent efforts in making the RASA nlu pipeline process incrementally. We explain briefly the RASA framework and pipeline, explain how we altered the RASA framework and individual components (including a new component which we added) to allow it to process incrementally, then we explain how we evaluated the system to ensure that RASA works as intended and how researchers can leverage this tool.
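As a rough sketch of what word-by-word processing looks like, the snippet below keeps a recurrent state across words and reads out an intent distribution after every new word; it is not the RASA implementation or the update-incremental model used in the paper, and all sizes are illustrative.

```python
# Incremental (word-by-word) intent recognition: a GRU cell carries state
# across words, and an intent distribution is available after each word
# rather than only at the end of the utterance.
import torch
import torch.nn as nn

class IncrementalIntentModel(nn.Module):
    def __init__(self, vocab_size, n_intents, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.GRUCell(emb_dim, hidden)
        self.out = nn.Linear(hidden, n_intents)
        self.hidden_size = hidden

    def init_state(self):
        return torch.zeros(1, self.hidden_size)

    def step(self, word_id, state):
        # Consume one word; return updated state and current intent probabilities.
        state = self.cell(self.embed(word_id), state)
        return state, torch.softmax(self.out(state), dim=-1)

model = IncrementalIntentModel(vocab_size=1000, n_intents=7)
state = model.init_state()
for word_id in torch.randint(0, 1000, (5,)):          # a 5-word utterance
    state, intent_probs = model.step(word_id.view(1), state)
    print(intent_probs.argmax(dim=-1).item())
```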
421
What are the six target languages?
Answer with content missing: (3 Experimental Setup) We experiment with six target languages: French (FR), Brazilian Portuguese (PT), Italian (IT), Polish (PL), Croatian (HR), and Finnish (FI).
Existing approaches to automatic VerbNet-style verb classification are heavily dependent on feature engineering and therefore limited to languages with mature NLP pipelines. In this work, we propose a novel cross-lingual transfer method for inducing VerbNets for multiple languages. To the best of our knowledge, this is the first study which demonstrates how the architectures for learning word embeddings can be applied to this challenging syntactic-semantic task. Our method uses cross-lingual translation pairs to tie each of the six target languages into a bilingual vector space with English, jointly specialising the representations to encode the relational information from English VerbNet. A standard clustering algorithm is then run on top of the VerbNet-specialised representations, using vector dimensions as features for learning verb classes. Our results show that the proposed cross-lingual transfer approach sets new state-of-the-art verb classification performance across all six target languages explored in this work.
Playing a key role in conveying the meaning of a sentence, verbs are famously complex. They display a wide range of syntactic-semantic behaviour, expressing the semantics of an event as well as relational information among its participants BIBREF0 , BIBREF1 , BIBREF2 . Lexical resources which capture the variability of verbs are instrumental for many Natural Language Processing (NLP) applications. One of the richest verb resources currently available for English is VerbNet BIBREF3 , BIBREF4 . Based on the work of Levin Levin:1993book, this largely hand-crafted taxonomy organises verbs into classes on the basis of their shared syntactic-semantic behaviour. Providing a useful level of generalisation for many NLP tasks, VerbNet has been used to support semantic role labelling BIBREF5 , BIBREF6 , semantic parsing BIBREF7 , word sense disambiguation BIBREF8 , discourse parsing BIBREF9 , information extraction BIBREF10 , text mining applications BIBREF11 , BIBREF12 , research into human language acquisition BIBREF13 , and other tasks. This benefit for English NLP has motivated the development of VerbNets for languages such as Spanish and Catalan BIBREF14 , Czech BIBREF15 , and Mandarin BIBREF16 . However, end-to-end manual resource development using Levin's methodology is extremely time consuming, even when supported by translations of English VerbNet classes to other languages BIBREF17 , BIBREF18 . Approaches which aim to learn verb classes automatically offer an attractive alternative. However, existing methods rely on carefully engineered features that are extracted using sophisticated language-specific resources BIBREF19 , BIBREF17 , BIBREF20 , ranging from accurate parsers to pre-compiled subcategorisation frames BIBREF21 , BIBREF22 , BIBREF23 . Such methods are limited to a small set of resource-rich languages. It has been argued that VerbNet-style classification has a strong cross-lingual element BIBREF24 , BIBREF2 . In support of this argument, Majewska:2017lre have shown that English VerbNet has high translatability across different, even typologically diverse languages. Based on this finding, we propose an automatic approach which exploits readily available annotations for English to facilitate efficient, large-scale development of VerbNets for a wide set of target languages. Recently, unsupervised methods for inducing distributed word vector space representations or word embeddings BIBREF25 have been successfully applied to a plethora of NLP tasks BIBREF26 , BIBREF27 , BIBREF28 . These methods offer an elegant way to learn directly from large corpora, bypassing the feature engineering step and the dependence on mature NLP pipelines (e.g., POS taggers, parsers, extraction of subcategorisation frames). In this work, we demonstrate how these models can be used to support automatic verb class induction. Moreover, we show that these models offer the means to exploit inherent cross-lingual links in VerbNet-style classification in order to guide the development of new classifications for resource-lean languages. To the best of our knowledge, this proposition has not been investigated in previous work. There has been little work on assessing the suitability of embeddings for capturing rich syntactic-semantic phenomena. One challenge is their reliance on the distributional hypothesis BIBREF29 , which coalesces fine-grained syntactic-semantic relations between words into a broad relation of semantic relatedness (e.g., coffee:cup) BIBREF30 , BIBREF31 . 
This property has an adverse effect when word embeddings are used in downstream tasks such as spoken language understanding BIBREF32 , BIBREF33 or dialogue state tracking BIBREF34 , BIBREF35 . It could have a similar effect on verb classification, which relies on the similarity in syntactic-semantic properties of verbs within a class. In summary, we explore three important questions in this paper: (Q1) Given their fundamental dependence on the distributional hypothesis, to what extent can unsupervised methods for inducing vector spaces facilitate the automatic induction of VerbNet-style verb classes across different languages? (Q2) Can one boost verb classification for lower-resource languages by exploiting general-purpose cross-lingual resources such as BabelNet BIBREF36 , BIBREF37 or bilingual dictionaries such as PanLex BIBREF38 to construct better word vector spaces for these languages? (Q3) Based on the stipulated cross-linguistic validity of VerbNet-style classification, can one exploit rich sets of readily available annotations in one language (e.g., the full English VerbNet) to automatically bootstrap the creation of VerbNets for other languages? In other words, is it possible to exploit a cross-lingual vector space to transfer VerbNet knowledge from a resource-rich to a resource-lean language? To investigate Q1, we induce standard distributional vector spaces BIBREF39 , BIBREF40 from large monolingual corpora in English and six target languages. As expected, the results obtained with this straightforward approach show positive trends, but at the same time reveal its limitations for all the languages involved. Therefore, the focus of our work shifts to Q2 and Q3. The problem of inducing VerbNet-oriented embeddings is framed as vector space specialisation using the available external resources: BabelNet or PanLex, and (English) VerbNet. Formalised as an instance of post-processing semantic specialisation approaches BIBREF41 , BIBREF34 , our procedure is steered by two sets of linguistic constraints: 1) cross-lingual (translation) links between languages extracted from BabelNet (targeting Q2); and 2) the available VerbNet annotations for a resource-rich language. The two sets of constraints jointly target Q3. The main goal of vector space specialisation is to pull examples standing in desirable relations, as described by the constraints, closer together in the transformed vector space. The specialisation process can capitalise on the knowledge of VerbNet relations in the source language (English) by using translation pairs to transfer that knowledge to each of the target languages. By constructing shared bilingual vector spaces, our method facilitates the transfer of semantic relations derived from VerbNet to the vector spaces of resource-lean target languages. This idea is illustrated by Fig. FIGREF2 . Our results indicate that cross-lingual connections yield improved verb classes across all six target languages (thus answering Q2). Moreover, a consistent and significant boost in verb classification performance is achieved by propagating the VerbNet-style information from the source language (English) to any other target language (e.g., Italian, Croatian, Polish, Finnish) for which no VerbNet-style information is available during the fine-tuning process (thus answering Q3). We report state-of-the-art verb classification performance for all six languages in our experiments. 
For instance, we improve the state-of-the-art F-1 score from prior work from 0.55 to 0.79 for French, and from 0.43 to 0.74 for Brazilian Portuguese.
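A minimal sketch of the "attract"-style specialisation described above: pairs that should be similar (cross-lingual translation pairs, or verbs sharing an English VerbNet class) are pulled together by gradient steps on their squared distance. The full specialisation procedure additionally uses margins and negative examples; the words and vectors here are toy placeholders.

```python
# Specialise a shared bilingual vector space by pulling constrained word
# pairs closer together.
import numpy as np

def attract_specialise(vectors, attract_pairs, lr=0.05, epochs=10):
    # vectors: dict word -> np.array; attract_pairs: list of (word_a, word_b)
    for _ in range(epochs):
        for a, b in attract_pairs:
            diff = vectors[a] - vectors[b]
            vectors[a] -= lr * diff        # gradient of 0.5 * ||v_a - v_b||^2
            vectors[b] += lr * diff
    return vectors

rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=50) for w in ["give_EN", "donner_FR", "offer_EN"]}
pairs = [("give_EN", "donner_FR"),      # cross-lingual translation constraint
         ("give_EN", "offer_EN")]       # same English VerbNet class
attract_specialise(vecs, pairs)
print(np.linalg.norm(vecs["give_EN"] - vecs["donner_FR"]))  # smaller than before
```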
424
Which OpenIE systems were used?
OpenIE4 and MiniIE
Open Information Extraction (OIE) is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering. While OIE methods are targeted at being domain independent, they have been evaluated primarily on newspaper, encyclopedic or general web text. In this article, we evaluate the performance of OIE on scientific texts originating from 10 different disciplines. To do so, we use two state-of-the-art OIE systems applying a crowd-sourcing approach. We find that OIE systems perform significantly worse on scientific text than encyclopedic text. We also provide an error analysis and suggest areas of work to reduce errors. Our corpus of sentences and judgments are made available.
The scientific literature is growing at a rapid rate BIBREF0 . To make sense of this flood of literature, for example, to extract cancer pathways BIBREF1 or find geological features BIBREF2 , increasingly requires the application of natural language processing. Given the diversity of information and its constant flux, the use of unsupervised or distantly supervised techniques are of interest BIBREF3 . In this paper, we investigate one such unsupervised method, namely, Open Information Extraction (OIE) BIBREF4 . OIE is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering BIBREF5 . While OIE has been applied to the scientific literature before BIBREF6 , we have not found a systematic evaluation of OIE as applied to scientific publications. The most recent evaluations of OIE extraction tools BIBREF7 , BIBREF8 have instead looked at the performance of these tools on traditional NLP information sources (i.e. encyclopedic and news-wire text). Indeed, as BIBREF8 noted, there is little work on the evaluation of OIE systems. Thus, the goal of this paper is to evaluate the performance of the state of the art in OIE systems on scientific text. Specifically, we aim to test two hypotheses: Additionally, we seek to gain insight into the value of unsupervised approaches to information extraction and also provide information useful to implementors of these systems. We note that our evaluation differs from existing OIE evaluations in that we use crowd-sourcing annotations instead of expert annotators. This allows for a larger number of annotators to be used. All of our data, annotations and analyses are made openly available. The rest of the paper is organized as follows. We begin with a discussion of existing evaluation approaches and then describe the OIE systems that we evaluated. We then proceed to describe the datasets used in the evaluation and the annotation process that was employed. This is followed by the results of the evaluation including an error analysis. Finally, we conclude.
426
what metrics are used in evaluation?
micro-averaged F1
Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. In this paper, we demonstrate a general semi-supervised approach for adding pre-trained context embeddings from bidirectional language models to NLP systems and apply it to sequence labeling tasks. We evaluate our model on two standard datasets for named entity recognition (NER) and chunking, and in both cases achieve state of the art results, surpassing previous systems that use other forms of transfer or joint learning with additional labeled data and task specific gazetteers.
Due to their simplicity and efficacy, pre-trained word embeddings have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks BIBREF2 . However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word `Central' is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks BIBREF7 , BIBREF3 . In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context. Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more than 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task. As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers.
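A hedged sketch of how pre-computed LM embeddings can be concatenated with word embeddings before a task-trained BiLSTM tagger; the LM vectors are random placeholders here (in practice they would come from a pre-trained bidirectional LM), and all sizes are illustrative.

```python
# Concatenate frozen, pre-computed LM embeddings with word embeddings before
# the task-trained BiLSTM of a sequence tagger.
import torch
import torch.nn as nn

class TaggerWithLMEmbeddings(nn.Module):
    def __init__(self, vocab_size, n_tags, emb_dim=100, lm_dim=512, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + lm_dim, hidden,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids, lm_embeddings):
        x = torch.cat([self.embed(token_ids), lm_embeddings], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)            # per-token tag logits

tagger = TaggerWithLMEmbeddings(vocab_size=10000, n_tags=9)
tokens = torch.randint(0, 10000, (2, 15))
lm_vecs = torch.randn(2, 15, 512)      # placeholder for pre-computed LM states
print(tagger(tokens, lm_vecs).shape)   # torch.Size([2, 15, 9])
```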
427
Are this models usually semi/supervised or unsupervised?
Both supervised and unsupervised, depending on the task that needs to be solved.
Text-based representations of chemicals and proteins can be thought of as unstructured languages codified by humans to describe domain-specific knowledge. Advances in natural language processing (NLP) methodologies in the processing of spoken languages accelerated the application of NLP to elucidate hidden knowledge in textual representations of these biochemical entities and then use it to construct models to predict molecular properties or to design novel molecules. This review outlines the impact made by these advances on drug discovery and aims to further the dialogue between medicinal chemists and computer scientists.
The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text based representations of biomolecules an area of intense research interest. The discipline of natural language processing (NLP) comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore BIBREF12. Advances in this field are beneficial for tasks that use language (textual data) to build insight. The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases BIBREF13. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as BioNLP). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs BIBREF14. We refer the reader to the comprehensive review on BioNLP by BIBREF15. Here, we will be focusing on the application of NLP to domain specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts. We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. 
In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins BIBREF16. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) BIBREF17 and Simplified Molecular Input Line Entry Specification (SMILES) BIBREF18. Today, the era of “big data" boosts the “learning" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem BIBREF11, ChEMBL BIBREF19, UniProt BIBREF10. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines. With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an “understanding" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences.
428
When they say "comparable performance", how much of a performance drop do these new embeddings result in?
Performance was comparable: the proposed method came close to, and sometimes exceeded, the performance of the baseline method.
We study the problem of inducing interpretability in KG embeddings. Specifically, we explore the Universal Schema (Riedel et al., 2013) and propose a method to induce interpretability. There have been many vector space models proposed for the problem, however, most of these methods don't address the interpretability (semantics) of individual dimensions. In this work, we study this problem and propose a method for inducing interpretability in KG embeddings using entity co-occurrence statistics. The proposed method significantly improves the interpretability, while maintaining comparable performance in other KG tasks.
Knowledge Graphs such as Freebase, WordNet etc. have become important resources for supporting many AI applications like web search, Q&A etc. They store a collection of facts in the form of a graph. The nodes in the graph represent real world entities such as Roger Federer, Tennis, United States etc while the edges represent relationships between them. These KGs have grown huge, but they are still not complete BIBREF1 . Hence the task of inferring new facts becomes important. Many vector space models have been proposed which can perform reasoning over KGs efficiently BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF0 , BIBREF1 etc. These methods learn representations for entities and relations as vectors in a vector space, capturing global information about the KG. The task of KG inference is then defined as operations over these vectors. Some of these methods like BIBREF0 , BIBREF1 are capable of exploiting additional text data apart from the KG, resulting in better representations. Although these methods have shown good performance in applications, they don't address the problem of understanding semantics of individual dimensions of the KG embedding. A recent work BIBREF6 addressed the problem of learning semantic features for KGs. However, they don't directly use vector space modeling. In this work, we focus on incorporating interpretability in KG embeddings. Specifically, we aim to learn interpretable embeddings for KG entities by incorporating additional entity co-occurrence statistics from text data. This work is motivated by BIBREF7 who presented automated methods for evaluating topics learned via topic modelling methods. We adapt these measures for the vector space model and propose a method to directly maximize them while learning KG embedding. To the best of our knowledge, this work presents the first regularization term which induces interpretability in KG embeddings.
429
What types of word representations are they evaluating?
GloVe; SGNS
Word analogy tasks have tended to be handcrafted, involving permutations of hundreds of words with dozens of relations, mostly morphological relations and named entities. Here, we propose modeling commonsense knowledge down to word-level analogical reasoning. We present CA-EHN, the first commonsense word analogy dataset containing 85K analogies covering 5K words and 6K commonsense relations. This was compiled by leveraging E-HowNet, an ontology that annotates 88K Chinese words with their structured sense definitions and English translations. Experiments show that CA-EHN stands out as a great indicator of how well word representations embed commonsense structures, which is crucial for future end-to-end models to generalize inference beyond training corpora. The dataset is publicly available at \url{https://github.com/jacobvsdanniel/CA-EHN}.
Commonsense reasoning is fundamental for natural language agents to generalize inference beyond their training corpora. Although the natural language inference (NLI) task BIBREF0 , BIBREF1 has proved a good pre-training objective for sentence representations BIBREF2 , commonsense coverage is limited and most models are still end-to-end, relying heavily on word representations to provide background world knowledge. Therefore, we propose modeling commonsense knowledge down to word-level analogical reasoning. In this sense, existing analogy benchmarks are lackluster. For Chinese analogy (CA), the simplified Chinese dataset CA8 BIBREF3 and the traditional Chinese dataset CA-Google BIBREF4 translated from the English BIBREF5 contain only a few dozen relations, most of which are either morphological, e.g., a shared prefix, or about named entities, e.g., capital-country. However, commonsense knowledge bases such as WordNet BIBREF6 and ConceptNet BIBREF7 have long annotated relations in our lexicon. Among them, E-HowNet BIBREF4 , extended from HowNet BIBREF8 , currently annotates 88K traditional Chinese words with their structured definitions and English translations. In this paper, we propose an algorithm for the extraction of accurate commonsense analogies from E-HowNet. We present CA-EHN, the first commonsense analogy dataset containing 85,226 analogies covering 5,563 words and 6,490 commonsense relations.
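Analogy benchmarks of this kind are typically scored with the standard 3CosAdd rule; the sketch below shows that scoring over a toy random vocabulary (so the printed answer is arbitrary), purely to illustrate the mechanics rather than the paper's exact evaluation setup.

```python
# 3CosAdd analogy answering (a : a* :: b : ?) over a word-vector dictionary.
import numpy as np

def answer_analogy(vectors, a, a_star, b, exclude=()):
    target = vectors[a_star] - vectors[a] + vectors[b]
    target /= np.linalg.norm(target)
    best, best_sim = None, -1.0
    for word, vec in vectors.items():
        if word in (a, a_star, b) or word in exclude:
            continue
        sim = vec @ target / np.linalg.norm(vec)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

rng = np.random.default_rng(1)
vocab = {w: rng.normal(size=20) for w in ["king", "queen", "man", "woman", "tree"]}
print(answer_analogy(vocab, "man", "woman", "king"))  # ideally "queen"
```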
430
What is a word confusion network?
It is a network used to encode speech lattices to maintain a rich hypothesis space.
This paper presents our novel method to encode word confusion networks, which can represent a rich hypothesis space of automatic speech recognition systems, via recurrent neural networks. We demonstrate the utility of our approach for the task of dialog state tracking in spoken dialog systems that relies on automatic speech recognition output. Encoding confusion networks outperforms encoding the best hypothesis of the automatic speech recognition in a neural system for dialog state tracking on the well-known second Dialog State Tracking Challenge dataset.
Spoken dialog systems (SDSs) allow users to naturally interact with machines through speech and are nowadays an important research direction, especially with the great success of automatic speech recognition (ASR) systems BIBREF0 , BIBREF1 . SDSs can be designed for generic purposes, e.g. smalltalk BIBREF2 , BIBREF3 ) or a specific task such as finding restaurants or booking flights BIBREF4 , BIBREF5 . Here, we focus on task-oriented dialog systems, which assist the users to reach a certain goal. Task-oriented dialog systems are often implemented in a modular architecture to break up the complex task of conducting dialogs into more manageable subtasks. BIBREF6 describe the following prototypical set-up of such a modular architecture: First, an ASR system converts the spoken user utterance into text. Then, a spoken language understanding (SLU) module extracts the user's intent and coarse-grained semantic information. Next, a dialog state tracking (DST) component maintains a distribution over the state of the dialog, updating it in every turn. Given this information, the dialog policy manager decides on the next action of the system. Finally, a natural language generation (NLG) module forms the system reply that is converted into an audio signal via a text-to-speech synthesizer. Error propagation poses a major problem in modular architectures as later components depend on the output of the previous steps. We show in this paper that DST suffers from ASR errors, which was also noted by BIBREF7 . One solution is to avoid modularity and instead perform joint reasoning over several subtasks, e.g. many DST systems directly operate on ASR output and do not rely on a separate SLU module BIBREF8 , BIBREF7 , BIBREF9 . End-to-end systems that can be directly trained on dialogs without intermediate annotations have been proposed for open-domain dialog systems BIBREF3 . This is more difficult to realize for task-oriented systems as they often require domain knowledge and external databases. First steps into this direction were taken by BIBREF5 and BIBREF10 , yet these approaches do not integrate ASR into the joint reasoning process. We take a first step towards integrating ASR in an end-to-end SDS by passing on a richer hypothesis space to subsequent components. Specifically, we investigate how the richer ASR hypothesis space can improve DST. We focus on these two components because they are at the beginning of the processing pipeline and provide vital information for the subsequent SDS components. Typically, ASR systems output the best hypothesis or an n-best list, which the majority of DST approaches so far uses BIBREF11 , BIBREF8 , BIBREF7 , BIBREF12 . However, n-best lists can only represent a very limited amount of hypotheses. Internally, the ASR system maintains a rich hypothesis space in the form of speech lattices or confusion networks (cnets). We adapt recently proposed algorithms to encode lattices with recurrent neural networks (RNNs) BIBREF14 , BIBREF15 to encode cnets via an RNN based on Gated Recurrent Units (GRUs) to perform DST in a neural encoder-classifier system and show that this outperforms encoding only the best ASR hypothesis. We are aware of two DST approaches that incorporate posterior word-probabilities from cnets in addition to features derived from the n-best lists BIBREF11 , BIBREF16 , but to the best of our knowledge, we develop the first DST system directly operating on cnets.
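A hedged sketch of one way to encode a cnet with a GRU, in the spirit of the approach described above: each time slot's competing word hypotheses are combined as a sum of their embeddings weighted by ASR posteriors before being fed to the recurrent encoder. This illustrates the idea rather than the exact architecture evaluated in the paper.

```python
# Encode a word confusion network (cnet): posterior-weighted sum of the
# alternative word embeddings at each slot, followed by a GRU.
import torch
import torch.nn as nn

class CnetEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)

    def forward(self, word_ids, posteriors):
        # word_ids:   (batch, time, n_hypotheses) alternative words per slot
        # posteriors: (batch, time, n_hypotheses) their ASR posteriors
        emb = self.embed(word_ids)                          # (B, T, K, E)
        slot_vec = (posteriors.unsqueeze(-1) * emb).sum(2)  # (B, T, E)
        _, h = self.gru(slot_vec)
        return h[-1]                                        # utterance encoding

enc = CnetEncoder(vocab_size=500)
ids = torch.randint(0, 500, (2, 6, 3))           # 3 hypotheses per time step
post = torch.softmax(torch.randn(2, 6, 3), dim=-1)
print(enc(ids, post).shape)                      # torch.Size([2, 128])
```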
433
What baseline algorithms were presented?
a sentence-level prediction algorithm, a segment retrieval algorithm and a pipeline segment retrieval algorithm
Despite the number of currently available datasets on video question answering, there still remains a need for a dataset involving multi-step and non-factoid answers. Moreover, relying on video transcripts remains an under-explored topic. To adequately address this, we propose a new question answering task on instructional videos, because of their verbose and narrative nature. While previous studies on video question answering have focused on generating a short text as an answer, given a question and video clip, our task aims to identify a span of a video segment as an answer which contains instructional details with various granularities. This work focuses on screencast tutorial videos pertaining to an image editing program. We introduce a dataset, TutorialVQA, consisting of about 6,000 manually collected triples of (video, question, answer span). We also provide experimental results with several baseline algorithms using the video transcripts. The results indicate that the task is challenging and call for the investigation of new algorithms.
Video is the fastest growing medium to create and deliver information today. Consequently, videos have been increasingly used as main data sources in many question answering problems BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF2, BIBREF5. These previous studies have mostly focused on factoid questions, each of which can be answered in a few words or phrases generated by understanding multimodal contents in a short video clip. However, this problem definition of video question answering causes some practical limitations for the following reasons. First, factoid questions are just a small part of what people actually want to ask on video contents. Especially if a short video is given to users, most fragmentary facts within the scope of previous tasks can be easily perceived by the users themselves even before asking questions. Thus, video question answering is expected to provide answers to more complicated non-factoid questions beyond the simple facts. For example, those questions could be ones asking how to carry out a procedure, as shown in Fig. FIGREF5, and the answers should contain all necessary steps to complete the task. Accordingly, the answer format needs to also be improved towards more flexible ways than multiple choice BIBREF1, BIBREF2 or fill-in-the-blank questions BIBREF3, BIBREF4. Although open-ended video question answering BIBREF0, BIBREF2, BIBREF5 has been explored, it still aims to generate just a short word or phrase-level answer, which is not enough to cover various granularities of non-factoid question answering. The other issue is that most videos with a sufficient amount of information, which are likely to be asked about, have much longer lengths than the video clips in the existing datasets. Therefore, the most relevant part of a whole video needs to be determined prior to each answer generation in practice. However, this localization task has been out of scope for previous studies. In this work, we propose a new question answering problem for non-factoid questions on instructional videos. According to the nature of the media created for educational purposes, we assume that many answers already exist within the given video contents. Under this assumption, we formulate the problem as a localization task to specify the span of a video segment as the direct answer to a given video and a question, as illustrated in Figure FIGREF1. The remainder of this paper is structured as follows. Section SECREF3 introduces the TutorialVQA dataset as a case study of our proposed problem. The dataset includes about 6,000 triples, comprising videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for photo-editing software. Section SECREF4 presents the baseline models and their experiment details on the sentence-level prediction and video segment retrieval tasks on our dataset. Then, we discuss the experimental results in Section SECREF5 and conclude the paper in Section SECREF6.
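As a concrete illustration of the transcript-based retrieval idea, the snippet below ranks candidate transcript segments by TF-IDF cosine similarity with the question and returns the best-scoring span. It is only a hedged sketch of a generic retrieval baseline with toy data; the paper's actual sentence-level prediction and segment retrieval baselines may be set up differently.

```python
# Hedged sketch of a simple transcript-based segment retrieval baseline:
# rank candidate transcript segments by TF-IDF cosine similarity with the
# question and return the best-scoring span. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_answer_segment(question, segments):
    """segments: list of (start_sec, end_sec, transcript_text) candidate spans."""
    texts = [seg[2] for seg in segments]
    vectorizer = TfidfVectorizer(stop_words="english")
    seg_matrix = vectorizer.fit_transform(texts)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, seg_matrix).ravel()
    best = scores.argmax()
    return segments[best], float(scores[best])

# Toy candidate segments from a tutorial transcript.
segments = [
    (0, 35, "open the layers panel and select the background layer"),
    (35, 90, "use the crop tool to trim the image to the desired size"),
    (90, 140, "apply a gaussian blur filter to soften the selection"),
]
span, score = retrieve_answer_segment("how do I crop my image", segments)
print(span, score)
```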
439
What is the performance proposed model achieved on MathQA?
Operation accuracy: 71.89; execution accuracy: 55.95
Generating formal language represented by relational tuples, such as Lisp programs or mathematical expressions, from a natural-language input is an extremely challenging task because it requires explicitly capturing discrete symbolic structural information from the input to generate the output. Most state-of-the-art neural sequence models do not explicitly capture such structure information, and thus do not perform well on these tasks. In this paper, we propose a new encoder-decoder model based on Tensor Product Representations (TPRs) for Natural- to Formal-language generation, called TP-N2F. The encoder of TP-N2F employs TPR 'binding' to encode natural-language symbolic structure in vector space and the decoder uses TPR 'unbinding' to generate a sequence of relational tuples, each consisting of a relation (or operation) and a number of arguments, in symbolic space. TP-N2F considerably outperforms LSTM-based Seq2Seq models, establishing new state-of-the-art results on two benchmarks: the MathQA dataset for math problem solving, and the AlgoLisp dataset for program synthesis. Ablation studies show that improvements are mainly attributed to the use of TPRs in both the encoder and decoder to explicitly capture relational structure information for symbolic reasoning.
When people perform explicit reasoning, they can typically describe the way to the conclusion step by step via relational descriptions. There is ample evidence that relational representations are important for human cognition (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4). Although a rapidly growing number of researchers use deep learning to solve complex symbolic reasoning and language tasks (a recent review is BIBREF5), most existing deep learning models, including sequence models such as LSTMs, do not explicitly capture human-like relational structure information. In this paper we propose a novel neural architecture, TP-N2F, to solve natural- to formal-language generation tasks (N2F). In the tasks we study, math or programming problems are stated in natural-language, and answers are given as programs, sequences of relational representations, to solve the problem. TP-N2F encodes the natural-language symbolic structure of the problem in an input vector space, maps this to a vector in an intermediate space, and uses that vector to produce a sequence of output vectors that are decoded as relational structures. Both input and output structures are modelled as Tensor Product Representations (TPRs) BIBREF6. During encoding, NL-input symbolic structures are encoded as vector space embeddings using TPR `binding' (following BIBREF7); during decoding, symbolic constituents are extracted from structure-embedding output vectors using TPR `unbinding' (following BIBREF8, BIBREF9). Our contributions in this work are as follows. (i) We propose a role-level analysis of N2F tasks. (ii) We present a new TP-N2F model which gives a neural-network-level implementation of a model solving the N2F task under the role-level description proposed in (i). To our knowledge, this is the first model to be proposed which combines both the binding and unbinding operations of TPRs to achieve generation tasks through deep learning. (iii) State-of-the-art performance on two recently developed N2F tasks shows that the TP-N2F model has significant structure learning ability on tasks requiring symbolic reasoning through program synthesis.
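The TPR 'binding' and 'unbinding' operations mentioned above can be illustrated with a small numerical sketch: each filler (symbol) vector is bound to a role vector by an outer product, the structure is the sum of these bindings, and a filler is recovered by contracting the structure with the role's unbinding vector (equal to the role itself when roles are orthonormal). This is a hedged illustration of the mechanism only, not the TP-N2F model's actual implementation or dimensions.

```python
# Hedged sketch of Tensor Product Representation (TPR) binding/unbinding with
# orthonormal role vectors, for which the unbinding vector equals the role
# itself. Purely illustrative of the mechanism, not the TP-N2F model code.
import numpy as np

rng = np.random.default_rng(0)
d_filler, d_role = 5, 3

# Orthonormal roles (e.g., relation, arg1, arg2) and random fillers.
roles = np.linalg.qr(rng.standard_normal((d_role, d_role)))[0]
fillers = rng.standard_normal((3, d_filler))

# Binding: sum of outer products filler_i (x) role_i.
T = sum(np.outer(fillers[i], roles[i]) for i in range(3))   # (d_filler, d_role)

# Unbinding: contract the structure tensor with a role's unbinding vector.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers[1]))   # True, because roles are orthonormal
```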
440
What previous methods is the proposed method compared against?
BLSTM+Attention+BLSTM, Hierarchical BLSTM-CRF, CRF-ASN, Hierarchical CNN (window 4), mLSTM-RNN, DRLM-Conditional, LSTM-Softmax, RCNN, CNN, CRF, LSTM, BERT
Dialogue act recognition is a fundamental task for an intelligent dialogue system. Previous work models the whole dialog to predict dialog acts, which may introduce noise from unrelated sentences. In this work, we design a hierarchical model based on self-attention to capture intra-sentence and inter-sentence information. We revise the attention distribution to focus on local and contextual semantic information by incorporating the relative position information between utterances. Based on the finding that the length of a dialog affects performance, we introduce a new dialog segmentation mechanism to analyze the effect of dialog length and context padding length under online and offline settings. The experiments show that our method achieves promising performance on two datasets: Switchboard Dialogue Act and DailyDialog, with accuracies of 80.34% and 85.81% respectively. Visualization of the attention weights shows that our method can learn the context dependency between utterances explicitly.
Dialogue act (DA) characterizes the type of a speaker's intention in the course of producing an utterance and is approximately equivalent to the illocutionary act of BIBREF0 or the speech act of BIBREF1. The recognition of DA is essential for modeling and automatically detecting discourse structure, especially in developing a human-machine dialogue system. It is natural to predict the Answer acts following an utterance of type Question, and then match the Question utterance to each QA-pair in the knowledge base. The predicted DA can also guide the response generation process BIBREF2. For instance, the system generates a Greeting-type response to a former Greeting-type utterance. Moreover, DA is beneficial to other online dialogue strategies, such as conflict avoidance BIBREF3. In the offline system, DA also plays a significant role in summarizing and analyzing the collected utterances. For instance, recognizing the DAs of a whole online service record between customer and agent is beneficial for mining QA-pairs, which are then selected and clustered to expand the knowledge base. DA recognition is challenging because the same utterance may have a different meaning in a different context. Table TABREF1 shows an example of some utterances together with their DAs from the Switchboard dataset. In this example, the utterance “Okay.” corresponds to two different DA labels within different semantic contexts. Many approaches have been proposed for DA recognition. Previous work relies heavily on handcrafted features which are domain-specific and difficult to scale up BIBREF4, BIBREF5, BIBREF6. Recently, with its great ability to do feature extraction, deep learning has yielded state-of-the-art results for many NLP tasks, and also makes impressive advances in DA recognition. BIBREF7, BIBREF8 built hierarchical CNN/RNN models to encode sentences and incorporate context information for DA recognition. BIBREF9 achieved promising performance by adding a CRF to enhance the dependency between labels. BIBREF10 applied the self-attention mechanism coupled with a hierarchical recurrent neural network. However, previous approaches cannot make full use of the relative position relationship between utterances. It is natural that utterances in the local context always have strong dependencies in our daily dialog. In this paper, we propose a hierarchical model based on self-attention BIBREF11 and revise the attention distribution to focus on local and contextual semantic information via a learnable Gaussian bias which represents the relative position information between utterances, inspired by BIBREF12. Further, to analyze the effect of dialog length quantitatively, we introduce a new dialog segmentation mechanism for the DA task and evaluate the performance for different dialogue lengths and context padding lengths under online and offline settings. Experiments and visualization show that our method can learn the local contextual dependency between utterances explicitly and achieve promising performance on two well-known datasets. The contributions of this paper are: We design a hierarchical model based on self-attention and revise the attention distribution to focus on local and contextual semantic information using the relative position information between utterances. We introduce a new dialog segmentation mechanism for the DA task and analyze the effect of dialog length and context padding length. In addition to traditional offline prediction, we also analyze the accuracy and time complexity under the online setting.
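The core idea of biasing self-attention toward nearby utterances with a Gaussian over relative positions can be sketched as follows. Here the Gaussian width sigma is fixed for clarity, whereas the paper describes a learnable bias; the module structure and dimensions are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch: self-attention over utterance representations with a Gaussian
# bias on relative positions, so that nearby utterances receive more weight.
import torch
import torch.nn.functional as F

def gaussian_biased_attention(H, sigma=2.0):
    """H: (n_utterances, d) utterance encodings from a lower-level encoder."""
    n, d = H.shape
    scores = H @ H.t() / d ** 0.5                 # (n, n) dot-product logits
    pos = torch.arange(n, dtype=torch.float32)
    rel = pos.unsqueeze(0) - pos.unsqueeze(1)     # relative positions j - i
    bias = -(rel ** 2) / (2 * sigma ** 2)         # log of a Gaussian kernel
    attn = F.softmax(scores + bias, dim=-1)
    return attn @ H                               # context-aware utterance states

H = torch.randn(6, 16)            # six utterances in a dialogue
print(gaussian_biased_attention(H).shape)   # torch.Size([6, 16])
```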
442
What is the baseline model used?
The baseline models used are: DrQA modified to support no-answer questions; DrQA+CoQA, which is pre-tuned on the CoQA dataset; vanilla BERT; BERT+review, tuned on domain reviews; and BERT+CoQA, tuned on the supervised CoQA data.
Seeking information about products and services is an important activity of online consumers before making a purchase decision. Inspired by recent research on conversational reading comprehension (CRC) on formal documents, this paper studies the task of leveraging knowledge from a large number of reviews to answer multi-turn questions from consumers or users. Questions spanning multiple turns in a dialogue enable users to ask more specific questions that are hard to ask within a single question as in traditional machine reading comprehension (MRC). In this paper, we first build a dataset and then propose a novel task-adaptation approach to encoding the formulation of the CRC task into a pre-trained language model. This task-adaptation approach is unsupervised and can greatly enhance the performance of the end CRC task that has only limited supervision. Experimental results show that the proposed approach is highly effective and has performance competitive with the supervised approach. We plan to release the datasets and the code in May 2019.
Seeking information to assess whether some products or services suit one's needs is a vital activity for consumer decision making. In online businesses, one major hindrance is that customers have limited access to answers to their specific questions or concerns about products and user experiences. Given the ever-changing environment of products and services, it is very hard, if not impossible, to pre-compile an up-to-date knowledge base to answer user questions as in KB-QA BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . As a compromise, community question-answering (CQA) BIBREF4 is leveraged to enable existing customers or sellers to answer customer questions. However, one obvious drawback of this approach is that many questions are not answered, and even if they are answered, the answers and the follow-up questions are delayed, which is not suitable for interactive QA. Although existing studies have used information retrieval (IR) techniques BIBREF4 , BIBREF5 to identify a whole review as an answer to a question, it is time-consuming to read a whole review and the approach has difficulty answering questions in multiple turns. Inspired by recent research in Conversational Reading Comprehension (CRC) BIBREF6 , BIBREF7 , we explore the possibility of turning reviews into a source of valuable knowledge of experiences and providing a natural way of answering customers' multiple-turn questions in a dialogue setting. The conversational setting of machine reading comprehension (MRC) enables more specific questions and allows customers to either omit or co-reference information in context. As an example in a laptop domain shown in Table 1 , a customer may have 5 turns of questions based on the context. The customer first has an opinion question targeting an aspect “retina display” of a to-be-purchased laptop. Then the customer carries (and omits) the question type opinion from the first question to the second and continues by asking about the second aspect “boot-up speed”. For the third question, the customer carries the aspect of the second question but changes the question type to opinion explanation. Later, the customer can co-reference the aspect “SSD” from the previous answer and ask for the capacity (a sub-aspect) of “SSD”. Unfortunately, there is no answer in this review for the fourth question so the review may say “I don't know”. But the customer can still ask about other aspects as in the fifth question. We formally define this problem as follows and call it review conversational reading comprehension (RCRC). Problem Definition: Given a review that consists of a sequence of $n$ tokens $d=(d_1, \dots , d_n)$ , a history of past $k-1$ questions and answers as the context $C=(q_1, a_1, q_2, a_2, \dots , q_{k-1}, a_{k-1})$ and the current question $q_k$ , find a sequence of tokens (a textual span) $a=(d_s, \dots , d_e)$ in $d$ that answers $q_k$ based on $C$ , where $1 \le s \le n$ , $1 \le e \le n$ , and $s \le e$ , or return NO ANSWER if the review does not contain any answer for $q_k$ . RCRC is a novel QA task that requires the understanding of both the current question $q_k$ and the dialogue context $C$ . Compared to the traditional single-turn MRC, the key challenge is how to understand the context $C$ and the current question $q_k$ given that it may involve co-reference or context carryover. To the best of our knowledge, there are no existing review datasets for RCRC.
We first build a dataset called $(\text{RC})_2$ based on laptop and restaurant reviews from SemEval 2016 Task 5. We choose this dataset to better align with existing research on review-based tasks in sentiment analysis. Each review is annotated with a few dialogues focusing on some topics. Note that although one dialogue is annotated on a single review, a trained RCRC model can potentially be deployed among an open set of reviews BIBREF8 where the context may potentially contain answers from different reviews. Given the wide spectrum of domains in online business (e.g., thousands of categories on Amazon.com) and the prohibitive cost of annotation, $(\text{RC})_2$ is designed to have limited supervision as in other tasks of sentiment analysis. We adopt BERT (Bidirectional Encoder Representation from Transformers BIBREF9 ) as our base model since its variants achieve dominant performance on MRC BIBREF10 , BIBREF11 and CRC BIBREF6 tasks. However, BERT is designed to learn features for a wide spectrum of NLP tasks with a large number of training examples. The task-awareness of BERT can be hindered by the weak supervision of the $(\text{RC})_2$ dataset. To resolve this challenge, we introduce a novel pre-tuning stage between pre-training and end-task fine-tuning for BERT. The pre-tuning stage is formulated in a similar fashion to the RCRC task but requires no annotated RCRC data, only domain QA pairs (from CQA) and reviews, which are readily available online BIBREF4 . We bring certain characteristics of the RCRC task (inputs/outputs) to pre-tuning to encourage BERT's weights to be prepared for understanding the current question and locating the answer if one exists. The proposed pre-tuning step is general and can potentially be used in MRC or CRC tasks in other domains. The main contributions of this paper are as follows. (1) It proposes a practical new task on reviews that allows multi-turn conversational QA. (2) To address this problem, an annotated dataset is first created. (3) It then proposes a pre-tuning stage to learn task-aware representations. Experimental results show that the proposed approach achieves competitive performance even compared with the supervised approach on large-scale training data.
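One way to picture the input/output formulation of the RCRC task in a BERT-style extractive QA model is sketched below: past QA turns plus the current question are flattened into one segment, the review forms the other, and start/end logits give the answer span. This uses the Hugging Face transformers API purely for illustration; the exact input formatting, the special handling of NO ANSWER, and the pre-tuning objective in the paper may differ.

```python
# Hedged sketch of packing an RCRC turn into a BERT-style extractive QA model.
# The qa head below is untrained, so the predicted span is arbitrary here.
import torch
from transformers import BertTokenizerFast, BertForQuestionAnswering

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

history = [("does it have a retina display", "yes, the retina display is great"),
           ("how is the boot-up speed", "very fast thanks to the SSD")]
current_q = "how large is the SSD"
review = ("I love this laptop. The retina display is great "
          "and the 512 GB SSD boots in seconds.")

# Flatten the dialogue context and the current question into one text segment.
context_text = " ".join(q + " " + a for q, a in history) + " " + current_q
inputs = tokenizer(context_text, review, return_tensors="pt", truncation=True)

with torch.no_grad():
    out = model(**inputs)
start = int(out.start_logits.argmax())
end = int(out.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start:end + 1]))
```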
444
What domains are present in the data?
Alarm, Banks, Buses, Calendar, Events, Flights, Homes, Hotels, Media, Messaging, Movies, Music, Payment, Rental Cars, Restaurants, Ride Sharing, Services, Train, Travel, Weather
This paper gives an overview of the Schema-Guided Dialogue State Tracking task of the 8th Dialogue System Technology Challenge. The goal of this task is to develop dialogue state tracking models suitable for large-scale virtual assistants, with a focus on data-efficient joint modeling across domains and zero-shot generalization to new APIs. This task provided a new dataset consisting of over 16000 dialogues in the training set spanning 16 domains to highlight these challenges, and a baseline model capable of zero-shot generalization to new APIs. Twenty-five teams participated, developing a range of neural network models, exceeding the performance of the baseline model by a very high margin. The submissions incorporated a variety of pre-trained encoders and data augmentation techniques. This paper describes the task definition, dataset and evaluation methodology. We also summarize the approach and results of the submitted systems to highlight the overall trends in the state-of-the-art.
Virtual assistants help users accomplish tasks, including but not limited to finding flights and booking restaurants, by providing a natural language interface to services and APIs on the web. Large-scale assistants like Google Assistant, Amazon Alexa, Apple Siri, Microsoft Cortana etc. need to support a large and constantly increasing number of services, over a wide variety of domains. Consequently, recent work has focused on scalable dialogue systems that can handle tasks across multiple application domains. Data-driven deep learning based approaches for multi-domain modeling have shown promise, both for end-to-end and modular systems involving dialogue state tracking and policy learning. This line of work has been facilitated by the release of multi-domain dialogue corpora such as MultiWOZ BIBREF0, Taskmaster-1 BIBREF1, M2M BIBREF2 and FRAMES BIBREF3. However, building large-scale assistants, as opposed to dialogue systems managing a few APIs, poses a new set of challenges. Apart from handling a very large variety of domains, such systems need to support heterogeneous services or APIs with possibly overlapping functionality. They should also offer an efficient way of supporting new APIs or services, while requiring little or no additional training data. Furthermore, to reduce maintenance workload and accommodate future growth, such assistants need to be robust to changes in an API's interface or the addition of new slot values. Such changes should not require collection of additional training data or retraining the model. The Schema-Guided Dialogue State Tracking task at the Eighth Dialogue System Technology Challenge explores the aforementioned challenges in the context of dialogue state tracking. In a task-oriented dialogue, the dialogue state is a summary of the entire conversation till the current turn. The dialogue state is used to invoke APIs with appropriate parameters as specified by the user over the dialogue history. It is also used by the assistant to generate the next actions to continue the dialogue. DST, therefore, is a core component of virtual assistants. In this task, participants are required to develop innovative approaches to multi-domain dialogue state tracking, with a focus on data-efficient joint modeling across APIs and zero-shot generalization to new APIs. The task is based on the Schema-Guided Dialogue (SGD) dataset, which, to the best of our knowledge, is the largest publicly available corpus of annotated task-oriented dialogues. With over 16000 dialogues in the training set spanning 26 APIs over 16 domains, it exceeds the existing dialogue corpora in scale. SGD is the first dataset to allow multiple APIs with overlapping functionality within each domain. To adequately test generalization in zero-shot settings, the evaluation sets contain unseen services and domains. The dataset is designed to serve as an effective testbed for intent prediction, slot filling, state tracking and language generation, among other tasks in large-scale virtual assistants.
445
In which languages did the approach outperform the reported results?
Arabic, German, Portuguese, Russian, Swedish
Recently, sentiment analysis has received a lot of attention due to the interest in mining opinions of social media users. Sentiment analysis consists in determining the polarity of a given text, i.e., its degree of positiveness or negativeness. Traditionally, sentiment analysis algorithms have been tailored to a specific language, given the complexity introduced by the many lexical variations and errors in the content people generate. In this contribution, our aim is to provide a simple-to-implement and easy-to-use multilingual framework that can serve as a baseline for sentiment analysis contests, and as a starting point to build new sentiment analysis systems. We compare our approach in eight different languages, three of which have important international contests, namely, SemEval (English), TASS (Spanish), and SENTIPOLC (Italian). Within the competitions our approach reaches medium to high positions in the rankings, whereas in the remaining languages our approach outperforms the reported results.
Sentiment analysis is a crucial task in the opinion mining field where the goal is to extract opinions, emotions, or attitudes towards different entities (persons, objects, news, among others). Clearly, this task is of interest for all languages; however, there exists a significant gap between English state-of-the-art methods and those for other languages. It is thus not surprising that some researchers decided to test the straightforward approach which consists in, first, translating the messages to English, and, then, using a high performing English sentiment classifier (for instance, see BIBREF0 and BIBREF1 ) instead of creating a sentiment classifier optimized for a given language. However, the advantages of a properly tuned sentiment classifier have been studied for different languages (for instance, see BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 ). This manuscript focuses on the particular case of multilingual sentiment analysis of short informal texts such as Twitter messages. Our aim is to provide an easy-to-use tool to create sentiment classifiers based on supervised learning (i.e., a labeled dataset) where the classifier should be competitive with sentiment classifiers carefully tuned for given languages. Furthermore, our second contribution is to create a well-performing baseline against which to compare new sentiment classifiers in a broad range of languages, or to bootstrap new sentiment analysis systems. Our approach is based on selecting the best text-transforming techniques that optimize some performance measure, where the chosen techniques are robust to typical writing errors. In this context, we propose a robust multilingual sentiment analysis method, tested in eight different languages: Spanish, English, Italian, Arabic, German, Portuguese, Russian and Swedish. We compare our approach's ranking in three international contests: TASS'15, SemEval'15-16 and SENTIPOLC'14, for Spanish, English and Italian respectively; the remaining languages are compared directly with the results reported in the literature. The experimental results place our approach in good positions for all considered competitions, and it obtains excellent results in the other five languages tested. Finally, even though our method is almost language independent, it can be extended to take advantage of language dependencies; we also provide experimental evidence of the advantages of using these language-dependent techniques. The rest of the manuscript is organized as follows. Section SECREF2 describes our proposed sentiment analysis method. Section SECREF3 describes the datasets and contests used to test our approach, whereas the experimental results and the discussion are presented in Section SECREF4 . Finally, Section SECREF5 concludes.
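In the spirit of the largely language-independent pipeline described above, a minimal baseline can be sketched as light text normalization plus character n-gram TF-IDF features and a linear SVM. The actual system selects its text-transformation pipeline by optimizing a performance measure; the snippet below is only a hedged illustration with made-up training data.

```python
# Hedged sketch of a simple, largely language-independent sentiment baseline:
# light normalization + character n-gram TF-IDF + linear SVM. Illustrative only.
import re
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def normalize(text):
    text = text.lower()
    text = re.sub(r"https?://\S+", "_url_", text)   # mask URLs
    text = re.sub(r"@\w+", "_usr_", text)           # mask user mentions
    return text

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(preprocessor=normalize, analyzer="char_wb",
                              ngram_range=(2, 4))),
    ("clf", LinearSVC()),
])

# Tiny toy corpus; a real setting would use the labeled tweets of each language.
train_x = ["i loved this movie", "horrible service, never again",
           "qué gran día", "qué día tan horrible"]
train_y = ["positive", "negative", "positive", "negative"]
pipeline.fit(train_x, train_y)
print(pipeline.predict(["what a great day", "this is horrible"]))
```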
447
Which is the baseline model?
The three baseline models are the i-vector model, a standard RNN LID system and a multi-task RNN LID system.
Deep neural models, particularly the LSTM-RNN model, have shown great potential for language identification (LID). However, the use of phonetic information has been largely overlooked by most existing neural LID methods, although this information has been used very successfully in conventional phonetic LID systems. We present a phonetic temporal neural model for LID, which is an LSTM-RNN LID system that accepts phonetic features produced by a phone-discriminative DNN as the input, rather than raw acoustic features. This new model is similar to traditional phonetic LID methods, but the phonetic knowledge here is much richer: it is at the frame level and involves compacted information of all phones. Our experiments conducted on the Babel database and the AP16-OLR database demonstrate that the temporal phonetic neural approach is very effective, and significantly outperforms existing acoustic neural models. It also outperforms the conventional i-vector approach on short utterances and in noisy conditions.
Language identification (LID) lends itself to a wide range of applications, such as mixed-lingual (code-switching) speech recognition. Humans use many cues to discriminate languages, and better accuracy can be achieved with the use of more cues. Various LID approaches have been developed, based on different types of cues.
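The phonetic temporal modeling described in the abstract above can be pictured with a small sketch: frame-level phonetic features (e.g., posteriors or bottleneck outputs of a phone-discriminative DNN) are consumed by an LSTM whose final state feeds a language classifier. The feature dimensionality, layer sizes, and class names are illustrative assumptions, not the system's actual configuration.

```python
# Hedged sketch of the phonetic temporal idea: frame-level phonetic features
# from a phone-discriminative DNN are fed to an LSTM whose last state is used
# for utterance-level language classification.
import torch
import torch.nn as nn

class PhoneticLSTMLID(nn.Module):
    def __init__(self, n_phonetic_feats=40, hidden_dim=128, n_languages=10):
        super().__init__()
        self.lstm = nn.LSTM(n_phonetic_feats, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, n_languages)

    def forward(self, phonetic_frames):
        # phonetic_frames: (batch, n_frames, n_phonetic_feats) from the phone DNN
        _, (h_n, _) = self.lstm(phonetic_frames)
        return self.classifier(h_n[-1])       # (batch, n_languages) logits

model = PhoneticLSTMLID()
frames = torch.randn(2, 300, 40)              # two utterances, 300 frames each
print(model(frames).shape)                    # torch.Size([2, 10])
```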
448
How do they get the formal languages?
These are well-known formal languages, some of which were used in the literature to evaluate the learning capabilities of RNNs.
Recurrent Neural Networks (RNNs) are theoretically Turing-complete and established themselves as a dominant model for language processing. Yet, there still remains an uncertainty regarding their language learning capabilities. In this paper, we empirically evaluate the inductive learning capabilities of Long Short-Term Memory networks, a popular extension of simple RNNs, to learn simple formal languages, in particular $a^nb^n$, $a^nb^nc^n$, and $a^nb^nc^nd^n$. We investigate the influence of various aspects of learning, such as training data regimes and model capacity, on the generalization to unobserved samples. We find striking differences in model performances under different training settings and highlight the need for careful analysis and assessment when making claims about the learning capabilities of neural network models.
Recurrent Neural Networks (RNNs) are powerful machine learning models that can capture and exploit sequential data. They have become standard in important natural language processing tasks such as machine translation BIBREF0 , BIBREF1 and speech recognition BIBREF2 . Despite the ubiquity of various RNN architectures in natural language processing, there still lies an unanswered fundamental question: What classes of languages can, empirically or theoretically, be learned by neural networks? This question has drawn much attention in the study of formal languages, with previous results on both the theoretical BIBREF3 , BIBREF4 and empirical capabilities of RNNs, showing that different RNN architectures can learn certain regular BIBREF5 , BIBREF6 , context-free BIBREF7 , BIBREF8 , and context-sensitive languages BIBREF9 . In a common experimental setup for investigating whether a neural network can learn a formal language, one formulates a supervised learning problem where the network is presented one character at a time and predicts the next possible character(s). The performance of the network can then be evaluated based on its ability to recognize sequences shown in the training set and – more importantly – to generalize to unseen sequences. There are, however, various methods of evaluation in a language learning task. In order to define the generalization of a network, one may consider the length of the shortest sequence in a language whose output was incorrectly produced by the network, or the size of the largest accepted test set, or the accuracy on a fixed test set BIBREF10 , BIBREF11 , BIBREF9 , BIBREF12 . These formulations follow narrow and bounded evaluation schemes though: They often define a length threshold in the test set and report the performance of the model on this fixed set. We acknowledge three unsettling issues with these formulations. First, the sequences in the training set are usually assumed to be uniformly or geometrically distributed, with little regard to the nature and complexity of the language. This assumption may undermine any conclusions drawn from empirical investigations, especially given that natural language is not uniformly distributed, an aspect that is known to affect learning in modern RNN architectures BIBREF13 . Second, in a test set where the sequences are enumerated by their lengths, if a network makes an error on a sequence of, say, length 7, but correctly recognizes longer sequences of length up to 1000, would we consider the model's generalization as good or bad? In a setting where we monitor only the shortest sequence that was incorrectly predicted by the network, this scheme clearly misses the potential success of the model after witnessing a failure, thereby misportraying the capabilities of the network. Third, the test sets are often bounded in these formulations, making it challenging to compare and contrast the performance of models if they attain full accuracy on their fixed test sets. In the present work, we address these limitations by providing a more nuanced evaluation of the learning capabilities of RNNs. In particular, we investigate the effects of three different aspects of a network's generalization: data distribution, length-window, and network capacity. 
We define an informative protocol for assessing the performance of RNNs: Instead of training a single network until it has learned its training set and then evaluating it on its test set, as BIBREF9 do in their study, we monitor and test the network's performance at each epoch during the entire course of training. This approach allows us to study the stability of the solutions reached by the network. Furthermore, we do not restrict ourselves to a test set of sequences of fixed lengths during testing. Rather, we exhaustively enumerate all the sequences in a language by their lengths and then go through the sequences in the test set one by one until our network errs $k$ times, thereby providing a more fine-grained evaluation criterion of its generalization capabilities. Our experimental evaluation is focused on the Long Short-Term Memory (LSTM) network BIBREF14 , a particularly popular RNN variant. We consider three formal languages, namely $a^n b^n$ , $a^n b^n c^n$ , and $a^n b^n c^n d^n$ , and investigate how LSTM networks learn these languages under different training regimes. Our investigation leads to the following insights: (1) The data distribution has a significant effect on generalization capability, with discrete uniform and U-shaped distributions often leading to the best generalization amongst all the four distributions in consideration. (2) Widening the training length-window, naturally, enables LSTM models to generalize better to longer sequences, and interestingly, the networks seem to learn to generalize to shorter sequences when trained on long sequences. (3) Higher model capacity – having more hidden units – leads to better stability, but not necessarily better generalization levels. In other words, over-parameterized models are more stable than models with theoretically sufficient but far fewer parameters. We explain this phenomenon by conjecturing that a collaborative counting mechanism arises in over-parameterized networks.
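To make the training and evaluation protocol above concrete, here is a hedged sketch for $a^n b^n$: training strings are sampled with $n$ drawn from some distribution, and at test time strings are enumerated by length until the predictor errs $k$ times. The `legal_next` oracle stands in for a trained LSTM's prediction of the set of legal next symbols ('$' marking end of sequence); the distribution choices and exact error-counting details in the paper may differ.

```python
# Hedged sketch of data generation and the "test until k errors" protocol
# for the language a^n b^n. `legal_next` doubles as an oracle predictor.
import random

def sample_anbn(n_samples, low=1, high=50):
    """Training strings a^n b^n with n drawn uniformly (other distributions possible)."""
    samples = []
    for _ in range(n_samples):
        n = random.randint(low, high)
        samples.append("a" * n + "b" * n)
    return samples

def legal_next(prefix):
    """Legal next symbols after `prefix` in a^n b^n ('$' marks end of string)."""
    a, b = prefix.count("a"), prefix.count("b")
    if b == 0:
        return {"a", "b"}
    return {"b"} if b < a else {"$"}

def generalization_until_k_errors(predict_next, k=3, max_n=1000):
    """Largest n handled correctly before the predictor makes k mistakes."""
    errors, last_correct_n = 0, 0
    for n in range(1, max_n + 1):
        string = "a" * n + "b" * n
        ok = all(predict_next(string[:i + 1]) == legal_next(string[:i + 1])
                 for i in range(len(string)))
        if ok:
            last_correct_n = n
        else:
            errors += 1
            if errors == k:
                break
    return last_correct_n

print(len(sample_anbn(5)), generalization_until_k_errors(legal_next))  # 5 1000
```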
450
What is a confusion network or lattice?
graph-like structures where arcs connect nodes representing multiple hypothesized words, thus allowing multiple incoming arcs unlike 1-best sequences
The standard approach to mitigate errors made by an automatic speech recognition system is to use confidence scores associated with each predicted word. In the simplest case, these scores are word posterior probabilities whilst more complex schemes utilise bi-directional recurrent neural network (BiRNN) models. A number of upstream and downstream applications, however, rely on confidence scores assigned not only to 1-best hypotheses but to all words found in confusion networks or lattices. These include but are not limited to speaker adaptation, semi-supervised training and information retrieval. Although word posteriors could be used in those applications as confidence scores, they are known to have reliability issues. To make improved confidence scores more generally available, this paper shows how BiRNNs can be extended from 1-best sequences to confusion network and lattice structures. Experiments are conducted using one of the Cambridge University submissions to the IARPA OpenKWS 2016 competition. The results show that confusion network and lattice-based BiRNNs can provide a significant improvement in confidence estimation.
Recent years have seen an increased usage of spoken language technology in applications ranging from speech transcription BIBREF0 to personal assistants BIBREF1 . The quality of these applications heavily depends on the accuracy of the underlying automatic speech recognition (ASR) system yielding 1-best hypotheses and how well ASR errors are mitigated. The standard approach to ASR error mitigation is confidence scores BIBREF2 , BIBREF3 . A low confidence can give a signal to downstream applications about the high uncertainty of the ASR in its prediction and measures can be taken to mitigate the risk of making a wrong decision. However, confidence scores can also be used in upstream applications such as speaker adaptation BIBREF4 and semi-supervised training BIBREF5 , BIBREF6 to reflect uncertainty among multiple possible alternative hypotheses. Downstream applications, such as machine translation and information retrieval, could similarly benefit from using multiple hypotheses. A range of confidence scores has been proposed in the literature BIBREF3 . In the simplest case, confidence scores are posterior probabilities that can be derived using approaches such as confusion networks BIBREF7 , BIBREF8 . These posteriors typically significantly over-estimate confidence BIBREF8 . Therefore, a number of approaches have been proposed to rectify this problem. These range from simple piece-wise linear mappings given by decision trees BIBREF8 to more complex sequence models such as conditional random fields BIBREF9 , and to neural networks BIBREF10 , BIBREF11 , BIBREF12 . Though improvements over posterior probabilities on 1-best hypotheses were reported, the impact of these approaches on all hypotheses available within confusion networks and lattices has not been investigated. Extending confidence estimation to confusion network and lattice structures can be straightforward for some approaches, such as decision trees, and challenging for others, such as recurrent forms of neural networks. The previous work on encoding graph structures into neural networks BIBREF13 has mostly focused on embedding lattices into a fixed dimensional vector representation BIBREF14 , BIBREF15 . This paper examines a particular example of extending a bi-directional recurrent neural network (BiRNN) BIBREF16 to confusion network and lattice structures. This requires specifying how BiRNN states are propagated in the forward and backward directions, how to merge a variable number of BiRNN states, and how target confidence values are assigned to confusion network and lattice arcs. The paper shows that the state propagation in the forward and backward directions has close links to the standard forward-backward algorithm BIBREF17 . This paper proposes several approaches for merging BiRNN states, including an attention mechanism BIBREF18 . Finally, it describes a Levenshtein algorithm for assigning targets to confusion networks and an approximate solution for lattices. Combined these make it possible to assign confidence scores to every word hypothesised by the ASR, not just from a single extracted hypothesis. The rest of this paper is organised as follows. Section "Bi-Directional Recurrent Neural Network" describes the use of bi-directional recurrent neural networks for confidence estimation in 1-best hypotheses. Section "Confusion Network and Lattice Extensions" describes the extension to confusion network and lattice structures. Experimental results are presented in Section "Experiments" . 
The conclusions drawn from this work are given in Section "Conclusions" .
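One of the pieces described above, merging a variable number of incoming BiRNN arc states at a confusion-network or lattice node, can be sketched with a simple attention mechanism as follows. The scoring function and dimensions are illustrative assumptions; the full system also propagates states forward and backward over the graph and assigns per-arc confidence targets.

```python
# Hedged sketch: attention-based merging of a variable number of incoming
# BiRNN states at a lattice/confusion-network node. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMerge(nn.Module):
    def __init__(self, state_dim=64):
        super().__init__()
        self.score = nn.Linear(state_dim, 1)

    def forward(self, incoming_states):
        # incoming_states: (n_incoming_arcs, state_dim), n varies per node
        weights = F.softmax(self.score(incoming_states), dim=0)  # (n, 1)
        return (weights * incoming_states).sum(dim=0)            # (state_dim,)

merge = AttentionMerge()
node_state = merge(torch.randn(3, 64))   # a node with three incoming arcs
print(node_state.shape)                  # torch.Size([64])
```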
451
How close do clusters match to ground truth tone categories?
The NMI between cluster assignments and ground truth tones for all syllables is 0.641 for Mandarin and 0.464 for Cantonese.
Tone is a prosodic feature used to distinguish words in many languages, some of which are endangered and scarcely documented. In this work, we use unsupervised representation learning to identify probable clusters of syllables that share the same phonemic tone. Our method extracts the pitch for each syllable, then trains a convolutional autoencoder to learn a low dimensional representation for each contour. We then apply the mean shift algorithm to cluster tones in high-density regions of the latent space. Furthermore, by feeding the centers of each cluster into the decoder, we produce a prototypical contour that represents each cluster. We apply this method to spoken multi-syllable words in Mandarin Chinese and Cantonese and evaluate how closely our clusters match the ground truth tone categories. Finally, we discuss some difficulties with our approach, including contextual tone variation and allophony effects.
Tonal languages use pitch to distinguish different words, for example, yi in Mandarin may mean `one', `to move', `already', or `art', depending on the pitch contour. Of over 6000 languages in the world, it is estimated that as many as 60-70% are tonal BIBREF0, BIBREF1. A few of these are national languages (e.g., Mandarin Chinese, Vietnamese, and Thai), but many tonal languages have a small number of speakers and are scarcely documented. There is a limited availability of trained linguists to perform language documentation before these languages become extinct, hence the need for better tools to assist linguists in these tasks. One of the first tasks during the description of an unfamiliar language is determining its phonemic inventory: what are the consonants, vowels, and tones of the language, and which pairs of phonemes are contrastive? Tone presents a unique challenge because unlike consonants and vowels, which can be identified in isolation, tones do not have a fixed pitch, and vary by speaker and situation. Since tone data is subject to interpretation, different linguists may produce different descriptions of the tone system of the same language BIBREF1. In this work, we present a model to automatically infer phonemic tone categories of a tonal language. We use an unsupervised representation learning and clustering approach, which requires only a set of spoken words in the target language, and produces clusters of syllables that probably have the same tone. We apply our method on Mandarin Chinese and Cantonese datasets, for which the ground truth annotation is used for evaluation. Our method does not make any language-specific assumptions, so it may be applied to low-resource languages whose phonemic inventories are not already established.
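The clustering stage of the pipeline described above can be sketched as follows: latent codes from an autoencoder trained on per-syllable pitch contours are clustered with mean shift, cluster centers are decoded into prototype contours, and cluster quality can be checked against ground-truth tones with NMI. The latent codes and decoder below are stand-ins; the paper uses a convolutional autoencoder over extracted pitch contours.

```python
# Hedged sketch of the clustering/evaluation stage: mean shift over latent
# codes, prototype contours from cluster centers, NMI against ground truth.
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
# Pretend latent codes for syllables of two underlying tone categories.
latents = np.vstack([rng.normal(0.0, 0.3, size=(100, 2)),
                     rng.normal(3.0, 0.3, size=(100, 2))])
true_tones = np.array([0] * 100 + [1] * 100)

ms = MeanShift()
labels = ms.fit_predict(latents)
print("clusters found:", len(ms.cluster_centers_))
print("NMI vs. ground truth:", normalized_mutual_info_score(true_tones, labels))

def decode(z):
    """Stand-in for the autoencoder's decoder mapping a latent code to a pitch contour."""
    t = np.linspace(0, 1, 20)
    return z[0] + z[1] * t          # toy linear contour

prototypes = [decode(c) for c in ms.cluster_centers_]
print(len(prototypes), prototypes[0].shape)   # one prototype contour per cluster
```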
452
what are the evaluation metrics?
Precision, Recall, F1
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
A named entity can be mentioned using a great variety of surface forms (Barack Obama, President Obama, Mr. Obama, B. Obama, etc.) and the same surface form can refer to a variety of named entities. For example, according to the English Wikipedia, the form `Europe' can ambiguously be used to refer to 18 different entities, including the continent, the European Union, various Greek mythological entities, a rock band, some music albums, a magazine, a short story, etc. Furthermore, it is possible to refer to a named entity by means of anaphoric pronouns and co-referent expressions such as `he', `her', `their', `I', `the 35 year old', etc. Therefore, in order to provide an adequate and comprehensive account of named entities in text it is necessary to recognize the mention of a named entity and to classify it by a pre-defined type (e.g, person, location, organization). Named Entity Recognition and Classification (NERC) is usually a required step to perform Named Entity Disambiguation (NED), namely to link `Europe' to the right Wikipedia article, and to resolve every form of mentioning or co-referring to the same entity. Nowadays NERC systems are widely being used in research for tasks such as Coreference Resolution BIBREF0 , Named Entity Disambiguation BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 for which a lot of interest has been created by the TAC KBP shared tasks BIBREF6 , Machine Translation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , Aspect Based Sentiment Analysis BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , Event Extraction BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 and Event Ordering BIBREF20 . Moreover, NERC systems are integrated in the processing chain of many industrial software applications, mostly by companies offering specific solutions for a particular industrial sector which require recognizing named entities specific of their domain. There is therefore a clear interest in both academic research and industry to develop robust and efficient NERC systems: For industrial vendors it is particularly important to diversify their services by including NLP technology for a variety of languages whereas in academic research NERC is one of the foundations of many other NLP end-tasks. Most NERC taggers are supervised statistical systems that extract patterns and term features which are considered to be indications of Named Entity (NE) types using the manually annotated training data (extracting orthographic, linguistic and other types of evidence) and often external knowledge resources. As in other NLP tasks, supervised statistical NERC systems are more robust and obtain better performance on available evaluation sets, although sometimes the statistical models can also be combined with specific rules for some NE types. For best performance, supervised statistical approaches require manually annotated training data, which is both expensive and time-consuming. This has seriously hindered the development of robust high performing NERC systems for many languages but also for other domains and text genres BIBREF21 , BIBREF22 , in what we will henceforth call `out-of-domain' evaluations. Moreover, supervised NERC systems often require fine-tuning for each language and, as some of the features require language-specific knowledge, this poses yet an extra complication for the development of robust multilingual NERC systems. For example, it is well-known that in German every noun is capitalized and that compounds including named entities are pervasive. 
This also applies to agglutinative languages such as Basque, Korean, Finnish, Japanese, Hungarian or Turkish. For this type of languages, it had usually been assumed that linguistic features (typically Part of Speech (POS) and lemmas, but also semantic features based on WordNet, for example) and perhaps specific hand-crafted rules were a necessary condition for good NERC performance, as they would help to better capture the most recurrent declensions (cases) of named entities for Basque BIBREF23 or to address problems such as sparsity and capitalization of every noun for German BIBREF24 , BIBREF25 , BIBREF26 . This language dependency was easy to see in the CoNLL 2002 and 2003 tasks, in which systems participating in the two available languages for each edition obtained, in general, different results for each language. This suggests that without fine-tuning for each corpus and language, the systems did not generalize well across languages BIBREF27 . This paper presents a multilingual and robust NERC system based on simple, general and shallow features that heavily relies on word representation features for high performance. Even though we do not use linguistically motivated features, our approach also works well for inflected languages such as Basque and German. We demonstrate the robustness of our approach by reporting best results for five languages (Basque, Dutch, German, English and Spanish) on 12 different datasets, including seven in-domain and eight out-of-domain evaluations.
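The kind of feature combination described above, shallow local evidence plus semi-supervised word-cluster features, can be sketched as a per-token feature dictionary. The cluster lexicons (e.g., Brown cluster paths or word-embedding-based classes) would be induced from large unlabeled corpora; below they are toy dictionaries, and the exact feature templates of the system may differ.

```python
# Hedged sketch: shallow local features plus word-cluster features per token.
def token_features(tokens, i, brown_paths, embedding_classes):
    w = tokens[i]
    feats = {
        "word.lower": w.lower(),
        "word.istitle": w.istitle(),
        "word.isupper": w.isupper(),
        "prefix3": w[:3].lower(),
        "suffix3": w[-3:].lower(),
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<s>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }
    path = brown_paths.get(w.lower(), "")
    for p in (4, 6, 10):                       # Brown path prefixes at several depths
        feats["brown.%d" % p] = path[:p]
    feats["emb.class"] = embedding_classes.get(w.lower(), "<unk>")
    return feats

# Toy cluster lexicons; real ones come from clustering large unlabeled text.
brown_paths = {"london": "0111010010", "paris": "0111010011", "visited": "10010"}
embedding_classes = {"london": "c42", "paris": "c42", "visited": "c7"}
sent = ["John", "visited", "London"]
print(token_features(sent, 2, brown_paths, embedding_classes))
```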
453
What monolingual word representations are used?
AraVec for Arabic, FastText for French, and Word2vec Google News for English.
This paper proposes the first multilingual (French, English and Arabic) and multicultural (Indo-European languages vs. less culturally close languages) irony detection system. We employ both feature-based models and neural architectures using monolingual word representations. We compare the performance of these systems with state-of-the-art systems to identify their capabilities. We show that these monolingual models trained separately on different languages using multilingual word representations or text-based features can open the door to irony detection in languages that lack annotated data for irony.
Figurative language makes use of figures of speech to convey non-literal meaning BIBREF0, BIBREF1. It encompasses a variety of phenomena, including metaphor, humor, and irony. We focus here on irony and use it as an umbrella term that covers satire, parody and sarcasm. Irony detection (ID) has gained relevance recently, due to its importance for extracting information from texts. For example, to go beyond the literal matches of user queries, Veale enriched information retrieval with new operators to enable the non-literal retrieval of creative expressions BIBREF2. Also, the performance of sentiment analysis systems drastically decreases when they are applied to ironic texts BIBREF3, BIBREF4. Most related work concerns English BIBREF5, BIBREF6, with some efforts in French BIBREF7, Portuguese BIBREF8, Italian BIBREF9, Dutch BIBREF10, Hindi BIBREF11, Spanish variants BIBREF12 and Arabic BIBREF13, BIBREF14. Bilingual ID with one model per language has also been explored, like English-Czech BIBREF15 and English-Chinese BIBREF16, but not within a cross-lingual perspective. In social media, such as Twitter, specific hashtags (#irony, #sarcasm) are often used as gold labels to detect irony in a supervised learning setting. Although recent studies pointed out the issue of false-alarm hashtags in self-labeled data BIBREF17, ID via hashtag filtering provides researchers with positive examples with high precision. On the other hand, systems are not able to detect irony in languages where such filtering is not always possible. Multilingual prediction (either relying on machine translation or multilingual embedding methods) is a common solution to tackle under-resourced languages BIBREF18, BIBREF19. While multilinguality has been widely investigated in information retrieval BIBREF20, BIBREF21 and several NLP tasks (e.g., sentiment analysis BIBREF22, BIBREF23 and named entity recognition BIBREF24), it has not been explored for irony. We aim here to bridge the gap by tackling ID in tweets from both multilingual (French, English and Arabic) and multicultural perspectives (Indo-European languages whose speakers share roughly the same cultural background vs. less culturally close languages). Our approach does not rely on either machine translation or parallel corpora (which are not always available), but rather builds on previous corpus-based studies that show that irony is a universal phenomenon and many languages share similar irony devices. For example, Karoui et al. BIBREF25 concluded that their multi-layer annotation schema, initially used to annotate French tweets, is portable to English and Italian, observing relatively the same tendencies in terms of irony categories and markers. Similarly, Chakhachiro BIBREF26 studies irony in English and Arabic, and shows that both languages share several similarities in the rhetorical (e.g., overstatement), grammatical (e.g., redundancy) and lexical (e.g., synonymy) usage of irony devices. The next step now is to show to what extent these observations are still valid from a computational point of view. Our contributions are: A new freely available corpus of Arabic tweets manually annotated for irony detection. Monolingual ID: We propose both feature-based models (relying on language-dependent and language-independent features) and neural models to measure to what extent ID is language dependent. Cross-lingual ID: We experiment using cross-lingual word representation by training on one language and testing on another one to measure how culture-dependent the proposed models are.
Our results are encouraging and open the door to ID in languages that lack annotated data for irony.
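The cross-lingual protocol mentioned above can be sketched roughly as follows: each tweet is represented as the average of (already aligned) cross-lingual word embeddings, a classifier is trained on one language and evaluated on another. The embedding matrices and tweets below are random placeholders, and the logistic-regression stand-in is much simpler than the paper's feature-based and neural models.

```python
# Hedged sketch of train-on-one-language, test-on-another with aligned
# cross-lingual embeddings. All data and embeddings below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def sentence_vector(tokens, emb, dim=50):
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

rng = np.random.default_rng(0)
# Placeholder "aligned" embeddings for two languages in a shared space.
emb_fr = {w: rng.standard_normal(50) for w in ["super", "journée", "ironie", "bravo"]}
emb_ar = {w: rng.standard_normal(50) for w in ["يوم", "رائع", "سخرية", "مبروك"]}

train = [(["super", "journée", "bravo"], 0), (["ironie", "bravo"], 1)] * 20
test = [(["يوم", "رائع"], 0), (["سخرية", "مبروك"], 1)] * 10

X_tr = np.array([sentence_vector(t, emb_fr) for t, _ in train])
y_tr = np.array([y for _, y in train])
X_te = np.array([sentence_vector(t, emb_ar) for t, _ in test])
y_te = np.array([y for _, y in test])

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("cross-lingual F1:", f1_score(y_te, clf.predict(X_te)))
```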
455
Do they build one model per topic or on all topics?
One model per topic.
Summarizing content contributed by individuals can be challenging, because people make different lexical choices even when describing the same events. However, there remains a significant need to summarize such content. Examples include the student responses to post-class reflective questions, product reviews, and news articles published by different news agencies related to the same events. High lexical diversity of these documents hinders the system's ability to effectively identify salient content and reduce summary redundancy. In this paper, we overcome this issue by introducing an integer linear programming-based summarization framework. It incorporates a low-rank approximation to the sentence-word co-occurrence matrix to intrinsically group semantically-similar lexical items. We conduct extensive experiments on datasets of student responses, product reviews, and news documents. Our approach compares favorably to a number of extractive baselines as well as a neural abstractive summarization system. The paper finally sheds light on when and why the proposed framework is effective at summarizing content with high lexical variety.
Summarization is a promising technique for reducing information overload. It aims at converting long text documents to short, concise summaries conveying the essential content of the source documents BIBREF0 . Extractive methods focus on selecting important sentences from the source and concatenating them to form a summary, whereas abstractive methods can involve a number of high-level text operations such as word reordering, paraphrasing, and generalization BIBREF1 . To date, summarization has been successfully exploited for a number of text domains, including news articles BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , product reviews BIBREF6 , online forum threads BIBREF7 , meeting transcripts BIBREF8 , scientific articles BIBREF9 , BIBREF10 , student course responses BIBREF11 , BIBREF12 , and many others. Summarizing content contributed by multiple authors is particularly challenging. This is partly because people tend to use different expressions to convey the same semantic meaning. In a recent study of summarizing student responses to post-class reflective questions, Luo et al., Luo:2016:NAACL observe that the students use distinct lexical items such as “bike elements” and “bicycle parts” to refer to the same concept. The student responses frequently contain expressions with little or no word overlap, such as “the main topics of this course” and “what we will learn in this class,” when they are prompted with “describe what you found most interesting in today's class.” A similar phenomenon has also been observed in the news domain, where reporters use different nicknames, e.g., “Bronx Zoo” and “New York Highlanders,” to refer to the baseball team “New York Yankees.” Luo et al., Luo:2016:NAACL report that about 80% of the document bigrams occur only once or twice for the news domain, whereas the ratio is 97% for student responses, suggesting the latter domain has a higher level of lexical diversity. When source documents contain diverse expressions conveying the same meaning, it can hinder the summarization system's ability to effectively identify salient content from the source documents. It can also increase the summary redundancy if lexically-distinct but semantically-similar expressions are included in the summary. Existing neural encoder-decoder models may not work well at summarizing such content with high lexical variety BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . On one hand, training the neural sequence-to-sequence models requires a large amount of parallel data. The cost of annotating gold-standard summaries for many domains such as student responses can be prohibitive. Without sufficient labelled data, the models can only be trained on automatically gathered corpora, where an instance often includes a news article paired with its title or a few highlights. On the other hand, the summaries produced by existing neural encoder-decoder models are far from perfect. The summaries are mostly extractive with minor edits BIBREF16 , contain repetitive words and phrases BIBREF17 and may not accurately reproduce factual details BIBREF18 , BIBREF19 . We examine the performance of a state-of-the-art neural summarization model in Section § SECREF28 . In this work, we propose to augment the integer linear programming (ILP)-based summarization framework with a low-rank approximation of the co-occurrence matrix, and further evaluate the approach on a broad range of datasets exhibiting high lexical diversity. 
The ILP framework, being extractive in nature, has demonstrated considerable success on a number of summarization tasks BIBREF20 , BIBREF21 . It generates a summary by selecting a set of sentences from the source documents. The sentences shall maximize the coverage of important source content, while minimizing the redundancy among themselves. At the heart of the algorithm is a sentence-concept co-occurrence matrix, used to determine if a sentence contains important concepts and whether two sentences share the same concepts. We introduce a low-rank approximation to the co-occurrence matrix and optimize it using the proximal gradient method. The resulting system thus allows different sentences to share co-occurrence statistics. For example, “The activity with the bicycle parts" will be allowed to partially contain “bike elements" although the latter phrase does not appear in the sentence. The low-rank matrix approximation provides an effective way to implicitly group lexically-diverse but semantically-similar expressions. It can handle out-of-vocabulary expressions and domain-specific terminologies well, hence being a more principled approach than heuristically calculating similarities of word embeddings. Our research contributions of this work include the following. In the following sections we first present a thorough review of the related work (§ SECREF2 ), then introduce our ILP summarization framework (§ SECREF3 ) with a low-rank approximation of the co-occurrence matrix optimized using the proximal gradient method (§ SECREF4 ). Experiments are performed on a collection of eight datasets (§ SECREF5 ) containing student responses to post-class reflective questions, product reviews, peer reviews, and news articles. Intrinsic evaluation (§ SECREF20 ) shows that the low-rank approximation algorithm can effectively group distinct expressions used in similar semantic context. For extrinsic evaluation (§ SECREF28 ) our proposed framework obtains competitive results in comparison to state-of-the-art summarization systems. Finally, we conduct comprehensive studies analyzing the characteristics of the datasets and suggest critical factors that affect the summarization performance (§ SECREF7 ).
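The introduction above describes approximating the sentence-concept co-occurrence matrix with a low-rank matrix optimized by the proximal gradient method. Below is a minimal numpy sketch of that idea under a standard squared-error-plus-nuclear-norm formulation; the objective, the penalty weight lam, the step size, and the toy matrix are illustrative assumptions rather than the paper's exact optimization problem, which couples the approximation with the ILP objective.

```python
import numpy as np

def low_rank_cooccurrence(A, lam=0.3, step=0.5, iters=50):
    """Approximate a sentence-concept co-occurrence matrix A with a low-rank
    matrix X by minimizing 0.5*||X - A||_F^2 + lam*||X||_* using proximal
    gradient steps (singular value soft-thresholding)."""
    X = A.copy().astype(float)
    for _ in range(iters):
        G = X - step * (X - A)                 # gradient step on the data term
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        s = np.maximum(s - lam * step, 0.0)    # prox of the nuclear norm
        X = (U * s) @ Vt
    return X

# Toy matrix: rows are sentences, columns are concepts. "bicycle parts" and
# "bike elements" never co-occur in one sentence, but the low-rank fit lets
# sentences share co-occurrence statistics through the shared "activity" concept.
A = np.array([
    [1, 1, 0, 0],   # sentence with "activity", "bicycle parts"
    [1, 0, 1, 0],   # sentence with "activity", "bike elements"
    [0, 0, 0, 1],   # unrelated sentence
], dtype=float)
print(np.round(low_rank_cooccurrence(A), 2))
```

After thresholding, the approximation assigns small positive weights to concept entries a sentence never literally contained, which is the implicit grouping of lexically-diverse but semantically-similar expressions described above.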
458
How well does their system perform on the development set of SRE?
EER 16.04, Cmindet 0.6012, Cdet 0.6107
This paper presents the Intelligent Voice (IV) system submitted to the NIST 2016 Speaker Recognition Evaluation (SRE). The primary emphasis of SRE this year was on developing speaker recognition technology that is robust for novel languages that are much more heterogeneous than those used in the current state of the art, using significantly less training data that does not contain meta-data from those languages. The system is based on the state-of-the-art i-vector/PLDA framework, which is developed under the fixed training condition, and the results are reported on the protocol defined on the development set of the challenge.
Compared to previous years, the 2016 NIST speaker recognition evaluation (SRE) marked a major shift from English towards Austronesian and Chinese languages. The task, as in previous years, is to perform speaker detection with a focus on telephone speech data recorded over a variety of handset types. The main challenges introduced in this evaluation are duration and language variability. The potential variation of languages addressed in this evaluation, the recording environment, and the variability of test segment duration influenced the design of our system. Our goal was to utilize recent advances in language normalization, domain adaptation, speech activity detection and session compensation techniques to mitigate the adverse bias introduced in this year's evaluation. Over recent years, the i-vector representation of speech segments has been widely used by state-of-the-art speaker recognition systems BIBREF0 . The speaker recognition technology based on i-vectors currently dominates the research field due to its performance, low computational cost and the compatibility of i-vectors with machine learning techniques. This dominance is reflected by the recent NIST i-vector machine learning challenge BIBREF1 which was designed to find the most promising algorithmic approaches to speaker recognition specifically on the basis of i-vectors BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . The outstanding frame-alignment ability of DNNs, which has achieved remarkable performance in text-independent speaker recognition for English data BIBREF6 , BIBREF7 , failed to provide recognition performance even comparable to the traditional GMM. Therefore, we concentrated on the cepstral-based GMM/i-vector system. We outline in this paper the Intelligent Voice system, techniques and results obtained on the SRE 2016 development set that will mirror the evaluation condition as well as the timing report. Section SECREF2 describes the data used for the system training. The front-end and back-end processing of the system are presented in Sections SECREF3 and SECREF4 respectively. In Section SECREF5 , we describe the experimental evaluation of the system on the SRE 2016 development set. Finally, we present a timing analysis of the system in Section SECREF6 .
468
Which of the classifiers showed the best performance?
Logistic regression
Event detection using social media streams needs a set of informative features with strong signals that need minimal preprocessing and are highly associated with events of interest. Identifying these informative features as keywords from Twitter is challenging, as people use informal language to express their thoughts and feelings. This informality includes acronyms, misspelled words, synonyms, transliteration and ambiguous terms. In this paper, we propose an efficient method to select the keywords frequently used in Twitter that are mostly associated with events of interest such as protests. The volume of these keywords is tracked in real time to identify the events of interest in a binary classification scheme. We use keywords within word-pairs to capture the context. The proposed method is to binarize vectors of daily counts for each word-pair by applying a spike detection temporal filter, then use the Jaccard metric to measure the similarity of the binary vector for each word-pair with the binary vector describing event occurrence. The top n word-pairs are used as features to classify any day to be an event or non-event day. The selected features are tested using multiple classifiers such as Naive Bayes, SVM, Logistic Regression, KNN and decision trees. They all produced AUC ROC scores up to 0.91 and F1 scores up to 0.79. The experiment is performed using the English language in multiple cities such as Melbourne, Sydney and Brisbane as well as the Indonesian language in Jakarta. The two experiments, comprising different languages and locations, yielded similar results.
Event detection is important for emergency services to react rapidly and minimize damage. For example, terrorist attacks, protests, or bushfires may require the presence of ambulances, firefighters, and police as soon as possible to save people. This research aims to detect events as soon as they occur and are reported by some Twitter user. The event detection process requires knowing the keywords associated with each event and assessing the minimal count of each word needed to decide confidently that an event has occurred. In this research, we propose a novel method of spike matching to identify keywords, and use probabilistic classification to assess the probability of having an event given the volume of each word. Event detection and prediction from social networks have been studied frequently in recent years. Most of the predictive frameworks use textual content such as likes, shares, and retweets, as features. The text is used as features either by tracking the temporal patterns of keywords, clustering words into topics, or by evaluating sentiment scores and polarity. The main challenge in keyword-based models is to determine which words to use in the first place, especially as people use words in a non-standard way, particularly on Twitter. In this research, we aim to detect large events as soon as they happen with near-live sensitivity. For example, when spontaneous protests occur just after recent news such as a tax increase or a budget cut, we need indicators that raise the flag of an ongoing protest. Identifying these indicators requires selecting a set of words that are mostly associated with the events of interest such as protests. We then track the volume of these words and evaluate the probability of an event occurring given the current volume of each of the tracked features. The main challenge is to find this set of features that allows such probabilistic classification. Using text as features in Twitter is challenging because of the informal nature of the tweets, the limited length of the tweet, platform-specific language, and the multilingual nature of Twitter BIBREF0 , BIBREF1 , BIBREF2 . The main challenges for text analysis in Twitter are listed below: We approached the first and second challenges by using a Bayesian approach to learn which terms were associated with events, regardless of whether they are standard language, acronyms, or even made-up words, so long as they match the events of interest. The third and fourth challenges are approached by using word-pairs, where we extract all the pairs of co-occurring words within each tweet. This allows us to recognize that the context of the word-pair ('Messi', 'strike') is different from that of ('labour', 'strike'). According to the distributional semantic hypothesis, event-related words are likely to be used on the day of an event more frequently than on any normal day before or after the event. This will form a spike in the keyword count magnitude along the timeline as illustrated in Figure FIGREF6 . To find the words most associated with events, we search for the words that achieve the highest number of spikes matching the days of events. We use the Jaccard similarity metric as it rewards spikes matching events and penalizes both spikes with no event and events without spikes. Individual words can be noisy due to the misuse of terms by people, especially in big data environments. We therefore used word-pairs as textual features in order to capture the context of each word.
For example, this can differentiate between the multiple usages of the word “strike” within the contexts of “lightning strike”, “football strike” and “labour strike”. In this paper, we propose a method to find the best word-pairs to represent the events of interest. These word-pairs can be used for time series analysis to predict future events as indicated in Figure FIGREF1 . They can also be used as seeds for topic modelling, or to find related posts and word-pairs using dynamic query expansion. The proposed framework uses a temporal filter to identify the spikes within the word-pair signal to binarize the word-pair time series vector BIBREF3 . The binary vector of the word-pair is compared to the protest days vector using the Jaccard similarity index BIBREF4 , BIBREF5 , where the word-pairs with the highest similarity scores are the ones most associated with protest days. This feature selection method is built upon the assumption that people discuss an event on the day of that event more than on any day before or after the event. This implies that word-pairs related to the event will form a spike on this specific day. Some of the spiking word-pairs are related to the nature of the event itself, such as “taxi protest” or “fair education”. These word-pairs will appear once or twice along the time frame. Meanwhile, more generic word-pairs such as “human rights” or “labour strike” will spike more frequently in the days of events regardless of the nature of the protest. To test our method, we developed two experiments using all the tweets in Melbourne and Sydney over a period of 640 days. The total number of tweets exceeded 4 million tweets per day, with a total word-pair count of 12 million different word-pairs per day, forming 6 billion word-pairs over the entire timeframe. The selected word-pairs from each city are used as features to classify whether or not there will be an event on a specific day in that city. We classified events from the extracted word-pairs using 9 classifiers including Naive Bayes, Decision Trees, KNN, SVM, and logistic regression. In Section 2, we describe the event detection methods. Section 3 states the known statistical methods used for data association and feature selection. Section 4 describes the proposed feature selection method. Section 5 describes model training and prediction. Section 6 describes the experiment design, the data and the results. Section 7 summarizes the paper, discusses the research conclusions and explains future work.
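To make the spike-matching feature selection concrete, here is a small numpy sketch of the idea described above: binarize each word-pair's daily-count series with a spike filter, then score it by Jaccard similarity against the binary event-day vector. The z-score filter, the threshold value, and the toy counts are assumptions for illustration and not the paper's exact temporal filter.

```python
import numpy as np

def binarize_spikes(counts, z_thresh=1.5):
    """Turn a daily-count series into a binary spike vector: a day is a
    spike if its count is well above the series mean (illustrative rule)."""
    counts = np.asarray(counts, dtype=float)
    mu, sigma = counts.mean(), counts.std() + 1e-9
    return ((counts - mu) / sigma > z_thresh).astype(int)

def jaccard(a, b):
    """Jaccard similarity between two binary vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Toy daily counts for two word-pairs over 10 days, plus the event-day vector.
event_days = np.array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0])
counts = {
    ("labour", "strike"): [2, 1, 40, 3, 2, 1, 45, 2, 1, 2],    # spikes on event days
    ("football", "strike"): [5, 6, 4, 30, 5, 6, 5, 4, 6, 5],   # spikes off event days
}

scores = {wp: jaccard(binarize_spikes(c), event_days) for wp, c in counts.items()}
top_pairs = sorted(scores, key=scores.get, reverse=True)
print(scores)        # ('labour','strike') scores 1.0, ('football','strike') 0.0
print(top_pairs[:1]) # the top-n word-pairs become classifier features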
469
How many speeches are in the dataset?
5575 speeches
With the increasing usage of the internet, more and more data is being digitized, including parliamentary debates, but it is in an unstructured format. There is a need to convert it into a structured format for linguistic analysis. Much work has been done on various aspects of parliamentary data such as Hansard and American congressional floor-debate data, but less on pragmatics. In this paper, we provide a dataset for the synopsis of Indian parliamentary debates and perform stance classification of speeches, i.e., identifying if the speaker is supporting the bill/issue or against it. We also analyze the intention of the speeches beyond mere sentences, i.e., pragmatics in the parliament. Based on thorough manual analysis of the debates, we developed an annotation scheme of 4 mutually exclusive categories to analyze the purpose of the speeches: to find ISSUES, to BLAME, to APPRECIATE and to CALL FOR ACTION. We have annotated the provided dataset with these 4 categories and conducted preliminary experiments for automatic detection of the categories. Our automated classification approach gave us promising results.
As the world moves towards increasing forms of digitization, the creation of text corpora has become an important activity for NLP and other fields of research. Parliamentary data is a rich corpus of discourse on a wide array of topics. The Lok Sabha website provides access to all kinds of reports, debates, and bills related to the proceedings of the house. Similarly, the Rajya Sabha website also contains debates, bills, and reports introduced in the house. The Lok Sabha website also contains information about members of the parliament who are elected by the people and debate in the house. Since the data is unstructured, it cannot be computationally analyzed. There is a need to shape the data into a structured format for analysis. This data is important as it can be used to visualize person, party and agenda level semantics in the house. The data that we get from parliamentary proceedings contains sarcasm, interjections and allegations, which makes it difficult to apply standard NLP techniques BIBREF0 . Members of the parliament discuss various important aspects and there is a strong purpose behind every speech. We wanted to analyze this particular aspect. Traditional polar stances (for or against) do not account for the diplomatic intricacies in the speeches. We created this taxonomy to better understand the semantics, i.e., the pragmatics, of the speeches and to give enriched insights into members' responses in a speech. Pragmatics is the study of the speaker's meaning, focusing not on the phonetic or grammatical form of an utterance, but on what the speaker's intentions and beliefs are. It is a sub-field of linguistics and semiotics that studies the ways in which context contributes to meaning. After thorough investigation of many speeches, we found that the statements made by members cannot be deemed strictly "for or against" a bill or government. A person may be appreciating a bill or the government's effort in one part of a speech while also calling attention to other contentious issues. Similarly, a person criticizing the government for an irresponsible action could be giving some constructive suggestions elsewhere. A political discourse may not always be polar and might have a higher spectrum of meaning. After investigating and highlighting statements with different intentions, we came up with a minimal set of 4 mutually exclusive categories with different degrees of correlation with the traditional two polar categories (for and against). It is observed that any statement by a participating member will fall into one of these categories, namely Appreciation, Call for Action, Issue, and Blaming. For example, if the debate consists mostly of issues, one can infer that the bill is not serving its purpose well. Also, this preliminary step will lead to new areas of research such as the detection of appreciation and blame, along the lines of argument mining, which has been evolving in recent years in the field of linguistics. We will quote portions of a few speeches which will give an idea of the data being presented: This city has lost its place due to negligence of previous governments and almost all industries have migrated from here and lack of infrastructure facilities, business is also losing its grip. It is very unfortunate that previous UP Governments also did not do any justice to this city. - Shri Devendra Singh Bhole, May 03, 2016 As evident, the speaker is clearly blaming the previous governments for their negligence towards the city.
In this sense the data is very rich, and a lot of linguistic research is possible. Researchers can work on different aspects such as the detection of critiques made by members, suggestions raised by members, etc. Given the data, it can be used for rhetoric, linguistic, historical, political and sociological research. Parliamentary data is a major source of socially relevant content. A new series of workshops is being conducted for the sole purpose of encouraging research on parliamentary debates (ParlClarin). As a preliminary step, we created four major categories of the speeches spoken by the parliament members. The definitions and examples of the four categories are given in the tables below. The examples are taken from a debate on the NABARD bill in the Lok Sabha. A speech can be labelled with multiple categories as members can appreciate and raise issues in the same speech. The following points are the contributions of this paper:
470
How are multimodal representations combined?
The image feature vectors are mapped into BERT embedding dimensions and treated like a text sequence afterwards.
Pre-trained language models such as BERT have recently contributed to significant advances in Natural Language Processing tasks. Interestingly, while multilingual BERT models have demonstrated impressive results, recent works have shown how monolingual BERT can also be competitive in zero-shot cross-lingual settings. This suggests that the abstractions learned by these models can transfer across languages, even when trained on monolingual data. In this paper, we investigate whether such generalization potential applies to other modalities, such as vision: does BERT contain abstractions that generalize beyond text? We introduce BERT-gen, an architecture for text generation based on BERT, able to leverage either mono- or multi-modal representations. The results reported under different configurations indicate a positive answer to our research question, and the proposed model obtains substantial improvements over the state-of-the-art on two established Visual Question Generation datasets.
The BERT language model BIBREF0 is a Deep Bidirectional Transformer BIBREF1 pre-trained on textual corpora (BookCorpus and Wikipedia) using a Masked Language Model (MLM) objective – predicting some words that are randomly masked in the sentence, along with a sentence entailment loss. Recent research efforts BIBREF2 have shown how BERT encodes abstractions that generalize across languages, even when trained on monolingual data only. This contradicts the common belief BIBREF3, BIBREF4 that a shared vocabulary and joint training on multiple languages are essential to achieve cross-lingual generalization capabilities. In this work, we further investigate the generalization potentials of large pre-trained LMs, this time moving to a cross-modal setup: does BERT contain abstractions that generalize beyond text? In the Artificial Intelligence community, several works have investigated the longstanding research question of whether textual representations encode visual information. On the one hand, a large body of research called language grounding considers that textual representations lack visual commonsense BIBREF5, and intend to ground the meaning of words BIBREF6, BIBREF7 and sentences BIBREF8, BIBREF9 in the perceptual world. In another body of work, textual representations have successfully been used to tackle multi-modal tasks BIBREF10 such as Zero-Shot Learning BIBREF11, Visual Question Answering BIBREF12 or Image Captioning BIBREF13. Following the latter line of research, in this paper we evaluate the potential of pre-trained language models to generalize in the context of Visual Question Generation (VQG) BIBREF14. The Visual Question Generation task allows us to investigate the cross-modal capabilities of BERT: unlike Image Captioning (where the input is only visual) or VQA (where the input is visual and textual), VQG is a multi-modal task where input can be textual and/or visual. VQG data usually includes images and the associated captions, along with corresponding questions about the image; thus, different experimental setups can be designed to analyze the impact of each modality. Indeed, the questions can be generated using i) textual (the caption), ii) visual (the image), or iii) multi-modal (both the caption and the image) input. From a practical standpoint, the VQG task has several applications: robots or AI assistants could ask questions rooted in multi-modal data (e.g. fusing conversational data with visual information from captors and cameras), in order to refine their interpretation of the situation they are presented with. It could also allow systems relying on knowledge-bases to gain visual common sense and deal with the Human Reporting Bias BIBREF15, which states that the content of images and text are intrinsically different, since visual common sense is rarely explicitly stated in text. Recently, BERT-based Multi-Modal Language Models have been proposed BIBREF16, BIBREF17, BIBREF18, BIBREF19 to tackle multi-modal tasks, using different approaches to incorporate visual data within BERT. From these works, it is left to explore whether the cross-modal alignment is fully learned, or it is to some extent already encoded in the BERT abstractions. Therefore, in contrast with those approaches, we explicitly avoid using the following complex mechanisms: Multi-modal supervision: all previous works exploit an explicit multi-modal supervision through a pre-training step; the models have access to text/image pairs as input, to align their representations. 
In contrast, our model can switch from text-only to image-only mode without any explicit alignment. Image-specific losses: specific losses such as Masked RoI (Region of Interest) Classification with Linguistic Clues BIBREF19 or sentence-image prediction BIBREF18 have been reported to be helpful for aligning visual and text modalities. Instead, we only use the original MLM loss from BERT (and not its entailment loss). Non-linearities: we explore a scenario in which the only learnable parameters, for aligning image representations to BERT, are those of a simple linear projection layer. This allows us to assess whether the representations encoded in BERT can transfer out-of-the-box to another modality. Furthermore, to the best of our knowledge, this paper is the first attempt to investigate multi-modal text generation using pre-trained language models. We introduce BERT-gen, a text generator based on BERT, that can be applied both in mono and multi-modal settings. We treat images similarly to text: while a sentence is seen as a sequence of (sub)word tokens, an image is seen as a sequence of objects associated with their corresponding positions (bounding boxes). We show how a simple linear mapping, projecting visual embeddings into the first layer, is enough to ground BERT in the visual realm: text and image object representations are found to be effectively aligned, and the attention over words transfers to attention over the relevant objects in the image. Our contributions can be summarized as follows: we introduce BERT-gen, a novel method for generating text using BERT, that can be applied in both mono and multi-modal settings; we show that the semantic abstractions encoded in pre-trained BERT can generalize to another modality; we report state-of-the-art results on the VQG task; we provide extensive ablation analyses to interpret the behavior of BERT-gen under different configurations (mono- or multi-modal).
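The linear mapping described above (project object features into the BERT embedding space and treat the image as a sequence of visual tokens) can be sketched in a few lines of PyTorch. The feature dimensionality of 2048 (as from a typical object detector), the 4-d box coordinates, the BERT hidden size of 768, and the module name are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class VisualToBert(nn.Module):
    """Project detected-object features plus bounding-box coordinates into the
    BERT embedding space so an image becomes a sequence of 'visual tokens'."""
    def __init__(self, obj_dim=2048, box_dim=4, bert_dim=768):
        super().__init__()
        self.proj = nn.Linear(obj_dim + box_dim, bert_dim)  # the only learned alignment

    def forward(self, obj_feats, boxes):
        # obj_feats: (batch, num_objects, obj_dim); boxes: (batch, num_objects, box_dim)
        x = torch.cat([obj_feats, boxes], dim=-1)
        return self.proj(x)                                  # (batch, num_objects, bert_dim)

# A caption's token embeddings and the projected object embeddings can then be
# concatenated into one input sequence for the transformer.
mapper = VisualToBert()
objs, boxes = torch.randn(1, 36, 2048), torch.rand(1, 36, 4)
visual_tokens = mapper(objs, boxes)
text_tokens = torch.randn(1, 12, 768)        # stand-in for BERT word embeddings
sequence = torch.cat([text_tokens, visual_tokens], dim=1)
print(sequence.shape)                        # torch.Size([1, 48, 768])
```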
471
What is the problem with existing metrics that they are trying to address?
Answer with content missing: (whole introduction) However, recent studies observe the limits of ROUGE and find in some cases, it fails to reach consensus with human judgment (Paulus et al., 2017; Schluter, 2017).
Commonly adopted metrics for extractive text summarization like ROUGE focus on the lexical similarity and are facet-agnostic. In this paper, we present a facet-aware evaluation procedure for better assessment of the information coverage in extracted summaries while still supporting automatic evaluation once annotated. Specifically, we treat facet instead of token as the basic unit for evaluation, manually annotate the support sentences for each facet, and directly evaluate extractive methods by comparing the indices of extracted sentences with support sentences. We demonstrate the benefits of the proposed setup by performing a thorough quantitative investigation on the CNN/Daily Mail dataset, which in the meantime reveals useful insights of state-of-the-art summarization methods. Data can be found at this https URL.
In this section, we describe the procedure of annotating CNN/Daily Mail. For each facet (sentence) in the reference summary, we find all its support sentences in the document that can cover its meaning. Note that the support sentences are likely to be more verbose, but we only consider whether the sentences cover the semantics of the facet regardless of their length. The reason is that we believe extractive summarization should focus on information coverage and once salient sentences are extracted, one can then compress them in an abstractive way BIBREF0, BIBREF1. Formally, we denote one document-summary pair as $\lbrace d, r\rbrace $, where $d = \lbrace d^j\rbrace _{j=1}^D$, $r = \lbrace r^j\rbrace _{j=1}^R$, and $D$, $R$ denote the number of sentences. We define one support group of facet $\mathcal {F}$ as a minimum set of sentences in the document that express the meaning of $\mathcal {F}$. For each $r^j$, we annotate a FAM $r^j \rightarrow \lbrace \lbrace d^{s_{j, 1}^k}\rbrace _{k=1}^{\textrm {K}_1}, \lbrace d^{s_{j, 2}^k}\rbrace _{k=1}^{\textrm {K}_2}, ..., \lbrace d^{s_{j, N}^k}\rbrace _{k=1}^{\textrm {K}_N}\rbrace $ in which each $\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n}$ is a support group and $s_{j, n}^k$ is the index of the $k$-th support sentence in group $n$. One may regard the procedure as creating extractive labels, which is widely used in extractive summarization since only abstractive references are available in existing datasets. The major differences are that 1) We label all the support sentences instead of just one or a fixed number of sentences, i.e., we do not specify $\textrm {K}_n$. For example, we would put two sentences into one support group if they are complementary and only combining them can cover the facet. 2) We find multiple support groups ($N > 1$), as there could be more than one set of sentences that cover the same facet and extracting any one of them is acceptable. In contrast, there is no concept of support group in extractive labels as they inherently form one such group. We sampled 150 document-summary pairs from the test set of CNN/Daily Mail. 344 FAMs were created by three annotators with high agreement (pairwise Jaccard index 0.71) and further verified to reach consensus. We found that the facets can be divided into three categories based on their quality and degree of abstraction as follows. Random: The facet is quite random, either because the document itself is too hard to summarize (e.g., a report full of quotations) or the human editor was too subjective when writing the summary BIBREF2. Another possible reason is that the so-called “summaries” are in fact “story highlights”, which may reasonably contain details. We found that 41/150 (26%) samples have random facet(s), implying there are severe issues in the reference summaries of CNN/Daily Mail. Low Abstraction: The facet can be mapped to its support sentences. We further divide this category by the (rounded) average number of support sentences K of $N$ support groups ($\textrm {K}=\frac{\sum _{n=1}^N |\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n}|}{N}$). As in Table TABREF1, most facets (93%) in the reference summaries are paraphrases or compression of one to two sentences in the document without much abstraction. High Abstraction: The facet cannot be mapped to its support sentences, which indicates that its writing requires a deep understanding of the document rather than reorganizing several sentences.
The proportion of this category (7%) also indicates how often extractive methods would not work (well) on CNN/Daily Mail. Surprisingly, we found it easier than previously believed to create the FAMs on CNN/Daily Mail, as it is uncommon ($\overline{N} = 1.56$) to detect multiple sentences with similar semantics (compared to multi-document summarization). In addition, most support groups only have one or two support sentences with large lexical overlap.
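The evaluation described above compares the indices of extracted sentences with the annotated support groups. Here is a small Python sketch of one way that comparison might be scored: a facet counts as covered if the extracted indices include every sentence of at least one of its support groups. The scoring function name and the toy FAM are illustrative assumptions, not the authors' released evaluation code.

```python
def facet_coverage(extracted, fams):
    """extracted: set of extracted sentence indices.
    fams: list of facets, each a list of support groups, where a support group
    is a set of document sentence indices that together cover the facet."""
    covered = 0
    for support_groups in fams:
        if any(group <= extracted for group in support_groups):
            covered += 1
    return covered / len(fams) if fams else 0.0

# Toy facet-aware mapping (FAM) for a 3-facet reference summary.
fams = [
    [{0}, {4, 5}],   # facet 1: sentence 0 alone, or sentences 4 and 5 together
    [{2}],           # facet 2: a single support sentence
    [{7, 8}],        # facet 3 needs both sentences 7 and 8
]
print(facet_coverage({0, 2, 7}, fams))   # 0.67: facet 3 is only partially covered
```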
472
How are discourse features incorporated into the model?
They derive entity grids with grammatical relations and RST discourse relations and concatenate them with the pooling vector for the char-bigrams before feeding the resulting vector to the softmax layer.
We explore techniques to maximize the effectiveness of discourse information in the task of authorship attribution. We present a novel method to embed discourse features in a Convolutional Neural Network text classifier, which achieves a state-of-the-art result by a substantial margin. We empirically investigate several featurization methods to understand the conditions under which discourse features contribute non-trivial performance gains, and analyze discourse embeddings.
Authorship attribution (AA) is the task of identifying the author of a text, given a set of author-labeled training texts. This task typically makes use of stylometric cues at the surface lexical and syntactic level BIBREF0 , although BIBREF1 and BIBREF2 go beyond the sentence level, showing that discourse information can help. However, they achieve limited performance gains and lack an in-depth analysis of discourse featurization techniques. More recently, convolutional neural networks (CNNs) have demonstrated considerable success on AA relying only on character-level INLINEFORM0 -grams BIBREF3 , BIBREF4 . The strength of these models is evidenced by findings that traditional stylometric features such as word INLINEFORM1 -grams and POS-tags do not improve, and can sometimes even hurt performance BIBREF3 , BIBREF5 . However, none of these CNN models make use of discourse. Our work builds upon these prior studies by exploring an effective method to (i) featurize the discourse information, and (ii) integrate discourse features into the best text classifier (i.e., CNN-based models), in the expectation of achieving state-of-the-art results in AA. BIBREF1 (henceforth F&H14) made the first comprehensive attempt at using discourse information for AA. They employ an entity-grid model, an approach introduced by BIBREF6 for the task of ordering sentences. This model tracks how the grammatical relations of salient entities (e.g., subj, obj, etc.) change between pairs of sentences in a document, thus capturing a form of discourse coherence. The grid is summarized into a vector of transition probabilities. However, because the model only records the transition between two consecutive sentences at a time, the coherence is local. BIBREF2 (henceforth F15) further extends the entity-grid model by replacing grammatical relations with discourse relations from Rhetorical Structure Theory BIBREF7 . Their study uses a linear-kernel SVM to perform pairwise author classifications, where a non-discourse model captures lexical and syntactic features. They find that adding the entity-grid with grammatical relations enhances the non-discourse model by almost 1% in accuracy, and using RST relations provides an improvement of 3%. The study, however, works with only one small dataset and their models produce overall unremarkable performance ( INLINEFORM0 85%). BIBREF8 propose an advanced Recursive Neural Network (RecNN) architecture to work with RST in the more general area of text categorization and present impressive results. However, we suspect that the massive number of parameters of RecNNs would likely cause overfitting when working with smaller datasets, as is often the case in AA tasks. In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically, We explore these questions using two approaches to represent salient entities: grammatical relations, and RST discourse relations. We apply these models to datasets of varying sizes and genres, and find that adding any discourse information improves AA consistently on longer documents, but has mixed results on shorter documents. Further, embedding the discourse features in a parallel CNN at the input end yields better performance than concatenating them to the output layer as a feature vector (Section SECREF3 ). The global featurization is more effective than the local one. 
We also show that SVMs, which can only use discourse probability vectors, neither produce competitive performance (even with fine-tuning) nor generalize in using the discourse information effectively.
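One of the featurization options discussed above is to concatenate a discourse feature vector with the pooled char-bigram CNN representation just before the softmax layer. The following PyTorch sketch shows that wiring; the embedding size, filter count, discourse-vector dimensionality and class count are illustrative assumptions rather than the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class CharBigramCNNWithDiscourse(nn.Module):
    """Char-bigram CNN classifier whose max-pooled convolutional representation is
    concatenated with a discourse feature vector (e.g. entity-grid or RST-relation
    transition probabilities) before the softmax/output layer."""
    def __init__(self, vocab_size, emb_dim=50, n_filters=100,
                 discourse_dim=64, n_authors=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.out = nn.Linear(n_filters + discourse_dim, n_authors)

    def forward(self, bigram_ids, discourse_vec):
        x = self.embed(bigram_ids).transpose(1, 2)       # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values   # max-pool over time
        x = torch.cat([x, discourse_vec], dim=1)         # append discourse features
        return self.out(x)                               # author logits

model = CharBigramCNNWithDiscourse(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 200)), torch.rand(2, 64))
print(logits.shape)  # torch.Size([2, 10])
```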
473
What are proof paths?
A sequence of logical statements represented in a computational graph
Neural models combining representation learning and reasoning in an end-to-end trainable manner are receiving increasing interest. However, their use is severely limited by their computational complexity, which renders them unusable on real world datasets. We focus on the Neural Theorem Prover (NTP) model proposed by Rocktäschel and Riedel (2017), a continuous relaxation of the Prolog backward chaining algorithm where unification between terms is replaced by the similarity between their embedding representations. For answering a given query, this model needs to consider all possible proof paths, and then aggregate results - this quickly becomes infeasible even for small Knowledge Bases (KBs). We observe that we can accurately approximate the inference process in this model by considering only proof paths associated with the highest proof scores. This enables inference and learning on previously impracticable KBs.
Recent advancements in deep learning have intensified the long-standing interest in integrating symbolic reasoning with connectionist models BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The attraction of said integration stems from the complementing properties of these systems. Symbolic reasoning models offer interpretability, efficient generalisation from a small number of examples, and the ability to leverage knowledge provided by an expert. However, these systems are unable to handle ambiguous and noisy high-dimensional data such as sensory inputs BIBREF5 . On the other hand, representation learning models exhibit robustness to noise and ambiguity, can learn task-specific representations, and achieve state-of-the-art results on a wide variety of tasks BIBREF6 . However, being universal function approximators, these models require vast amounts of training data and are treated as non-interpretable black boxes. One way of integrating the symbolic and sub-symbolic models is by continuously relaxing discrete operations and implementing them in a connectionist framework. Recent approaches in this direction focused on learning algorithmic behaviour without the explicit symbolic representations of a program BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , and consequently with it BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . In the inductive logic programming setting, two new models, NTP BIBREF0 and Differentiable Inductive Logic Programming ( $\partial $ ILP) BIBREF16 successfully combined the interpretability and data efficiency of a logic programming system with the expressiveness and robustness of neural networks. In this paper, we focus on the NTP model proposed by BIBREF0 . Akin to recent neural-symbolic models, NTPs rely on a continuous relaxation of a discrete algorithm, operating over the sub-symbolic representations. In this case, the algorithm is an analogue to Prolog's backward chaining with a relaxed unification operator. The backward chaining algorithm constructs neural networks, which model continuously relaxed proof paths using sub-symbolic representations. These representations are learned end-to-end by maximising the proof scores of facts in the KB, while minimising the score of facts not in the KB, in a link prediction setting BIBREF17 . However, while the symbolic unification checks whether two terms can represent the same structure, the relaxed unification measures the similarity between their sub-symbolic representations. This continuous relaxation is at the crux of NTPs' inability to scale to large datasets. During both training and inference, NTPs need to compute all possible proof trees needed for proving a query, relying on the continuous unification of the query with all the rules and facts in the KB. This procedure quickly becomes infeasible for large datasets, as the number of nodes of the resulting computation graph grows exponentially. Our insight is that we can radically reduce the computational complexity of inference and learning by generating only the most promising proof paths. In particular, we show that the problem of finding the facts in the KB that best explain a query can be reduced to a $k$ -nearest neighbour problem, for which efficient exact and approximate solutions exist BIBREF18 . This enables us to apply NTPs to previously unreachable real-world datasets, such as WordNet.
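The reduction described above, finding the facts whose embeddings best match a query instead of unifying against the whole KB, can be illustrated with a short numpy sketch. The Gaussian-kernel unification score, the embedding dimensionality and the toy KB are assumptions for illustration; in practice an exact or approximate nearest-neighbour index would replace the brute-force distance computation.

```python
import numpy as np

def top_k_unifications(query_emb, fact_embs, k=2):
    """Instead of scoring the relaxed unification of a query against every fact
    in the KB, keep only the k facts whose embeddings are closest to the query
    (a k-nearest-neighbour problem) and expand proof paths only for those."""
    dists = np.linalg.norm(fact_embs - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]
    scores = np.exp(-dists[nearest] ** 2 / 2.0)   # one common similarity choice
    return list(zip(nearest.tolist(), scores.tolist()))

# Toy KB of 5 fact embeddings and one query embedding.
rng = np.random.default_rng(0)
facts = rng.normal(size=(5, 8))
query = facts[3] + 0.05 * rng.normal(size=8)      # query nearly matches fact 3
print(top_k_unifications(query, facts, k=2))      # fact 3 ranked first
```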
474
What external sources are used?
Raw data from Gigaword, Automatically segmented text from Gigaword, Heterogeneous training data from People's Daily, POS data from People's Daily
Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation and POS. We investigate the effectiveness of a range of external training sources for neural word segmentation by building a modular segmentation model, pretraining the most important submodule using rich external sources. Results show that such pretraining significantly improves the model, leading to accuracies competitive to the best methods on six benchmarks.
There has been a recent shift of research attention in the word segmentation literature from statistical methods to deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Neural network models have been exploited due to their strength in non-sparse representation learning and non-linear power in feature combination, which have led to advances in many NLP tasks. So far, neural word segmentors have given comparable accuracies to the best statistical models. With respect to non-sparse representation, character embeddings have been exploited as a foundation of neural word segmentors. They serve to reduce the sparsity of character ngrams, allowing, for example, “猫(cat) 躺(lie) 在(in) 墙角(corner)” to be connected with “狗(dog) 蹲(sit) 在(in) 墙角(corner)” BIBREF0 , which is infeasible by using sparse one-hot character features. In addition to character embeddings, distributed representations of character bigrams BIBREF6 , BIBREF1 and words BIBREF2 , BIBREF5 have also been shown to improve segmentation accuracies. With respect to non-linear modeling power, various network structures have been exploited to represent contexts for segmentation disambiguation, including multi-layer perceptrons on five-character windows BIBREF0 , BIBREF6 , BIBREF1 , BIBREF7 , as well as LSTMs on characters BIBREF3 , BIBREF8 and words BIBREF2 , BIBREF4 , BIBREF5 . For structured learning and inference, CRF has been used for character sequence labelling models BIBREF1 , BIBREF3 and structural beam search has been used for word-based segmentors BIBREF4 , BIBREF5 . Previous research has shown that segmentation accuracies can be improved by pretraining character and word embeddings over large Chinese texts, which is consistent with findings on other NLP tasks, such as parsing BIBREF9 . Pretraining can be regarded as one way of leveraging external resources to improve accuracies, which is practically highly useful and has become a standard practice in neural NLP. On the other hand, statistical segmentation research has exploited raw texts for semi-supervised learning, by collecting clues from raw texts more thoroughly such as mutual information and punctuation BIBREF10 , BIBREF11 , and making use of self-predictions BIBREF12 , BIBREF13 . It has also utilised heterogeneous annotations such as POS BIBREF14 , BIBREF15 and segmentation under different standards BIBREF16 . To our knowledge, such rich external information has not been systematically investigated for neural segmentation. We fill this gap by investigating rich external pretraining for neural segmentation. Following BIBREF4 and BIBREF5 , we adopt a globally optimised beam-search framework for neural structured prediction BIBREF9 , BIBREF17 , BIBREF18 , which allows word information to be modelled explicitly. Different from previous work, we make our model conceptually simple and modular, so that the most important submodule, namely a five-character window context, can be pretrained using external data. We adopt a multi-task learning strategy BIBREF19 , casting each external source of information as an auxiliary classification task, sharing a five-character window network. After pretraining, the character window network is used to initialize the corresponding module in our segmentor. Results on 6 different benchmarks show that our method outperforms the best statistical and neural segmentation models consistently, giving the best reported results on 5 datasets in different domains and genres. Our implementation is based on LibN3L BIBREF20 .
Code and models can be downloaded from http://gitHub.com/jiesutd/RichWordSegmentor
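The multi-task pretraining described above, one shared five-character window network with a separate classification head per external source, can be sketched as follows in PyTorch. The layer sizes, the task names and their label-set sizes are illustrative assumptions, not the paper's LibN3L implementation.

```python
import torch
import torch.nn as nn

class CharWindowEncoder(nn.Module):
    """Shared encoder over a five-character window, pretrained with one auxiliary
    classification head per external source (e.g. punctuation, automatically
    segmented text, heterogeneous segmentation, POS)."""
    def __init__(self, vocab_size, emb_dim=50, hidden=100, task_sizes=None):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.ff = nn.Sequential(nn.Linear(5 * emb_dim, hidden), nn.Tanh())
        task_sizes = task_sizes or {"punct": 2, "auto_seg": 4, "pos": 30}
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, n) for t, n in task_sizes.items()})

    def forward(self, window_ids, task):
        # window_ids: (batch, 5) character indices around the current position.
        h = self.ff(self.embed(window_ids).flatten(1))
        return self.heads[task](h)

enc = CharWindowEncoder(vocab_size=6000)
logits = enc(torch.randint(0, 6000, (8, 5)), task="pos")
print(logits.shape)  # torch.Size([8, 30])
# After pretraining on the external sources, the shared encoder (enc.embed and
# enc.ff) would initialize the corresponding window module in the segmentor.
```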
475
How much better peformance is achieved in human evaluation when model is trained considering proposed metric?
Pearson correlation to human judgement - proposed vs next best metric
Sample level comparison:
- Story generation: 0.387 vs 0.148
- Dialogue: 0.472 vs 0.341
Model level comparison:
- Story generation: 0.631 vs 0.302
- Dialogue: 0.783 vs 0.553
Automated evaluation of open domain natural language generation (NLG) models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models with sample-level comparison results and a skill rating system. While able to be trained in a fully self-supervised fashion, our model can be further fine-tuned with a small amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early-stopping. We evaluate our approach on both story generation and chit-chat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model.
Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation. Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper. To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides. 
The contribution of this paper is threefold: We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set. We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches. We conduct experiments on both story generation task and open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal model and our approach helps alleviate this problem.
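The model-level evaluation described above turns sample-level comparison outcomes into a skill rating, in the spirit of Elo or TrueSkill. Below is a minimal Elo-style sketch of that idea; the K-factor, the starting ratings and the toy match record are assumptions for illustration, and in the paper's setup the win/loss outcome would come from the learned BERT comparison model rather than being supplied by hand.

```python
def elo_update(r_a, r_b, score_a, k=16.0):
    """One Elo-style skill-rating update after a pairwise comparison.
    score_a is 1.0 if model A's sample was judged better, 0.0 if worse, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Toy tournament: two NLG models start at 1000 and are updated match by match.
ratings = {"model_A": 1000.0, "model_B": 1000.0}
matches = [("model_A", "model_B", 1.0),
           ("model_A", "model_B", 1.0),
           ("model_A", "model_B", 0.0)]
for a, b, outcome in matches:
    ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)
print(ratings)  # model_A ends with the higher skill rating
```

Because each new comparison updates the ratings incrementally, a full round-robin of n^2 matches is not needed to rank n models.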
478
How much transcribed data is available for for Ainu language?
Transcribed data is available for a duration of 38h 54m 38s for 8 speakers
Ainu is an unwritten language that has been spoken by the Ainu people, who are one of the ethnic groups in Japan. It is recognized as critically endangered by UNESCO, and archiving and documentation of its language heritage is of paramount importance. Although a considerable amount of voice recordings of Ainu folklore has been produced and accumulated to save their culture, only quite limited parts of them have been transcribed so far. Thus, we started a project of automatic speech recognition (ASR) for the Ainu language in order to contribute to the development of annotated language archives. In this paper, we report speech corpus development and the structure and performance of end-to-end ASR for Ainu. We investigated four modeling units (phone, syllable, word piece, and word) and found that the syllable-based model performed best in terms of both word and phone recognition accuracy, which were about 60% and over 85% respectively in the speaker-open condition. Furthermore, word and phone accuracies of 80% and 90% have been achieved in a speaker-closed setting. We also found that multilingual ASR training with additional speech corpora of English and Japanese further improves the speaker-open test accuracy.
Automatic speech recognition (ASR) technology has made dramatic progress and has been brought to a practical level of performance, assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages which do not have large corpora like English and Japanese do. There are about 5,000 languages in the world, over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue. The Ainu are an indigenous people of northern Japan and Sakhalin in Russia, but their language has been fading away ever since the Meiji Restoration and modernization. On the other hand, active efforts to preserve their culture have been initiated by the Government of Japan, and exceptionally large amounts of oral recordings have been made. Nevertheless, a majority of the recordings have not been transcribed and utilized effectively. Since transcribing them requires expertise in the Ainu language, not so many people are able to work on this task. Hence, there is a strong demand for an ASR system for the Ainu language. We started a project of Ainu ASR and this article is the first report of this project. We have built an Ainu speech corpus based on data provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The oral recordings in this data consist of folklore and folk songs, and we chose the former to construct the ASR model. The end-to-end method of speech recognition has been proposed recently and has achieved performance comparable to that of the conventional DNN-HMM hybrid modeling BIBREF0, BIBREF1, BIBREF2. End-to-end systems do not have a complex hierarchical structure and do not require expertise in target languages such as their phonology and morphology. In this study we adopt the attention mechanism BIBREF3, BIBREF4 and combine it with Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6. In this work, we investigate the modeling unit and utilization of corpora of other languages.
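The combination of an attention decoder with CTC mentioned above is commonly realized by interpolating the two training losses over a shared encoder. The PyTorch sketch below shows only that interpolation step, with random tensors standing in for the model outputs; the tensor shapes, the output-symbol count and the weight lam are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

# T encoder frames, batch N, C output symbols (e.g. syllables), target length U.
T, N, C, U = 50, 2, 40, 12
lam = 0.5                                    # interpolation weight between the two losses

ctc_log_probs = torch.randn(T, N, C).log_softmax(dim=-1)   # stand-in for the CTC head
att_logits = torch.randn(N, U, C)                          # stand-in for the attention decoder
targets = torch.randint(1, C, (N, U))                      # index 0 reserved as the CTC blank

ctc_loss = nn.CTCLoss(blank=0)(
    ctc_log_probs, targets,
    torch.full((N,), T, dtype=torch.long),   # input (frame) lengths
    torch.full((N,), U, dtype=torch.long),   # target lengths
)
att_loss = nn.CrossEntropyLoss()(att_logits.reshape(-1, C), targets.reshape(-1))
loss = lam * ctc_loss + (1.0 - lam) * att_loss   # joint CTC/attention objective
print(float(loss))
```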
479
What baseline approaches do they compare against?
HotpotQA: Yang, Ding, Muppet; FEVER: Hanselowski, Yoneda, Nie
Machine Reading at Scale (MRS) is a challenging task in which a system is given an input query and is asked to produce a precise output by "reading" information from a large knowledge base. The task has gained popularity with its natural combination of information retrieval (IR) and machine comprehension (MC). Advancements in representation learning have led to separated progress in both IR and MC; however, very few studies have examined the relationship and combined design of retrieval and comprehension at different levels of granularity, for development of MRS systems. In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration on hierarchical semantic retrieval at both paragraph and sentence level, and their potential effects on the downstream task. The system is evaluated on both fact verification and open-domain multihop QA, achieving state-of-the-art results on the leaderboard test sets of both FEVER and HOTPOTQA. To further demonstrate the importance of semantic retrieval, we present ablation and analysis studies to quantify the contribution of neural retrieval modules at both paragraph-level and sentence-level, and illustrate that intermediate semantic retrieval modules are vital for not only effectively filtering upstream information and thus saving downstream computation, but also for shaping upstream data distribution and providing better data for downstream modeling. Code/data made publicly available at: this https URL
Extracting external textual knowledge for machine comprehensive systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely restored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by Chen et al. (2017), the task was termed Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task. Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements on representation learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task and with much less consideration on the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper-bound of the downstream score can be improved, rather than finding more exact information. This convention is misaligned with the nature of MRS where equal effort should be put in emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks. Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for both paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidences are provided. Our system achieves state-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems). We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping high upper bound for downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification.
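To make the shape of the hierarchical pipeline above concrete, here is a loose Python sketch of its four stages: term-based retrieval, neural paragraph-level retrieval, neural sentence-level retrieval, and a downstream reader. The word-overlap scorers, thresholds and toy knowledge source are stand-ins for the neural modules and are assumptions for illustration only.

```python
def term_retrieval(query, wiki_index, top_n=50):
    """Cheap keyword match over the knowledge source (stand-in scorer)."""
    return sorted(wiki_index, key=lambda p: len(set(query.split()) & set(p.split())),
                  reverse=True)[:top_n]

def neural_filter(query, candidates, scorer, threshold=0.3, top_n=5):
    """Keep only candidates whose relevance score passes a threshold."""
    scored = [(c, scorer(query, c)) for c in candidates]
    return [c for c, s in sorted(scored, key=lambda x: -x[1]) if s >= threshold][:top_n]

def answer(query, wiki_index, para_scorer, sent_scorer, reader):
    paragraphs = term_retrieval(query, wiki_index)               # term-based retrieval
    paragraphs = neural_filter(query, paragraphs, para_scorer)   # paragraph-level semantic retrieval
    sentences = [s for p in paragraphs for s in p.split(". ")]
    evidence = neural_filter(query, sentences, sent_scorer)      # sentence-level semantic retrieval
    return reader(query, evidence)                               # downstream QA / verification module

# Toy run with trivial word-overlap stand-ins for the neural modules.
overlap = lambda q, t: len(set(q.lower().split()) & set(t.lower().split())) / (len(q.split()) or 1)
wiki = ["The Eiffel Tower is in Paris. It was completed in 1889.",
        "Mount Everest is the highest mountain on Earth."]
print(answer("when was the Eiffel Tower completed", wiki, overlap, overlap,
             lambda q, ev: ev))
```

The key design point reflected here is that each retrieval stage both narrows the candidate set for the next stage and determines the data distribution the downstream module sees.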
481
How many domains did they experiment with?
2
We introduce a modular system that can be deployed on any Kubernetes cluster for question answering via REST API. This system, called Katecheo, includes four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension. We demonstrate the system using publicly available, pre-trained models and knowledge base articles extracted from Stack Exchange sites. However, users can extend the system to any number of topics, or domains, without the need to modify any of the model serving code. All components of the system are open source and available under a permissive Apache 2 License.
When people interact with chatbots, smart speakers or digital assistants (e.g., Siri), one of their primary modes of interaction is information retrieval BIBREF0 . Thus, those that build dialog systems often have to tackle the problem of question answering. Developers could support question answering using publicly available chatbot platforms, such as Watson Assistant or DialogFlow. To do this, a user would need to program an intent for each anticipated question with various examples of the question and one or more curated responses. This approach has the advantage of generating high quality answers, but it is limited to those questions anticipated by developers. Moreover, the management burden of such a system might be prohibitive as the number of questions that needs to be supported is likely to increase over time. To overcome the burden of programming intents, developers might look towards more advanced question answering systems that are built using open domain question and answer data (e.g., from Stack Exchange or Wikipedia), reading comprehension models, and knowledge base searches. In particular, BIBREF1 previously demonstrated a two step system, called DrQA, that matches an input question to a relevant article from a knowledge base and then uses a recurrent neural network (RNN) based comprehension model to detect an answer within the matched article. This more flexible method was shown to produce promising results for questions related to Wikipedia articles and it performed competitively on the SQuAD benchmark BIBREF2 . However, if developers wanted to integrate this sort of reading comprehension based methodology into their applications, how would they currently go about this? They would need to wrap pre-trained models in their own custom code and compile similar knowledge base articles at the very least. At the most, they may need to re-train reading comprehension models on open domain question and answer data (e.g., SQuAD) and/or implement their own knowledge base search algorithms. In this paper we present Katecheo, a portable and modular system for reading comprehension based question answering that attempts to ease this development burden. The system provides a quickly deployable and easily extendable way for developers to integrate question answering functionality into their applications. Katecheo includes four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension. The modules are tied together in a single inference graph that can be invoked via a REST API call. We demonstrate the system using publicly available, pre-trained models and knowledge base articles extracted from Stack Exchange sites. However, users can extend the system to any number of topics, or domains, without the need to modify the model serving code. All components of the system are open source and publicly available under a permissive Apache 2 License. The rest of the paper is organized as follows. In the next section, we provide an overview of the system logic and its modules. In Section 3, we outline the architecture and configuration of Katecheo, including extending the system to an arbitrary number of topics. In Section 4, we report some results using example pre-trained models and public knowledge base articles. Then in conclusion, we summarize the system, its applicability, and future development work.
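A minimal sketch of the four-module inference graph described above is given below. The callables and their signatures are placeholders for Katecheo's configurable modules, not its actual API; the return convention is likewise an assumption.

```python
# Sketch of a Katecheo-style inference graph: question identification,
# topic classification, knowledge base search, then reading comprehension.
from typing import Callable, Dict, List, Optional


def answer_request(
    text: str,
    is_question: Callable[[str], bool],
    classify_topic: Callable[[str], Optional[str]],
    kb_search: Callable[[str, str], List[str]],    # (topic, question) -> articles
    read_comprehend: Callable[[str, str], str],    # (question, article) -> answer span
) -> Dict[str, Optional[str]]:
    """Mimics a single REST call passing through the four-module pipeline."""
    if not is_question(text):
        return {"answer": None, "reason": "input not recognized as a question"}

    topic = classify_topic(text)
    if topic is None:
        return {"answer": None, "reason": "no configured topic matched"}

    articles = kb_search(topic, text)
    if not articles:
        return {"answer": None, "reason": "no knowledge base article found"}

    # Extract an answer span from the best-matching article.
    return {"answer": read_comprehend(text, articles[0]), "topic": topic}
```

Because each stage is an independent callable, extending the system to a new topic amounts to registering a new knowledge base and topic label rather than changing the serving logic, which mirrors the modularity claim above.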
482
What is a string kernel?
A string kernel is a technique that uses character n-grams to measure the similarity of strings.
For many text classification tasks, there is a major problem posed by the lack of labeled data in a target domain. Although classifiers for a target domain can be trained on labeled text data from a related source domain, the accuracy of such classifiers is usually lower in the cross-domain setting. Recently, string kernels have obtained state-of-the-art results in various text classification tasks such as native language identification or automatic essay scoring. Moreover, classifiers based on string kernels have been found to be robust to the distribution gap between different domains. In this paper, we formally describe an algorithm composed of two simple yet effective transductive learning approaches to further improve the results of string kernels in cross-domain settings. By adapting string kernels to the test set without using the ground-truth test labels, we report significantly better accuracy rates in cross-domain English polarity classification.
Domain shift is a fundamental problem in machine learning, that has attracted a lot of attention in the natural language processing and vision communities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . To understand and address this problem, generated by the lack of labeled data in a target domain, researchers have studied the behavior of machine learning methods in cross-domain settings BIBREF2 , BIBREF11 , BIBREF10 and came up with various domain adaptation techniques BIBREF12 , BIBREF5 , BIBREF6 , BIBREF9 . In cross-domain classification, a classifier is trained on data from a source domain and tested on data from a (different) target domain. The accuracy of machine learning methods is usually lower in the cross-domain setting, due to the distribution gap between different domains. However, researchers proposed several domain adaptation techniques by using the unlabeled test data to obtain better performance BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF7 . Interestingly, some recent works BIBREF10 , BIBREF17 indicate that string kernels can yield robust results in the cross-domain setting without any domain adaptation. In fact, methods based on string kernels have demonstrated impressive results in various text classification tasks ranging from native language identification BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 and authorship identification BIBREF22 to dialect identification BIBREF23 , BIBREF17 , BIBREF24 , sentiment analysis BIBREF10 , BIBREF25 and automatic essay scoring BIBREF26 . As long as a labeled training set is available, string kernels can reach state-of-the-art results in various languages including English BIBREF19 , BIBREF10 , BIBREF26 , Arabic BIBREF27 , BIBREF20 , BIBREF17 , BIBREF24 , Chinese BIBREF25 and Norwegian BIBREF20 . Different from all these recent approaches, we use unlabeled data from the test set in a transductive setting in order to significantly increase the performance of string kernels. In our recent work BIBREF28 , we proposed two transductive learning approaches combined into a unified framework that improves the results of string kernels in two different tasks. In this paper, we provide a formal and detailed description of our transductive algorithm and present results in cross-domain English polarity classification. The paper is organized as follows. Related work on cross-domain text classification and string kernels is presented in Section SECREF2 . Section SECREF3 presents our approach to obtain domain adapted string kernels. The transductive transfer learning method is described in Section SECREF4 . The polarity classification experiments are presented in Section SECREF5 . Finally, we draw conclusions and discuss future work in Section SECREF6 .
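As a concrete illustration of what a string kernel computes, the snippet below implements a basic blended spectrum kernel over character n-grams. The n-gram range (3 to 5) and the cosine-style normalization are illustrative choices, not necessarily the exact kernel used in the paper.

```python
# Minimal character n-gram (spectrum) string kernel: two texts are compared
# through the character n-grams they share, which is what makes the
# representation relatively robust across domains.
from collections import Counter
from math import sqrt


def ngram_counts(text: str, n_low: int = 3, n_high: int = 5) -> Counter:
    counts = Counter()
    for n in range(n_low, n_high + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    return counts


def string_kernel(a: str, b: str) -> float:
    """Normalized blended spectrum kernel value in [0, 1]."""
    ca, cb = ngram_counts(a), ngram_counts(b)
    dot = sum(ca[g] * cb[g] for g in ca if g in cb)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0


print(string_kernel("the movie was great", "this movie was so great"))
```

Because the kernel operates on raw characters rather than domain-specific word features, the same kernel matrix machinery can be adapted to the test set in the transductive setting the paper describes.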
483
How do they correlate NED with emotional bond levels?
They compute Pearson's correlation between the patient-to-therapist NED measure and the patient-perceived emotional bond rating, and between the therapist-to-patient NED measure and the same emotional bond rating.
Entrainment is a known adaptation mechanism that causes interaction participants to adapt or synchronize their acoustic characteristics. Understanding how interlocutors tend to adapt to each other's speaking style through entrainment involves measuring a range of acoustic features and comparing them via multiple signal comparison methods. In this work, we present a turn-level distance measure obtained in an unsupervised manner using a Deep Neural Network (DNN) model, which we call Neural Entrainment Distance (NED). This metric establishes a framework that learns an embedding from the population-wide entrainment in an unlabeled training corpus. We use the framework for a set of acoustic features and validate the measure experimentally by showing its efficacy in distinguishing real conversations from fake ones created by randomly shuffling speaker turns. Moreover, we show real-world evidence of the validity of the proposed measure. We find that a high value of NED is associated with high ratings of emotional bond in suicide assessment interviews, which is consistent with prior studies.
Vocal entrainment is an established social adaptation mechanism. It can be loosely defined as one speaker's spontaneous adaptation to the speaking style of the other speaker. Entrainment is a fairly complex multifaceted process and closely associated with many other mechanisms such as coordination, synchrony, convergence etc. While there are various aspects and levels of entrainment BIBREF0 , there is also a general agreement that entrainment is a sign of positive behavior towards the other speaker BIBREF1 , BIBREF2 , BIBREF3 . High degree of vocal entrainment has been associated with various interpersonal behavioral attributes, such as high empathy BIBREF4 , more agreement and less blame towards the partner and positive outcomes in couple therapy BIBREF5 , and high emotional bond BIBREF6 . A good understanding of entrainment provides insights to various interpersonal behaviors and facilitates the recognition and estimation of these behaviors in the realm of Behavioral Signal Processing BIBREF7 , BIBREF8 . Moreover, it also contributes to the modeling and development of `human-like' spoken dialog systems or conversational agents. Unfortunately, quantifying entrainment has always been a challenging problem. There is a scarcity of reliable labeled speech databases on entrainment, possibly due to the subjective and diverse nature of its definition. This makes it difficult to capture entrainment using supervised models, unlike many other behaviors. Early studies on entrainment relied on highly subjective and context-dependent manual observation coding for measuring entrainment. The objective methods based on extracted speech features employed classical synchrony measures such as Pearson's correlation BIBREF0 and traditional (linear) time series analysis techniques BIBREF9 . Lee et al. BIBREF10 , BIBREF4 proposed a measure based on PCA representation of prosody and MFCC features of consecutive turns. Most of the these approaches assume a linear relationship between features of consecutive speaker turns which is not necessarily true, given the complex nature of entrainment. For example, the effect of rising pitch or energy can potentially have a nonlinear influence across speakers. Recently, various complexity measures (such as largest Lyapunov exponent) of feature streams based on nonlinear dynamical systems modeling showed promising results in capturing entrainment BIBREF5 , BIBREF6 . A limitation of this modeling, however, is the assumption of the short-term stationary or slowly varying nature of the features. While this can be reasonable for global or session-level complexity, the measure is not very meaningful capturing turn-level or local entrainment. Nonlinear dynamical measures also suffer from scalability to a multidimensional feature set, including spectral coefficients such as MFCCs. Further, all of the above metrics are knowledge-driven and do not exploit the vast amount of information that can be gained from existing interactions. A more holistic approach is to capture entrainment in consecutive speaker turns through a more robust nonlinear function. Conceptually speaking, such a formulation of entrainment is closely related to the problem of learning a transfer function which maps vocal patterns of one speaker turn to the next. A compelling choice to nonlinearly approximate the transfer function would be to employ Deep Neural Networks (DNNs). 
This is supported by recent promising applications of deep learning models, both in supervised and unsupervised paradigm, in modeling and classification of emotions and behaviors from speech. For example in BIBREF11 the authors learned, in an unsupervised manner, a latent embedding towards identifying behavior in out-of-domain tasks. Similarly in BIBREF12 , BIBREF13 the authors employ Neural Predictive Coding to derive embeddings that link to speaker characteristics in an unsupervised manner. We propose an unsupervised training framework to contextually learn the transfer function that ties the two speakers. The learned bottleneck embedding contains cross-speaker information closely related to entrainment. We define a distance measure between the consecutive speaker turns represented in the bottleneck feature embedding space. We call this metric the Neural Entrainment Distance (NED). Towards this modeling approach we use features that have already been established as useful for entrainment. The majority of research BIBREF0 , BIBREF14 , BIBREF10 , BIBREF5 , BIBREF6 focused on prosodic features like pitch, energy, and speech rate. Others also analyzed entrainment in spectral and voice quality features BIBREF10 , BIBREF4 . Unlike classical nonlinear measures, we jointly learn from a multidimensional feature set comprising of prosodic, spectral, and voice quality features. We then experimentally investigate the validity and effectiveness of the NED measure in association with interpersonal behavior.
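To show how a turn-level distance of this kind would be computed once an encoder exists, here is a deliberately simplified sketch. In the paper the bottleneck embedding is learned from population-wide entrainment data; here a fixed random projection stands in for that learned encoder, and the feature dimensions and distance choice are assumptions.

```python
# Toy illustration of a turn-level entrainment distance: consecutive speaker
# turns are mapped into an embedding space by a (here: fake) encoder, and the
# distance between their embeddings is taken as the NED-style score.
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, EMB_DIM = 40, 32
PROJECTION = rng.standard_normal((FEAT_DIM, EMB_DIM))   # stand-in for learned weights


def encode_turn(features: np.ndarray) -> np.ndarray:
    """Stand-in for the learned bottleneck encoder (frames x FEAT_DIM -> EMB_DIM)."""
    return features.mean(axis=0) @ PROJECTION


def neural_entrainment_distance(turn_a: np.ndarray, turn_b: np.ndarray) -> float:
    """Distance between embeddings of consecutive turns (smaller = more entrained here)."""
    ea, eb = encode_turn(turn_a), encode_turn(turn_b)
    return float(np.linalg.norm(ea - eb))


# Two consecutive turns, each a matrix of frame-level prosodic/spectral features.
turn_1 = rng.standard_normal((120, FEAT_DIM))
turn_2 = rng.standard_normal((95, FEAT_DIM))
print(neural_entrainment_distance(turn_1, turn_2))
```

The point of the sketch is only the interface: a multidimensional, variable-length feature set per turn goes in, a single turn-level distance comes out, which is what enables the conversation-level and correlation analyses described above.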
484
What was their F1 score on the Bengali NER corpus?
52.0%
Supervised machine learning assumes the availability of fully-labeled data, but in many cases, such as low-resource languages, the only data available is partially annotated. We study the problem of Named Entity Recognition (NER) with partially annotated training data in which a fraction of the named entities are labeled, and all other tokens, entities or otherwise, are labeled as non-entity by default. In order to train on this noisy dataset, we need to distinguish between the true and false negatives. To this end, we introduce a constraint-driven iterative algorithm that learns to detect false negatives in the noisy set and downweigh them, resulting in a weighted training set. With this set, we train a weighted NER model. We evaluate our algorithm with weighted variants of neural and non-neural NER models on data in 8 languages from several language and script families, showing strong ability to learn from partial data. Finally, to show real-world efficacy, we evaluate on a Bengali NER corpus annotated by non-speakers, outperforming the prior state-of-the-art by over 5 points F1.
Most modern approaches to NLP tasks rely on supervised learning algorithms to learn and generalize from labeled training data. While this has proven successful in high-resource scenarios, this is not realistic in many cases, such as low-resource languages, as the required amount of training data just doesn't exist. However, partial annotations are often easy to gather. We study the problem of using partial annotations to train a Named Entity Recognition (NER) system. In this setting, all (or most) identified entities are correct, but not all entities have been identified, and crucially, there are no reliable examples of the negative class. The sentence shown in Figure FIGREF2 shows examples of both a gold and a partially annotated sentence. Such partially annotated data is relatively easy to obtain: for example, a human annotator who does not speak the target language may recognize common entities, but not uncommon ones. With no reliable examples of the negative class, the problem becomes one of estimating which unlabeled instances are true negatives and which are false negatives. To address the above-mentioned challenge, we present Constrained Binary Learning (CBL) – a novel self-training based algorithm that focuses on iteratively identifying true negatives for the NER task while improving its learning. Towards this end, CBL uses constraints that incorporate background knowledge required for the entity recognition task. We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods.
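The iterative downweighting idea can be sketched as follows. The classifier, the constraint function, the threshold, the number of iterations, and the downweight value are all placeholders; the actual Constrained Binary Learning algorithm uses the constraints and stopping criteria described in the paper.

```python
# Sketch of an iterative constrained-binary-learning style loop: tokens labeled
# non-entity start with full weight; tokens the current model (plus a simple
# constraint) believes are entities get downweighted as likely false negatives,
# and a weighted model is retrained on the result.
from typing import Callable, List
import numpy as np


def constrained_binary_learning(
    tokens: List[str],
    is_labeled_entity: np.ndarray,                 # bool per token (partial labels)
    train_weighted: Callable[[List[str], np.ndarray, np.ndarray], Callable[[str], float]],
    entity_constraint: Callable[[str], bool],      # background knowledge, e.g. capitalization
    iterations: int = 5,
    downweight: float = 0.1,
) -> np.ndarray:
    """Return per-token weights for the final weighted NER training."""
    labels = is_labeled_entity.astype(int)         # 1 = labeled entity, 0 = default non-entity
    weights = np.ones(len(tokens))
    for _ in range(iterations):
        predict = train_weighted(tokens, labels, weights)   # returns P(entity | token)
        for i, tok in enumerate(tokens):
            if labels[i] == 0 and predict(tok) > 0.5 and entity_constraint(tok):
                weights[i] = downweight            # probable false negative, trust it less
    return weights
```

The resulting weight vector is then what a weighted NER model (neural or not) consumes, which is why the approach plugs into both model families mentioned in the abstract.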
488
What is the size of the dataset?
300,000 sentences with 1.5 million single-quiz questions
In this paper we formalize the problem of automatic fill-in-the-blank question generation using two standard NLP machine learning schemes, proposing concrete deep learning models for each. We present an empirical study based on data obtained from a language learning platform, showing that both of our proposed settings offer promising results.
With the advent of Web 2.0, regular users were able to share, remix and distribute content very easily. As a result of this process, the Web became a rich interconnected set of heterogeneous data sources. Being in a standard format, it is suitable for many tasks involving knowledge extraction and representation. For example, efforts have been made to design games with the purpose of semi-automating a wide range of knowledge transfer tasks, such as educational quizzes, by leveraging this kind of data. In particular, quizzes based on multiple-choice questions (MCQs) have proved efficient for judging students' knowledge. However, the manual construction of such questions is often a time-consuming and labor-intensive task. Fill-in-the-blank questions, where a sentence is given with one or more blanks in it, either with or without alternatives to fill in those blanks, have gained research attention recently. In this kind of question, as opposed to MCQs, there is no need to generate a WH-style question derived from text. This means that the target sentence can simply be picked from a document on a corresponding topic of interest, which makes the process easier to automate. Fill-in-the-blank questions in their multiple-choice answer version, often referred to as cloze questions (CQs), are commonly used for evaluating the proficiency of language learners, including official tests such as TOEIC and TOEFL BIBREF0. They have also been used to test students' knowledge of English in using the correct verbs BIBREF1, prepositions BIBREF2 and adjectives BIBREF3. BIBREF4 and BIBREF5 generated questions to evaluate students' vocabulary. The main problem in CQ generation is that it is generally not easy to come up with appropriate distractors (incorrect options) without rich experience. Existing approaches are mostly based on domain-specific templates, whose elaboration relies on experts. Lately, approaches based on discriminative methods, which rely on annotated training data, have also appeared. Ultimately, these settings prevent end-users from participating in the elaboration process, limiting the diversity and variation of quizzes that the system may offer. In this work we formalize the problem of automatic fill-in-the-blank question generation and present an empirical study using deep learning models for it in the context of language learning. Our study is based on data obtained from our language learning platform BIBREF6, BIBREF7, BIBREF8, where users can create their own quizzes by utilizing freely available and open-licensed video content on the Web. In the platform, automatic quiz creation currently relies on hand-crafted features and rules, making the process difficult to adapt. Our goal is to effectively provide an adaptive learning experience in terms of style and difficulty, and thus better serve users' needs BIBREF9. In this context, we study the ability of our proposed architectures to learn to generate quizzes from data derived from the interaction of users with the platform.
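One simple way to turn raw sentences into fill-in-the-blank instances, shown below only to make the task formulation concrete, is to blank out candidate tokens and keep each removed token as the target. The minimum-length heuristic and the blank symbol are assumptions for illustration; a real system would select blanks with model-driven or pedagogical criteria.

```python
# Derive fill-in-the-blank training instances from a raw sentence by blanking
# out a chosen token and keeping it as the target answer.
from typing import List, Tuple


def make_cloze_instances(sentence: str, min_len: int = 4) -> List[Tuple[str, str]]:
    tokens = sentence.split()
    instances = []
    for i, tok in enumerate(tokens):
        if len(tok) >= min_len:                     # naive candidate selection
            blanked = " ".join(tokens[:i] + ["____"] + tokens[i + 1:])
            instances.append((blanked, tok))
    return instances


for question, answer in make_cloze_instances("She has lived in Tokyo since 2010"):
    print(question, "->", answer)
```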
489
How many examples do they have in the target domain?
Around 388k examples, 194k from tst2013 (in-domain) and 194k from newstest2014 (out-of-domain)
Neural Machine Translation (NMT) is a new approach for automatically translating text from one human language into another. The basic concept in NMT is to train a large neural network that maximizes the translation performance on a given parallel corpus. NMT is gaining popularity in the research community because it outperformed traditional SMT approaches in several translation tasks at WMT and other evaluation tasks/benchmarks, at least for some language pairs. However, many of the enhancements made in SMT over the years have not been incorporated into the NMT framework. In this paper, we focus on one such enhancement, namely domain adaptation. We propose an approach for adapting an NMT system to a new domain. The main idea behind domain adaptation is the availability of large out-of-domain training data and a small amount of in-domain training data. We report significant gains with our proposed method in both automatic metrics and a human subjective evaluation metric on two language pairs. With our adaptation method, we show a large improvement on the new domain while the performance on the general domain degrades only slightly. In addition, our approach is fast enough to adapt an already trained system to a new domain within a few hours without the need to retrain the NMT model on the combined data, which usually takes several days or weeks depending on the volume of the data.
Due to the fact that Neural Machine Translation (NMT) is reaching comparable or even better performance than traditional statistical machine translation (SMT) models BIBREF0, BIBREF1, it has become very popular in recent years BIBREF2, BIBREF3, BIBREF4. With the great success of NMT, new challenges arise which have already been addressed with reasonable success in traditional SMT. One of these challenges is domain adaptation. In a typical domain adaptation setup such as ours, we have a large amount of out-of-domain bilingual training data for which we already have a trained neural network model (baseline). Given only an additional small amount of in-domain data, the challenge is to improve the translation performance on the new domain without significantly deteriorating the performance on the general domain. One approach one might take is to combine the in-domain data with the out-of-domain data and train the NMT model from scratch. However, there are two main problems with that approach. First, training a neural machine translation system on large data sets can take several weeks, and training a new model on the combined training data is time consuming. Second, since the in-domain data is relatively small, the out-of-domain data will tend to dominate the training data and hence the learned model will not perform as well on the in-domain test data. In this paper, we reuse the already trained out-of-domain system and continue training only on the small portion of in-domain data, similar to BIBREF5. While doing this, we adapt the parameters of the neural network model to the new domain. Instead of relying completely on the adapted (further-trained) model and overfitting on the in-domain data, we decode using an ensemble of the baseline model and the adapted model, which tends to perform well on the in-domain data without deteriorating the performance on the general domain.
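The ensemble decoding step mentioned above can be illustrated with the toy sketch below: at each decoding step the next-token distributions of the baseline and the adapted model are averaged. The equal 0.5/0.5 weighting, the callable interface, and the use of greedy search are assumptions made for brevity, not the paper's exact setup.

```python
# Toy ensemble decoding with a baseline and an adapted NMT model: averaging the
# two next-token distributions lets the adapted model help on the new domain
# without fully overriding the general-domain baseline.
import numpy as np


def ensemble_greedy_decode(step_probs_baseline, step_probs_adapted, eos_id: int, max_len: int = 50):
    """step_probs_*: callables mapping a partial hypothesis (list of token ids)
    to a probability distribution over the target vocabulary."""
    hyp = []
    for _ in range(max_len):
        p = 0.5 * step_probs_baseline(hyp) + 0.5 * step_probs_adapted(hyp)
        next_id = int(np.argmax(p))
        hyp.append(next_id)
        if next_id == eos_id:
            break
    return hyp
```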
493
What is the baseline model?
An RNN-based seq2seq VC model called ATTS2S, based on the Tacotron model.
We introduce a novel sequence-to-sequence (seq2seq) voice conversion (VC) model based on the Transformer architecture with text-to-speech (TTS) pretraining. Seq2seq VC models are attractive owing to their ability to convert prosody. While seq2seq models based on recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been successfully applied to VC, the use of the Transformer network, which has shown promising results in various speech processing tasks, has not yet been investigated. Nonetheless, their data-hungry property and the mispronunciation of converted speech make seq2seq models far from practical. To this end, we propose a simple yet effective pretraining technique to transfer knowledge from learned TTS models, which benefit from large-scale, easily accessible TTS corpora. VC models initialized with such pretrained model parameters are able to generate effective hidden representations for high-fidelity, highly intelligible converted speech. Experimental results show that such a pretraining scheme can facilitate data-efficient training and outperform an RNN-based seq2seq VC model in terms of intelligibility, naturalness, and similarity.
Voice conversion (VC) aims to convert the speech from a source to that of a target without changing the linguistic content BIBREF0. Conventional VC systems follow an analysis—conversion —synthesis paradigm BIBREF1. First, a high quality vocoder such as WORLD BIBREF2 or STRAIGHT BIBREF3 is utilized to extract different acoustic features, such as spectral features and fundamental frequency (F0). These features are converted separately, and a waveform synthesizer finally generates the converted waveform using the converted features. Past VC studies have focused on the conversion of spectral features while only applying a simple linear transformation to F0. In addition, the conversion is usually performed frame-by-frame, i.e, the converted speech and the source speech are always of the same length. To summarize, the conversion of prosody, including F0 and duration, is overly simplified in the current VC literature. This is where sequence-to-sequence (seq2seq) models BIBREF4 can play a role. Modern seq2seq models, often equipped with an attention mechanism BIBREF5, BIBREF6 to implicitly learn the alignment between the source and output sequences, can generate outputs of various lengths. This ability makes the seq2seq model a natural choice to convert duration in VC. In addition, the F0 contour can also be converted by considering F0 explicitly (e.g, forming the input feature sequence by concatenating the spectral and F0 sequences) BIBREF7, BIBREF8, BIBREF9 or implicitly (e.g, using mel spectrograms as the input feature) BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Seq2seq VC can further be applied to accent conversion BIBREF13, where the conversion of prosody plays an important role. Existing seq2seq VC models are based on either recurrent neural networks (RNNs) BIBREF7, BIBREF8, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 or convolutional neural networks (CNNs) BIBREF9. In recent years, the Transformer architecture BIBREF16 has been shown to perform efficiently BIBREF17 in various speech processing tasks such as automatic speech recognition (ASR) BIBREF18, speech translation (ST) BIBREF19, BIBREF20, and text-to-speech (TTS) BIBREF21. On the basis of attention mechanism solely, the Transformer enables parallel training by avoiding the use of recurrent layers, and provides a receptive field that spans the entire input by using multi-head self-attention rather than convolutional layers. Nonetheless, the above-mentioned speech applications that have successfully utilized the Transformer architecture all attempted to find a mapping between text and acoustic feature sequences. VC, in contrast, attempts to map between acoustic frames, whose high time resolution introduces challenges regarding computational memory cost and accurate attention learning. Despite the promising results, seq2seq VC models suffer from two major problems. First, seq2seq models usually require a large amount of training data, although a large-scale parallel corpus, i.e, pairs of speech samples with identical linguistic contents uttered by both source and target speakers, is impractical to collect. Second, as pointed out in BIBREF11, the converted speech often suffers from mispronunciations and other instability problems such as phonemes and skipped phonemes. Several techniques have been proposed to address these issues. In BIBREF10 a pretrained ASR module was used to extract phonetic posteriorgrams (PPGs) as an extra clue, whereas PPGs were solely used as the input in BIBREF13. 
The use of context preservation loss and guided attention loss BIBREF22 to stabilize training has also been proposed BIBREF8, BIBREF9. Multitask learning and data augmentation were incorporated in BIBREF11 using additional text labels to improve data efficiency, and linguistic and speaker representations were disentangled in BIBREF12 to enable nonparallel training, thus removing the need for a parallel corpus. In BIBREF15 a large hand-transcribed corpus was used to generate artificial training data from a TTS model for a many-to-one (normalization) VC model, where multitask learning was also used. One popular means of dealing with the problem of limited training data is transfer leaning, where knowledge from massive, out-of-domain data is utilized to aid learning in the target domain. Recently, TTS systems, especially neural seq2seq models, have enjoyed great success owing to the vast large-scale corpus contributed by the community. We argue that lying at the core of these TTS models is the ability to generate effective intermediate representations, which facilitates correct attention learning that bridges the encoder and the decoder. Transfer learning from TTS has been successfully applied to tasks such as speaker adaptation BIBREF23, BIBREF24, BIBREF25, BIBREF26. In BIBREF27 the first attempt to apply this technique to VC was made by bootstrapping a nonparallel VC system from a pretrained speaker-adaptive TTS model. In this work, we propose a novel yet simple pretraining technique to transfer knowledge from learned TTS models. To transfer the core ability, i.e, the generation and utilization of fine representations, knowledge from both the encoder and the decoder is needed. Thus, we pretrain them in separate steps: first, the decoder is pretrained by using a large-scale TTS corpus to train a conventional TTS model. The TTS training ensures a well-trained decoder that can generate high-quality speech with the correct hidden representations. As the encoder must be pretrained to encode input speech into hidden representations that can be recognized by the decoder, we train the encoder in an autoencoder style with the pretrained decoder fixed. This is carried out using a simple reconstruction loss. We demonstrate that the VC model initialized with the above pretrained model parameters can generate high-quality, highly intelligible speech even with very limited training data. Our contributions in this work are as follows: We apply the Transformer network to VC. To our knowledge, this is the first work to investigate this combination. We propose a TTS pretraining technique for VC. The pretraining process provides a prior for fast, sample-efficient VC model learning, thus reducing the data size requirement and training time. In this work, we verify the effectiveness of this scheme by transferring knowledge from Transformer-based TTS models to a Transformer-based VC model.
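A minimal training-loop skeleton for the second pretraining step described above (training the speech encoder autoencoder-style against a frozen, TTS-pretrained decoder) is sketched below. The tiny linear modules, the L1 reconstruction loss, the optimizer settings, and the dummy batch are placeholders; the actual encoder and decoder are Transformer networks and the real decoder comes from TTS training on a large corpus.

```python
# Skeleton of the two-step pretraining idea: (1) a decoder is obtained from TTS
# training (not shown); (2) the speech encoder is trained in an autoencoder
# style against that frozen decoder with a simple reconstruction loss;
# (3) both then initialize the VC model for fine-tuning on the small parallel corpus.
import torch
import torch.nn as nn

feat_dim, hidden_dim = 80, 256
decoder = nn.Linear(hidden_dim, feat_dim)      # assume: already TTS-pretrained
encoder = nn.Linear(feat_dim, hidden_dim)      # to be pretrained here

for p in decoder.parameters():                  # step 2: keep the decoder fixed
    p.requires_grad = False

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
criterion = nn.L1Loss()

mel = torch.randn(16, feat_dim)                 # dummy batch of acoustic frames
for step in range(3):                           # a few reconstruction steps
    recon = decoder(encoder(mel))
    loss = criterion(recon, mel)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Step 3 (conceptually): copy encoder/decoder weights into the VC model and
# fine-tune on the limited parallel source-target data.
```

Freezing the decoder is the key design choice here: it forces the encoder to produce hidden representations the TTS-trained decoder already knows how to consume, which is the "effective intermediate representation" argument made above.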
495
Which datasets did they experiment on?
ConciergeQA and AmazonQA
Open Information Extraction (OpenIE) extracts meaningful structured tuples from free-form text. Most previous work on OpenIE considers extracting data from one sentence at a time. We describe NeurON, a system for extracting tuples from question-answer pairs. Since real questions and answers often contain precisely the information that users care about, such information is particularly desirable to extend a knowledge base with. NeurON addresses several challenges. First, an answer text is often hard to understand without knowing the question, and second, relevant information can span multiple sentences. To address these, NeurON formulates extraction as a multi-source sequence-to-sequence learning task, wherein it combines distributed representations of a question and an answer to generate knowledge facts. We describe experiments on two real-world datasets that demonstrate that NeurON can find a significant number of new and interesting facts to extend a knowledge base compared to state-of-the-art OpenIE methods.
496
How do slot binary classifiers improve performance?
by adding extra supervision to generate the slots that will be present in the response
This paper proposes a novel end-to-end architecture for task-oriented dialogue systems. It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot. The policy engine and language generation tasks are modeled jointly following that. The copy-augmented sequential decoder deals with new or unknown values in the conversation, while the multi-label decoder combined with the sequential decoder ensures the explicit assignment of values to slots. On the generation part, slot binary classifiers are used to improve performance. This architecture is scalable to real-world scenarios and is shown through an empirical evaluation to achieve state-of-the-art performance on both the Cambridge Restaurant dataset and the Stanford in-car assistant dataset (code available at https://github.com/uber-research/FSDM).
A traditional task-oriented dialogue system is often composed of a few modules, such as natural language understanding, dialogue state tracking, knowledge base (KB) query, dialogue policy engine and response generation. Language understanding aims to convert the input to some predefined semantic frame. State tracking is a critical component that models explicitly the input semantic frame and the dialogue history for producing KB queries. The semantic frame and the corresponding belief state are defined in terms of informable slots values and requestable slots. Informable slot values capture information provided by the user so far, e.g., {price=cheap, food=italian} indicating the user wants a cheap Italian restaurant at this stage. Requestable slots capture the information requested by the user, e.g., {address, phone} means the user wants to know the address and phone number of a restaurant. Dialogue policy model decides on the system action which is then realized by a language generation component. To mitigate the problems with such a classic modularized dialogue system, such as the error propagation between modules, the cascade effect that the updates of the modules have and the expensiveness of annotation, end-to-end training of dialogue systems was recently proposed BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . These systems train one whole model to read the current user's utterance, the past state (that may contain all previous interactions) and generate the current state and response. There are two main approaches for modeling the belief state in end-to-end task-oriented dialogue systems in the literature: the fully structured approach based on classification BIBREF7 , BIBREF9 , and the free-form approach based on text generation BIBREF10 . The fully structured approaches BIBREF11 , BIBREF12 use the full structure of the KB, both its schema and the values available in it, and assumes that the sets of informable slot values and requestable slots are fixed. In real-world scenarios, this assumption is too restrictive as the content of the KB may change and users' utterances may contain information outside the pre-defined sets. An ideal end-to-end architecture for state tracking should be able to identify the values of the informable slots and the requestable slots, easily adapt to new domains, to the changes in the content of the KB, and to the occurrence of words in users' utterances that are not present in the KB at training time, while at the same time providing the right amount of inductive bias to allow generalization. Recently, a free-form approach called TSCP (Two Stage Copy Net) BIBREF10 was proposed. This approach does not integrate any information about the KB in the model architecture. It has the advantage of being readily adaptable to new domains and changes in the content of the KB as well as solving the out-of-vocabulary word problem by generating or copying the relevant piece of text from the user's utterances in its response generation. However, TSCP can produce invalid states (see Section "Experiments" ). Furthermore, by putting all slots together into a sequence, it introduces an unwanted (artificial) order between different slots since they are encoded and decoded sequentially. It could be even worse if two slots have overlapping values, like departure and arrival airport in a travel booking system. 
As such, the unnecessary order of the slots makes getting rid of the invalid states a great challenge for the sequential decoder. As a summary, both approaches to state tracking have their weaknesses when applied to real-world applications. This paper proposes the Flexibly-Structured Dialogue Model (FSDM) as a new end-to-end task-oriented dialogue system. The state tracking component of FSDM has the advantages of both fully structured and free-form approaches while addressing their shortcomings. On one hand, it is still structured, as it incorporates information about slots in KB schema; on the other hand, it is flexible, as it does not use information about the values contained in the KB records. This makes it easily adaptable to new values. These desirable properties are achieved by a separate decoder for each informable slot and a multi-label classifier for the requestable slots. Those components explicitly assign values to slots like the fully structured approach, while also preserving the capability of dealing with out-of-vocabulary words like the free-form approach. By using these two types of decoders, FSDM produces only valid belief states, overcoming the limitations of the free-form approach. Further, FSDM has a new module called response slot binary classifier that adds extra supervision to generate the slots that will be present in the response more precisely before generating the final textual agent response (see Section "Methodology" for details). The main contributions of this work are
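To give a concrete picture of the classifier heads discussed above, the sketch below shows a multi-label classifier over requestable slots and per-slot binary classifiers predicting which slots should appear in the agent response. The slot inventories, the encoding dimension, and the plain sigmoid heads are assumptions; the dialogue encoder and the per-informable-slot copy-augmented decoders of FSDM are omitted.

```python
# Minimal sketch of the requestable-slot multi-label classifier and the
# response slot binary classifiers sitting on top of a dialogue encoding.
import torch
import torch.nn as nn

REQUESTABLE = ["address", "phone", "postcode"]
RESPONSE_SLOTS = ["name", "food", "price", "address", "phone"]


class SlotHeads(nn.Module):
    def __init__(self, enc_dim: int = 128):
        super().__init__()
        self.requestable = nn.Linear(enc_dim, len(REQUESTABLE))
        self.response = nn.Linear(enc_dim, len(RESPONSE_SLOTS))

    def forward(self, dialogue_encoding: torch.Tensor):
        req = torch.sigmoid(self.requestable(dialogue_encoding))   # multi-label requestables
        resp = torch.sigmoid(self.response(dialogue_encoding))     # slot binary classifiers
        return req, resp


heads = SlotHeads()
req_probs, resp_probs = heads(torch.randn(1, 128))
print({s: round(float(p), 2) for s, p in zip(RESPONSE_SLOTS, resp_probs[0])})
```

The extra supervision comes from training these binary heads on which slots actually occur in the gold response, so the generator is told, before decoding, which slot placeholders it should produce.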
499
What bottlenecks were identified?
Confusion in recognizing the words that are active at a given node, identified for a speech recognition solution developed for an Indian Railway Inquiry System.
Speech-based solutions have taken center stage with growth in the services industry, where there is a need to cater to a very large number of people from all strata of society. While natural language speech interfaces are the talk of the research community, in practice menu-based speech solutions thrive. Typically, in a menu-based speech solution the user is required to respond by speaking from a closed set of words when prompted by the system. A sequence of human speech responses to the IVR prompts results in the completion of a transaction. A transaction is deemed successful if the speech solution can correctly recognize all the spoken utterances of the user whenever prompted by the system. The usual mechanism to evaluate the performance of a speech solution is to test the system extensively by putting it to actual use and then evaluating the performance by analyzing the logs for successful transactions. This kind of evaluation could lead to dissatisfied test users, especially if the performance of the system were to result in a poor transaction completion rate. To mitigate this, the Wizard of Oz approach is adopted during evaluation of a speech system. Overall, this kind of evaluation is an expensive proposition both in terms of time and cost. In this paper, we propose a method to evaluate the performance of a speech solution without actually putting it to use. We first describe the methodology and then show experimentally that it can be used to identify the performance bottlenecks of the speech solution even before the system is actually deployed, thus saving evaluation time and expense.
There are several commercial menu-based ASR systems available around the world for a significant number of languages, and speech solutions based on these ASR systems are being used with good success in the Western part of the globe BIBREF0, BIBREF1, BIBREF2, BIBREF3. Typically, a menu-based ASR system restricts the user to speaking from a pre-defined closed set of words for enabling a transaction. Before commercial deployment of a speech solution, it is imperative to have a quantitative measure of its performance, which is primarily based on the speech recognition accuracy of the speech engine used. Generally, the recognition performance of any speech recognition based solution is quantitatively evaluated by putting it to actual use by the intended users and then analyzing the logs to identify successful and unsuccessful transactions. This evaluation is then used to identify any further improvements to the speech recognition based solution in order to better the overall transaction completion rates. This process of evaluation is both time consuming and expensive. For evaluation one needs to identify a set of users, identify the set of actual usage situations, and perform the test. It is also important that the set of users are able to use the system with ease, meaning that even in the test conditions the performance of the system should be good. Since this cannot usually be guaranteed, the need to keep the user experience good makes it necessary to employ a Wizard of Oz (WoZ) approach. Typically this requires a human agent in the loop during the actual speech transaction, where the human agent corrects any mis-recognition by listening to the conversation between the human user and the machine without the user knowing that there is a human agent in the loop. The use of WoZ is another expense in testing a speech solution. All this makes testing a speech solution an expensive and time-consuming procedure. In this paper, we describe a method to evaluate the performance of a speech solution without actual people using the system, as is usually done. We then show how this method was adopted to evaluate a speech recognition based solution as a case study. This is the main contribution of the paper. The rest of the paper is organized as follows. The method for evaluation without testing is described in Section SECREF2. In Section SECREF3 we present a case study and conclude in Section SECREF4.
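One way to make the idea of "evaluation without testing" concrete, and this is only an illustration consistent with the bottleneck mentioned in the answer above rather than the authors' exact procedure, is to check how confusable the active words at each menu node are with one another. Here plain orthographic similarity stands in for a proper acoustic or phonetic confusability measure, and the threshold is arbitrary.

```python
# Flag potential recognition bottlenecks at a menu node without live users:
# highly similar word pairs among the node's active vocabulary are likely
# to be confused by the recognizer.
from difflib import SequenceMatcher
from itertools import combinations


def confusable_pairs(active_words, threshold: float = 0.75):
    """Return word pairs at a node whose surface similarity exceeds the threshold."""
    flagged = []
    for a, b in combinations(active_words, 2):
        similarity = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if similarity >= threshold:
            flagged.append((a, b, round(similarity, 2)))
    return flagged


node_words = ["arrival", "departure", "reservation", "cancellation", "fare"]
print(confusable_pairs(node_words))
```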
505
By how much do they outperform BiLSTMs in Sentiment Analysis?
The proposed RCRN outperforms the ablative baselines: BiLSTM by +2.9% and 3L-BiLSTM by +1.1% on average across 16 datasets.
Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea behind our approach is to learn the recurrent gating functions using recurrent networks. Our architecture is split into two components - a controller cell and a listener cell whereby the recurrent controller actively influences the compositionality of the listener cell. We conduct extensive experiments on a myriad of tasks in the NLP domain such as sentiment analysis (SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs, suggesting that our controller architecture might be a suitable replacement for the widely adopted stacked architecture.
Recurrent neural networks (RNNs) live at the heart of many sequence modeling problems. In particular, the incorporation of gated additive recurrent connections is extremely powerful, leading to the pervasive adoption of models such as Gated Recurrent Units (GRU) BIBREF0 or Long Short-Term Memory (LSTM) BIBREF1 across many NLP applications BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In these models, the key idea is that the gating functions control information flow and compositionality over time, deciding how much information to read/write across time steps. This not only serves as a protection against vanishing/exploding gradients but also enables greater relative ease in modeling long-range dependencies. There are two common ways to increase the representation capability of RNNs. Firstly, the number of hidden dimensions could be increased. Secondly, recurrent layers could be stacked on top of each other in a hierarchical fashion BIBREF6 , with each layer's input being the output of the previous, enabling hierarchical features to be captured. Notably, the wide adoption of stacked architectures across many applications BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 signify the need for designing complex and expressive encoders. Unfortunately, these strategies may face limitations. For example, the former might run a risk of overfitting and/or hitting a wall in performance. On the other hand, the latter might be faced with the inherent difficulties of going deep such as vanishing gradients or difficulty in feature propagation across deep RNN layers BIBREF11 . This paper proposes Recurrently Controlled Recurrent Networks (RCRN), a new recurrent architecture and a general purpose neural building block for sequence modeling. RCRNs are characterized by its usage of two key components - a recurrent controller cell and a listener cell. The controller cell controls the information flow and compositionality of the listener RNN. The key motivation behind RCRN is to provide expressive and powerful sequence encoding. However, unlike stacked architectures, all RNN layers operate jointly on the same hierarchical level, effectively avoiding the need to go deeper. Therefore, RCRNs provide a new alternate way of utilizing multiple RNN layers in conjunction by allowing one RNN to control another RNN. As such, our key aim in this work is to show that our proposed controller-listener architecture is a viable replacement for the widely adopted stacked recurrent architecture. To demonstrate the effectiveness of our proposed RCRN model, we conduct extensive experiments on a plethora of diverse NLP tasks where sequence encoders such as LSTMs/GRUs are highly essential. These tasks include sentiment analysis (SST, IMDb, Amazon Reviews), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Experimental results show that RCRN outperforms BiLSTMs and multi-layered/stacked BiLSTMs on all 26 datasets, suggesting that RCRNs are viable replacements for the widely adopted stacked recurrent architectures. Additionally, RCRN achieves close to state-of-the-art performance on several datasets.
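The controller-listener interaction can be sketched as a recurrent cell in which a controller LSTM produces the gates that modulate a listener state. The sketch below is a simplified, single-direction reading of that idea; the gating equations, dimensions, and single-candidate formulation are illustrative and do not reproduce the paper's exact RCRN formulation.

```python
# Illustrative controller-listener cell: the controller LSTM processes the
# input, and its hidden state produces the forget/output gates applied to a
# separate listener memory. Not a faithful reimplementation of RCRN.
import torch
import torch.nn as nn


class ControlledListenerCell(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.controller = nn.LSTMCell(input_dim, hidden_dim)
        self.forget_gate = nn.Linear(hidden_dim, hidden_dim)
        self.output_gate = nn.Linear(hidden_dim, hidden_dim)
        self.candidate = nn.Linear(input_dim, hidden_dim)

    def forward(self, x, state):
        (hc, cc), cl = state                                  # controller state, listener memory
        hc, cc = self.controller(x, (hc, cc))                 # controller step
        f = torch.sigmoid(self.forget_gate(hc))               # gates come from the controller
        o = torch.sigmoid(self.output_gate(hc))
        cl = f * cl + (1.0 - f) * torch.tanh(self.candidate(x))
        hl = o * torch.tanh(cl)                               # listener output
        return hl, ((hc, cc), cl)


cell = ControlledListenerCell(input_dim=50, hidden_dim=64)
state = ((torch.zeros(2, 64), torch.zeros(2, 64)), torch.zeros(2, 64))
for x_t in torch.randn(10, 2, 50):                            # 10 time steps, batch of 2
    out, state = cell(x_t, state)
print(out.shape)
```

The key contrast with stacking is visible in the sketch: both recurrences consume the same input at the same hierarchical level, with one of them acting only through the other's gates.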
506
Which benchmark tasks did they experiment on?
They used the Stanford Sentiment Treebank benchmark for the sentiment classification task and the AG English news corpus for the text classification task.
We propose a multi-view network for text classification. Our method automatically creates various views of its input text, each taking the form of soft attention weights that distribute the classifier's focus among a set of base features. For a bag-of-words representation, each view focuses on a different subset of the text's words. Aggregating many such views results in a more discriminative and robust representation. Through a novel architecture that both stacks and concatenates views, we produce a network that emphasizes both depth and width, allowing training to converge quickly. Using our multi-view architecture, we establish new state-of-the-art accuracies on two benchmark tasks.
State-of-the-art deep neural networks leverage task-specific architectures to develop hierarchical representations of their input, with each layer building a refined abstraction of the layer that came before it BIBREF0 . For text classification, one can think of this as a single reader building up an increasingly refined understanding of the content. In a departure from this philosophy, we propose a divide-and-conquer approach, where a team of readers each focus on different aspects of the text, and then combine their representations to make a joint decision. More precisely, the proposed Multi-View Network (MVN) for text classification learns to generate several views of its input text. Each view is formed by focusing on different sets of words through a view-specific attention mechanism. These views are arranged sequentially, so each subsequent view can build upon or deviate from previous views as appropriate. The final representation that concatenates these diverse views should be more robust to noise than any one of its components. Furthermore, different sentences may look similar under one view but different under another, allowing the network to devote particular views to distinguishing between subtle differences in sentences, resulting in more discriminative representations. Unlike existing multi-view neural network approaches for image processing BIBREF1 , BIBREF2 , where multiple views are provided as part of the input, our MVN learns to automatically create views from its input text by focusing on different sets of words. Compared to deep Convolutional Networks (CNN) for text BIBREF3 , BIBREF0 , the MVN strategy emphasizes network width over depth. Shorter connections between each view and the loss function enable better gradient flow in the networks, which makes the system easier to train. Our use of multiple views is similar in spirit to the weak learners used in ensemble methods BIBREF4 , BIBREF5 , BIBREF6 , but our views produce vector-valued intermediate representations instead of classification scores, and all our views are trained jointly with feedback from the final classifier. Experiments on two benchmark data sets, the Stanford Sentiment Treebank BIBREF7 and the AG English news corpus BIBREF3 , show that 1) our method achieves very competitive accuracy, 2) some views distinguish themselves from others by better categorizing specific classes, and 3) when our base bag-of-words feature set is augmented with convolutional features, the method establishes a new state-of-the-art for both data sets.
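The view-creation step can be illustrated with the small module below: each view applies its own soft attention over the words of the input and yields a view-specific summary vector, and the concatenated views feed the classifier. The sequential dependencies between views used in the full model, the base feature set, and all dimensions are omitted or assumed for brevity.

```python
# Sketch of multi-view attention: each view learns its own attention weights
# over word representations; the views are concatenated for classification.
import torch
import torch.nn as nn


class MultiViewEncoder(nn.Module):
    def __init__(self, emb_dim: int = 100, num_views: int = 4, num_classes: int = 5):
        super().__init__()
        self.view_scorers = nn.ModuleList(nn.Linear(emb_dim, 1) for _ in range(num_views))
        self.classifier = nn.Linear(emb_dim * num_views, num_classes)

    def forward(self, word_embeddings):                     # (batch, num_words, emb_dim)
        views = []
        for scorer in self.view_scorers:
            weights = torch.softmax(scorer(word_embeddings), dim=1)   # per-view focus over words
            views.append((weights * word_embeddings).sum(dim=1))      # weighted sum per view
        return self.classifier(torch.cat(views, dim=-1))


model = MultiViewEncoder()
logits = model(torch.randn(3, 20, 100))                     # 3 sentences, 20 words each
print(logits.shape)                                         # torch.Size([3, 5])
```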
512
What accuracy is achieved by the speech recognition system?
Accuracy is not available; WER results are reported: 42.6 for German and 35.9 for English.
This paper investigates the differences occurring in the excitation for different voice qualities. Its goal is two-fold. First, a large corpus containing three voice qualities (modal, soft and loud) uttered by the same speaker is analyzed, and significant differences in characteristics extracted from the excitation are observed. Secondly, rules of modification derived from the analysis are used to build a voice quality transformation system applied as a post-process to HMM-based speech synthesis. The system is shown to effectively achieve the transformations while maintaining the delivered quality.
Since early times of computer-based speech synthesis research, voice quality (the perceived timbre of speech) analysis/modification has attracted interest of researchers BIBREF0. The topic of voice quality analysis finds application in various areas of speech processing such as high-quality parametric speech synthesis, expressive/emotional speech synthesis, speaker identification, emotion recognition, prosody analysis, speech therapy. Due to availability of reviews such as BIBREF1 and space limitations, a review of voice quality analysis methods will not be presented here. For voice quality analysis of speech corpora, it is common practice to estimate spectral parameters directly from speech signals such as relative harmonic amplitudes, or Harmonic to Noise Ratio (HNR). Although the voice quality variations are mainly considered to be controlled by the glottal source, glottal source estimation is considered to be problematic and hence avoided in the parameter estimation procedures for processing large speech corpora. In this work, we follow the not so common path and study the differences present in the glottal source signal parameters estimated via an automatic algorithm when a given speaker produces different voice qualities. Based on a parametric analysis of these latter (Section SECREF2), we further investigate the use of the information extracted from a large corpus, for voice quality modification of other speech databases in a HMM-based speech synthesizer (Section SECREF3).
513
By how much does their model outperform both the state-of-the-art systems?
With respect to ROUGE-1, their model outperforms by 0.98%, and with respect to ROUGE-L, by 0.45%.
This paper describes "TLT-school", a corpus of speech utterances collected in schools of northern Italy for assessing the performance of students learning both English and German. The corpus was recorded in the years 2017 and 2018 from students aged between nine and sixteen years, attending primary, middle and high school. All utterances have been scored, in terms of some predefined proficiency indicators, by human experts. In addition, most of the utterances recorded in 2017 have been carefully transcribed manually. Guidelines and procedures used for the manual transcription of utterances will be described in detail, as well as results achieved by means of an automatic speech recognition system developed by us. Part of the corpus is going to be freely distributed to the scientific community, particularly to those interested both in non-native speech recognition and in automatic assessment of second language proficiency.
We have acquired large sets of both written and spoken data during the implementation of campaigns aimed at assessing the proficiency, at school, of Italian pupils learning both German and English. Part of the acquired data has been included in a corpus, named "Trentino Language Testing" in schools (TLT-school), that will be described in the following. All the collected sentences have been annotated by human experts in terms of some predefined “indicators” which, in turn, were used to assign the proficiency level to each student undertaking the assigned test. This level is expressed according to the well-known Common European Framework of Reference for Languages (Council of Europe, 2001) scale. The CEFR defines 6 levels of proficiency: A1 (beginner), A2, B1, B2, C1 and C2. The levels considered in the evaluation campaigns where the data have been collected are: A1, A2 and B1. The indicators measure the linguistic competence of test takers both in relation to the content (e.g. grammatical correctness, lexical richness, semantic coherence, etc.) and to the speaking capabilities (e.g. pronunciation, fluency, etc.). Refer to Section SECREF2 for a description of the adopted indicators. The learners are Italian students, between 9 and 16 years old. They took proficiency tests by answering question prompts provided in written form. The “TLT-school” corpus, that we are going to make publicly available, contains part of the spoken answers (together with the respective manual transcriptions) recorded during some of the above mentioned evaluation campaigns. We will release the written answers in future. Details and critical issues found during the acquisition of the answers of the test takers will be discussed in Section SECREF2. The tasks that can be addressed by using the corpus are very challenging and pose many problems, which have only partially been solved by the interested scientific community. From the ASR perspective, major difficulties are represented by: a) recognition of both child and non-native speech, i.e. Italian pupils speaking both English and German, b) presence of a large number of spontaneous speech phenomena (hesitations, false starts, fragments of words, etc.), c) presence of multiple languages (English, Italian and German words are frequently uttered in response to a single question), d) presence of a significant level of background noise due to the fact that the microphone remains open for a fixed time interval (e.g. 20 seconds - depending on the question), and e) presence of non-collaborative speakers (students often joke, laugh, speak softly, etc.). Refer to Section SECREF6 for a detailed description of the collected spoken data set. Furthermore, since the sets of data from which “TLT-school” was derived were primarily acquired for measuring proficiency of second language (L2) learners, it is quite obvious to exploit the corpus for automatic speech rating. To this purpose, one can try to develop automatic approaches to reliably estimate the above-mentioned indicators used by the human experts who scored the answers of the pupils (such an approach is described in BIBREF0). However, it has to be noticed that scientific literature proposes to use several features and indicators for automatic speech scoring, partly different from those adopted in “TLT-school” corpus (see below for a brief review of the literature). 
Hence, we believe that adding new annotations to the corpus, related to particular aspects of language proficiency, can stimulate research and experimentation in this area. Finally, it is worth mentioning that the written responses in the “TLT-school” corpus are also characterised by a high level of noise, due to spelling errors, insertion of word fragments, presence of words belonging to multiple languages, and presence of off-topic answers (e.g. containing jokes, comments not related to the questions, etc.). This set of text data will allow scientists to investigate both the language and the behaviour of pupils learning second languages at school. Written data are described in detail in Section SECREF5. Relation to prior work. The scientific literature is rich in approaches for the automated assessment of spoken language proficiency. Performance is directly dependent on ASR accuracy which, in turn, depends on the type of input, read or spontaneous, and on the speakers' age, adults or children (see BIBREF1 for an overview of spoken language technology for education). A recent publication reporting an overview of state-of-the-art automated speech scoring technology as it is currently used at Educational Testing Service (ETS) can be found in BIBREF2. In order to address the automatic assessment of complex spoken tasks requiring more general communication capabilities from L2 learners, the AZELLA data set BIBREF3, developed by Pearson, has been collected and used as a benchmark in several studies BIBREF4, BIBREF3. The corpus contains 1,500 spoken tests, each double graded by human professionals, from a variety of tasks. A public set of spoken data has recently been distributed in a spoken CALL (Computer Assisted Language Learning) shared task in which Swiss students learning English had to answer both written and spoken prompts. The goal of this challenge is to label students' spoken responses as “accept” or “reject”. Refer to BIBREF5 for details of the challenge and of the associated data sets. Many non-native speech corpora (mostly with English as the target language) have been collected over the years. A list, though not recent, as well as a brief description of most of them, can be found in BIBREF6. The same paper also gives information on how the data sets are distributed and can be accessed (many of them are available through both the LDC and ELDA agencies). Some of the corpora also provide proficiency ratings to be used in CALL applications. Among them, we mention the ISLE corpus BIBREF7, which also contains transcriptions at the phonetic level and was used in the experiments reported in BIBREF0. Note that all corpora mentioned in BIBREF6 consist of adult speech, while, to our knowledge, access to publicly available non-native children's speech corpora, and to children's speech corpora in general, is still scarce. Specifically concerning non-native children's speech, we believe the following corpora are worth mentioning. The PF-STAR corpus (see BIBREF8) contains English utterances read by both Italian and German children, between 6 and 13 years old. The same corpus also contains utterances read by English children. The ChildIt corpus BIBREF9 contains English utterances (both read and imitated) by Italian children. By distributing the “TLT-school” corpus, we hope to help researchers investigate novel approaches and models in the areas of both non-native and children's speech and to build related benchmarks.
515
What is the size of their dataset?
10,001 utterances
In the medical domain, identifying and expanding abbreviations in clinical texts is a vital task for both better human and machine understanding. It is a challenging task because many abbreviations are ambiguous, especially in intensive care medicine texts, in which phrase abbreviations are frequently used. Besides the fact that there is no universal dictionary of clinical abbreviations and no universal rules for abbreviation writing, such texts are difficult to acquire, expensive to annotate and sometimes even confusing to domain experts. This paper proposes a novel and effective approach - exploiting task-oriented resources to learn word embeddings for expanding abbreviations in clinical notes. We achieved 82.27% accuracy, close to expert human performance.
Abbreviations and acronyms appear frequently in the medical domain. According to a popular online knowledge base, 197,787 of the 3,096,346 stored abbreviations are medical abbreviations, the largest share among all ten domains. An abbreviation can have over 100 possible explanations even within the medical domain. Medical record documentation, the authors of which are mainly physicians, other health professionals, and domain experts, is usually written under the pressure of time and high workload, requiring notation to be frequently compressed with shorthand jargon and acronyms. This is even more evident within intensive care medicine, where it is crucial that information is expressed in the most efficient manner possible to provide time-sensitive care to critically ill patients, but this can result in code-like messages with poor readability. For example, given a sentence written by a physician with specialty training in critical care medicine, “STAT TTE c/w RVS. AKI - no CTA. .. etc”, it is difficult for non-experts to understand all abbreviations without specific context and/or knowledge. But when a doctor reads this, he/she would know that although “STAT” is widely used as the abbreviation of “statistic”, “statistics” and “statistical” in most domains, in hospital emergency rooms it is often used to mean “immediately”. Within the arena of medical research, abbreviation expansion using a natural language processing system to automatically analyze clinical notes may enable knowledge discovery (e.g., relations between diseases) and has the potential to improve communication and quality of care. In this paper, we study the task of abbreviation expansion in clinical notes. As shown in Figure 1, our goal is to normalize all the abbreviations in intensive care unit (ICU) documentation to reduce misinterpretation and to make the texts accessible to a wider range of readers. To accurately capture the semantics of an abbreviation in its context, we adopt word embeddings, which can be seen as a distributional semantic representation and have been proven effective BIBREF0 for computing the semantic similarity between words based on context without any labeled data. The intuition of distributional semantics BIBREF1 is that if two words share similar contexts, they should have highly similar semantics. For example, in Figure 1, “RF” and “respiratory failure” have very similar contexts, so their semantics should be similar. If we know “respiratory failure” is a possible candidate expansion of “RF” and its semantics is similar to that of “RF” in intensive care medicine texts, we can determine that it should be the correct expansion of “RF”. Because intensive care medicine texts are a limited resource in which full expansions rarely appear, we exploit abundant and easily accessible task-oriented resources to enrich our dataset for training embeddings. To the best of our knowledge, we are the first to apply word embeddings to this task. Experimental results show that the embeddings trained on the task-oriented corpus are much more useful than those trained on other corpora. By combining the embeddings with domain-specific knowledge, we achieve 82.27% accuracy, which outperforms the baselines and is close to human performance.
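The core selection step described above - scoring candidate expansions by how close their embeddings are to the abbreviation's embedding - can be sketched as follows. This is a minimal illustration with toy vectors, not the paper's trained model; the embeddings, the candidate list for "RF", and the helper names are assumptions made for the example.

```python
import numpy as np

# Toy embeddings standing in for vectors trained on task-oriented ICU-related text.
embeddings = {
    "rf": np.array([0.9, 0.1, 0.3]),
    "respiratory_failure": np.array([0.88, 0.12, 0.28]),
    "rheumatoid_factor": np.array([0.2, 0.9, 0.1]),
    "radio_frequency": np.array([0.1, 0.2, 0.95]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand(abbrev, candidates, emb):
    """Pick the candidate expansion whose embedding is closest to the abbreviation's."""
    scored = [(cand, cosine(emb[abbrev], emb[cand])) for cand in candidates if cand in emb]
    return max(scored, key=lambda x: x[1])

print(expand("rf", ["respiratory_failure", "rheumatoid_factor", "radio_frequency"], embeddings))
# -> ('respiratory_failure', ~0.999)
```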
518
how was the dataset built?
Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: selecting queries whose first word is in a manually constructed set of indicator words and which are of sufficient length was found to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no".
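A rough sketch of the heuristic query-filtering step described in this answer might look like the following; the indicator-word list and the length threshold are illustrative guesses, not the values actually used for BoolQ.

```python
# Hypothetical indicator words and length threshold; the dataset's real lists may differ.
YES_NO_INDICATORS = {"is", "are", "was", "were", "do", "does", "did",
                     "can", "could", "will", "has", "have"}
MIN_TOKENS = 4

def looks_like_yes_no_question(query: str) -> bool:
    tokens = query.lower().split()
    return len(tokens) >= MIN_TOKENS and tokens[0] in YES_NO_INDICATORS

queries = ["is washington dc a state",
           "weather boston",
           "does the uk have a written constitution"]
print([q for q in queries if looks_like_yes_no_question(q)])
```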
In this paper, we describe a methodology to infer Bullish or Bearish sentiment towards companies/brands. More specifically, our approach leverages affective lexica and word embeddings in combination with convolutional neural networks to infer the sentiment of financial news headlines towards a target company. Such architecture was used and evaluated in the context of the SemEval 2017 challenge (task 5, subtask 2), in which it obtained the best performance.
Real time information is key for decision making in highly technical domains such as finance. The explosive growth of the financial technology (Fintech) industry continued in 2016, partially due to the current market interest in Artificial Intelligence-based technologies. Opinion-rich texts such as micro-blogs and news can have an important impact on the financial sector (e.g. a rise or fall in stock value) or on the overall economy (e.g. the Greek public debt crisis). In such a context, having granular access to the opinions of an important part of the population is of key importance to any public and private actor in the field. In order to take advantage of this raw data, it is thus necessary to develop machine learning methods that convert unstructured text into information that can be managed and exploited. In this paper, we address the sentiment analysis problem applied to financial headlines, where the goal is, for a given news headline and target company, to infer its polarity score, i.e. how positive (or negative) the sentence is with respect to the target company. Previous research BIBREF0 has highlighted the association between news items and market fluctuations; hence, in the financial domain, sentiment analysis can be used as a proxy for a bullish (i.e. positive, upwards trend) or bearish (i.e. negative, downwards trend) attitude towards a specific financial actor, allowing one to identify and monitor in real time the sentiment associated with, e.g., stocks or brands. Our contribution leverages pre-trained word embeddings (GloVe, trained on the Wikipedia+Gigaword corpus), the DepecheMood affective lexicon, and convolutional neural networks.
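A minimal sketch of the kind of convolutional architecture described here - word embeddings fed to a 1-D convolution, max-pooled over time, then mapped to a polarity score in [-1, 1] - is shown below. The dimensions, vocabulary size, and single-filter design are illustrative assumptions, and the DepecheMood features used in the submitted system are omitted.

```python
import torch
import torch.nn as nn

class HeadlineCNN(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64, kernel_size=3):
        super().__init__()
        # In the described system the embedding layer would be initialised with GloVe vectors.
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size)
        self.out = nn.Linear(n_filters, 1)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embedding(token_ids)                # (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)                        # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))                 # (batch, n_filters, seq_len - k + 1)
        x = x.max(dim=2).values                      # global max pooling over time
        return torch.tanh(self.out(x)).squeeze(-1)   # polarity score in [-1, 1]

model = HeadlineCNN()
dummy_batch = torch.randint(0, 10000, (2, 12))       # two headlines of 12 tokens each
print(model(dummy_batch).shape)                       # torch.Size([2])
```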
519
what processing was done on the speeches before being parsed?
Remove numbers and interjections
In this paper we study yes/no questions that are naturally occurring --- meaning that they are generated in unprompted and unconstrained settings. We build a reading comprehension dataset, BoolQ, of such questions, and show that they are unexpectedly challenging. They often query for complex, non-factoid information, and require difficult entailment-like inference to solve. We also explore the effectiveness of a range of transfer learning baselines. We find that transferring from entailment data is more effective than transferring from paraphrase or extractive QA data, and that it, surprisingly, continues to be very beneficial even when starting from massive pre-trained language models such as BERT. Our best method trains BERT on MultiNLI and then re-trains it on our train set. It achieves 80.4% accuracy compared to 90% accuracy of human annotators (and 62% majority-baseline), leaving a significant gap for future work.
Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding. In many cases, these inferences can go well beyond what is immediately stated in the text. For example, a simple sentence like “Hanna Huyskova won the gold medal for Belarus in freestyle skiing." implies that (1) Belarus is a country, (2) Hanna Huyskova is an athlete, (3) Belarus won at least one Olympic event, (4) the USA did not win the freestyle skiing event, and so on. Work completed while interning at Google. Also affiliated with Columbia University, work done at Google. To test a model's ability to make these kinds of inferences, previous work in natural language inference (NLI) proposed the task of labeling candidate statements as being entailed or contradicted by a given passage. However, in practice, generating candidate statements that test for complex inferential abilities is challenging. For instance, evidence suggests BIBREF0 , BIBREF1 , BIBREF2 that simply asking human annotators to write candidate statements will result in examples that typically only require surface-level reasoning. In this paper we propose an alternative: we test models on their ability to answer naturally occurring yes/no questions. That is, questions that were authored by people who were not prompted to write particular kinds of questions, including even being required to write yes/no questions, and who did not know the answer to the question they were asking. Figure contains some examples from our dataset. We find such questions often query for non-factoid information, and that human annotators need to apply a wide range of inferential abilities when answering them. As a result, they can be used to construct highly inferential reading comprehension datasets that have the added benefit of being directly related to the practical end-task of answering user yes/no questions. Yes/No questions do appear as a subset of some existing datasets BIBREF3 , BIBREF4 , BIBREF5 . However, these datasets are primarily intended to test other aspects of question answering (QA), such as conversational QA or multi-step reasoning, and do not contain naturally occurring questions. We follow the data collection method used by Natural Questions (NQ) BIBREF6 to gather 16,000 naturally occurring yes/no questions into a dataset we call BoolQ (for Boolean Questions). Each question is paired with a paragraph from Wikipedia that an independent annotator has marked as containing the answer. The task is then to take a question and passage as input, and to return “yes" or “no" as output. Figure contains some examples, and Appendix SECREF17 contains additional randomly selected examples. Following recent work BIBREF7 , we focus on using transfer learning to establish baselines for our dataset. Yes/No QA is closely related to many other NLP tasks, including other forms of question answering, entailment, and paraphrasing. Therefore, it is not clear what the best data sources to transfer from are, or if it will be sufficient to just transfer from powerful pre-trained language models such as BERT BIBREF8 or ELMo BIBREF9 . We experiment with state-of-the-art unsupervised approaches, using existing entailment datasets, three methods of leveraging extractive QA data, and using a few other supervised datasets. We found that transferring from MultiNLI, and the unsupervised pre-training in BERT, gave us the best results. 
Notably, we found these approaches are surprisingly complementary and can be combined to achieve a large gain in performance. Overall, our best model reaches 80.43% accuracy, compared to 62.31% for the majority baseline and 90% human accuracy. In light of the fact BERT on its own has achieved human-like performance on several NLP tasks, this demonstrates the high degree of difficulty of our dataset. We present our data and code at https://goo.gl/boolq.
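The transfer recipe described above - taking a pre-trained BERT model (optionally already fine-tuned on MultiNLI) and re-training it on BoolQ as a two-class classifier over question/passage pairs - could be sketched roughly as follows with the Hugging Face transformers library. This is a simplified illustration rather than the authors' code; the checkpoint name, label convention, and training loop details are placeholders.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# "bert-base-uncased" is a placeholder; one could instead start from a checkpoint
# already fine-tuned on MultiNLI before re-training on BoolQ.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

question = "does the uk have a written constitution"
passage = "The constitution of the United Kingdom is uncodified ..."
inputs = tokenizer(question, passage, truncation=True, return_tensors="pt")
label = torch.tensor([0])  # 0 = "no", 1 = "yes" (an assumed label convention)

outputs = model(**inputs, labels=label)
outputs.loss.backward()    # a fine-tuning step would follow with an optimizer
print(outputs.logits)      # unnormalised scores for "no" / "yes"
```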
521
Which sentiment analysis data set has a larger performance drop when a 10% error is introduced?
SST-2 dataset
Entity population, a task of collecting entities that belong to a particular category, has attracted attention from vertical domains. There is still a high demand for creating entity dictionaries in vertical domains, which are not covered by existing knowledge bases. We develop a lightweight front-end tool for facilitating interactive entity population. We implement key components necessary for effective interactive entity population: 1) GUI-based dashboards to quickly modify an entity dictionary, and 2) entity highlighting on documents for quickly viewing the current progress. We aim to reduce user cost from beginning to end, including package installation and maintenance. The implementation enables users to use this tool on their web browsers without any additional packages; users can focus on their missions to create entity dictionaries. Moreover, an entity expansion module is implemented as external APIs. This design makes it easy to continuously improve interactive entity population pipelines. We are making our demo publicly available (http://bit.ly/luwak-demo).
Entity extraction is one of the most important NLP components. Most NLP tools (e.g., NLTK, Stanford CoreNLP, etc.), including commercial services (e.g., Google Cloud API, Alchemy API, etc.), provide entity extraction functions to recognize named entities (e.g., PERSON, LOCATION, ORGANIZATION, etc.) from texts. Some studies have defined fine-grained entity types and developed extraction methods BIBREF0 based on these types. However, these methods cannot comprehensively cover domain-specific entities. For instance, a real estate search engine needs housing equipment names to index these terms for providing fine-grained search conditions. There is a significant demand for constructing user-specific entity dictionaries, such as cuisine and ingredient names for restaurant services. A straightforward solution is to prepare a set of these entity names as a domain-specific dictionary. Therefore, this paper focuses on the entity population task, which is the task of collecting entities that belong to an entity type required by a user. We develop LUWAK, a lightweight tool for effective interactive entity population. The key features are four-fold: We think these features are key components for effective interactive entity population. We choose an interactive user feedback strategy for entity population in LUWAK. A major approach to entity population is bootstrapping, which uses several entities that have been prepared as a seed set for finding new entities. These new entities are then integrated into the initial seed set to create a new seed set. The bootstrapping approach usually repeats this procedure until it has collected a sufficient number of entities. The framework cannot prevent the incorporation of incorrect entities that do not belong to the entity type unless user interaction is introduced between iterations. This problem is commonly called semantic drift BIBREF1. Therefore, we consider user interaction, in which feedback is given on expanded candidates, essential to maintaining the quality of an entity set. LUWAK implements fundamental functions for entity population, including (a) importing an initial entity set, (b) generating entity candidates, (c) obtaining user feedback, and (d) publishing the populated entity dictionary. We aim to reduce the user's total workload as a key metric of an entity population tool. That is, an entity population tool should provide the easiest and fastest solution for collecting entities of a particular entity type. User interaction cost is a dominant factor in the entire workload of an interactive tool. Thus, we carefully design the user interface so that users can give feedback to the tool intuitively. Furthermore, we also consider end-to-end user cost reduction. We adhere to the concept of developing installation-free software to distribute the tool among a wide variety of users, including non-technical ones. This lightweight design might speed up the whole interactive entity population workflow. Furthermore, this advantage might be beneficial for continuously improving the whole pipeline of an interactive entity population system.
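The interactive loop sketched in this description - expand the current seed set, collect user accept/reject feedback, and fold accepted entities back in to keep semantic drift in check - might look roughly like the following. The expansion and feedback functions are placeholders for illustration and are not part of the LUWAK codebase.

```python
def expand_candidates(seed_set):
    """Placeholder for an entity-expansion API call (e.g. embedding nearest neighbours)."""
    raise NotImplementedError

def ask_user(candidates):
    """Placeholder for the GUI step: returns the subset of candidates the user accepts."""
    raise NotImplementedError

def interactive_population(seed_set, n_rounds=5):
    """Grow an entity dictionary over several rounds of expansion plus user feedback."""
    dictionary = set(seed_set)
    for _ in range(n_rounds):
        candidates = [c for c in expand_candidates(dictionary) if c not in dictionary]
        accepted = ask_user(candidates)   # user feedback prevents semantic drift
        if not accepted:
            break
        dictionary |= set(accepted)
    return dictionary
```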
522
How much is pre-training loss increased in Low/Medium/Hard level of pruning?
The increase is roughly linear: on average about 2.0 at the lowest level, around 3.5 at the medium level, and about 6.0 at the largest.
Pre-trained language models such as BERT are known to perform exceedingly well on various NLP tasks and have even established new State-Of-The-Art (SOTA) benchmarks for many of these tasks. Owing to its success on various tasks and benchmark datasets, industry practitioners have started to explore BERT to build applications solving industry use cases. These use cases are known to have much more noise in the data as compared to benchmark datasets. In this work we systematically show that when the data is noisy, there is a significant degradation in the performance of BERT. Specifically, we performed experiments using BERT on popular tasks such as sentiment analysis and textual similarity. For this, we work with three well-known datasets - IMDB movie reviews, SST-2 and STS-B - to measure the performance. Further, we examine the reason behind this performance drop and identify the shortcomings in the BERT pipeline.
In recent times, pre-trained contextual language models have led to significant improvements in performance for many NLP tasks. Among the family of these models, the most popular one is BERT BIBREF0, which is also the focus of this work. The strength of the BERT model FIGREF2 stems from its transformer BIBREF1 based encoder architecture FIGREF1. While it is still not very clear why BERT along with its embeddings works so well for downstream tasks when it is fine-tuned, there has been some work in this direction that gives some important clues BIBREF2, BIBREF3. At a high level, BERT's pipeline looks as follows: given an input sentence, BERT tokenizes it using the wordPiece tokenizer BIBREF4. The tokens are then fed as input to the BERT model and it learns contextualized embeddings for each of those tokens. It does so via pre-training on two tasks - Masked Language Model (MLM) BIBREF0 and Next Sentence Prediction (NSP) BIBREF0. The focus of this work is to understand the issues that a practitioner can run into while trying to use BERT for building NLP applications in industrial settings. It is a well known fact that NLP applications in industrial settings often have to deal with noisy data. There are different kinds of possible noise, namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non-standard syntactic constructions and spelling variations, grammatically incorrect text, and a mixture of two or more languages, to name a few. Such noisy data is a hallmark of user-generated text content and is commonly found on social media, in chats, online reviews, and web forums. Owing to this noise, a common issue that NLP models have to deal with is Out Of Vocabulary (OOV) words. These are words that are found in test and production data but are not part of the training data. In this work we highlight how BERT fails to handle Out Of Vocabulary (OOV) words, given its limited vocabulary. We show that this negatively impacts the performance of BERT when working with user-generated text data and evaluate the same. This evaluation is motivated by the business use case we are solving, where we are building a dialogue system to screen candidates for blue-collar jobs. Our candidate users, coming from underprivileged backgrounds, are often high school graduates. This, coupled with the 'fat finger' problem on a mobile keypad, leads to a lot of typos and spelling mistakes in the responses sent to the dialogue system. Hence, for this work we focus on spelling mistakes as the noise in the data. While this work is motivated by our business use case, our findings are applicable across various use cases in industry - be it sentiment classification on Twitter data or topic detection on a web forum. To simulate noise in the data, we begin with a clean dataset and introduce spelling errors in a fraction of the words present in it. These words are chosen randomly. We explain this process in detail later. The spelling mistakes introduced mimic the typographical errors introduced by our users. We then use the BERT model for the tasks on both clean and noisy datasets and compare the results. We show that the introduction of noise leads to a significant drop in performance of the BERT model for the task at hand as compared to the clean dataset. We further show that as we increase the amount of noise in the data, the performance degrades sharply.
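The noise-injection procedure described here - picking a fraction of words at random and corrupting them with keyboard-style typos - can be sketched as below. The specific corruption operations and the 10% default rate are illustrative choices; the paper's exact procedure may differ.

```python
import random

def corrupt_word(word):
    """Apply one random character-level edit (delete, swap, or duplicate)."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    op = random.choice(["delete", "swap", "duplicate"])
    if op == "delete":
        return word[:i] + word[i + 1:]
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word[:i] + word[i] + word[i:]

def add_spelling_noise(text, error_rate=0.10, seed=0):
    """Corrupt roughly `error_rate` of the words in `text`, chosen at random."""
    random.seed(seed)
    words = text.split()
    return " ".join(corrupt_word(w) if random.random() < error_rate else w for w in words)

print(add_spelling_noise("this movie was absolutely wonderful and the acting was great",
                         error_rate=0.2))
```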
524
What is the average length of the recordings?
40 minutes
In (Yang et al. 2016), a hierarchical attention network (HAN) is created for document classification. The attention layer can be used to visualize text influential in classifying the document, thereby explaining the model's prediction. We successfully applied HAN to a sequential analysis task in the form of real-time monitoring of turn taking in conversations. However, we discovered instances where the attention weights were uniform at the stopping point (indicating all turns were equivalently influential to the classifier), preventing meaningful visualization for real-time human review or classifier improvement. We observed that attention weights for turns fluctuated as the conversations progressed, indicating turns had varying influence based on conversation state. Leveraging this observation, we develop a method to create more informative real-time visuals (as confirmed by human reviewers) in cases of uniform attention weights using the changes in turn importance as a conversation progresses over time.
The attention mechanism BIBREF1 in neural networks can be used to interpret and visualize model behavior by selecting the most pertinent pieces of information instead of all available information. For example, in BIBREF0 , a hierarchical attention network (Han) is created and tested on the classification of product and movie reviews. As a side effect of employing the attention mechanism, sentences (and words) that are considered important to the model can be highlighted, and color intensity corresponds to the level of importance (darker color indicates higher importance). Our application is the escalation of Internet chats. To maintain quality of service, users are transferred to human representatives when their conversations with an intelligent virtual assistant (IVA) fail to progress. These transfers are known as escalations. We apply Han to such conversations in a sequential manner by feeding each user turn to Han as they occur, to determine if the conversation should escalate. If so, the user will be transferred to a live chat representative to continue the conversation. To help the human representative quickly determine the cause of the escalation, we generate a visualization of the user's turns using the attention weights to highlight the turns influential in the escalation decision. This helps the representative quickly scan the conversation history and determine the best course of action based on problematic turns. Unfortunately, there are instances where the attention weights for every turn at the point of escalation are nearly equal, requiring the representative to carefully read the history to determine the cause of escalation unassisted. Table TABREF1 shows one such example with uniform attention weights at the point of escalation. Our application requires that the visualizations be generated in real-time at the point of escalation. The user must wait for the human representative to review the IVA chat history and resume the failed task. Therefore, we seek visualization methods that do not add significant latency to the escalation transfer. Using the attention weights for turn influence is fast as they were already computed at the time of classification. However, these weights will not generate useful visualizations for the representatives when their values are similar across all turns (see Han Weight in Table TABREF1 ). To overcome this problem, we develop a visualization method to be applied in the instances where the attention weights are uniform. Our method produces informative visuals for determining influential samples in a sequence by observing the changes in sample importance over the cumulative sequence (see Our Weight in Table TABREF1 ). Note that we present a technique that only serves to resolve situations when the existing attention weights are ambiguous; we are not developing a new attention mechanism, and, as our method is external, it does not require any changes to the existing model to apply. To determine when the turn weights are uniform, we use perplexity BIBREF2 (see more details in subsection SECREF4 ). If a conversation INLINEFORM0 escalates on turn INLINEFORM1 with attention weights INLINEFORM2 , let INLINEFORM3 . Intuitively, INLINEFORM4 should be low when uniformity is high. We measure the INLINEFORM5 of every escalated conversation and provide a user-chosen uniformity threshold for INLINEFORM6 (Figure FIGREF2 ). 
For example, if the INLINEFORM7 threshold for uniformity is INLINEFORM8 , 20% of conversations in our dataset will result in Han visuals where all turns have similar weight; thus, no meaningful visualization can be produced. Companies that deploy IVA solutions for customer service report escalated conversation volumes of INLINEFORM9 per day for one IVA BIBREF3 . Therefore, even at 20%, contact centers handling multiple companies may see hundreds or thousands of conversations per day with no visualizations. If we apply our method in instances where Han weights are uniform, all conversations become non-uniform using the same INLINEFORM10 threshold for INLINEFORM11 , enabling visualization to reduce human effort.
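The uniformity check described above - treating the turn-level attention weights as a distribution and flagging conversations whose perplexity (exponentiated entropy) is close to its maximum, i.e. the number of turns - can be sketched as follows. The normalisation by the number of turns and the threshold value are illustrative approximations, not the authors' exact formulation.

```python
import numpy as np

def attention_perplexity(weights):
    """Perplexity of the attention distribution; equals len(weights) when perfectly uniform."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    entropy = -np.sum(w * np.log(w + 1e-12))
    return float(np.exp(entropy))

def is_uniform(weights, threshold=0.95):
    # Ratio near 1 means the weights are near-uniform and the HAN visual is uninformative,
    # so the alternative visualization method would be applied instead.
    return attention_perplexity(weights) / len(weights) >= threshold

print(is_uniform([0.34, 0.33, 0.33]))   # True: nearly uniform attention over turns
print(is_uniform([0.80, 0.15, 0.05]))   # False: one turn clearly dominates
```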
527
What is the prediction accuracy of the model?
Mean prediction accuracy of 0.99582651; S&P 500 accuracy of 0.99582651
This paper presents a new annotated corpus of 513 anonymized radiology reports written in Spanish. Reports were manually annotated with entities, negation and uncertainty terms and relations. The corpus was conceived as an evaluation resource for named entity recognition and relation extraction algorithms, and as input for the use of supervised methods. Biomedical annotated resources are scarce due to confidentiality issues and associated costs. This work provides some guidelines that could help other researchers to undertake similar tasks.
Annotated corpora from the biomedical domain, in particular for non-English texts, are scarce. There are two main reasons for this: the generation of new annotated data is expensive due to the need for expert knowledge, and privacy issues arise because neither the patient nor the physician should be identifiable from the texts. So, although annotated data are a highly valuable asset for the research community, they are very difficult to access. We are interested in supporting physicians with automatic text processing methods, such as named entity recognition (NER), relation extraction (RE), and negation and uncertainty detection in Spanish radiology reports. The extraction of entities and relations from the reports could suggest possible medical problems that might lead to surgical interventions, as seen in Do:2013, Morioka:2016 and Lakhani:2009. To the best of our knowledge, there are no publicly available annotated datasets of Spanish medical reports for these tasks. For this reason, this work focuses on creating an annotated corpus of Spanish radiology reports. There are some datasets available for other languages in the clinical domain, e.g. for English BIBREF0, BIBREF1, BIBREF2, Swedish BIBREF3, French BIBREF4, Polish BIBREF5 and German BIBREF6. Oronoz:2015:corpus presented an annotated dataset in Spanish for adverse drug reaction analysis. There are different kinds of medical reports. In our case, reports are very short, sentences are not always well formed and many of them have a telegraphic style. They contain spelling mistakes, and the use of non-standard abbreviations and acronyms is frequent. This, added to the specialized language of the medical domain, makes the annotation task difficult. This work describes the annotation schema, the main guidelines and a brief analysis of the resulting corpus. We are evaluating the possibility of releasing the dataset publicly.
528
How does the SCAN dataset evaluate compositional generalization?
it systematically holds out from the training set inputs containing the basic primitive verb "jump", and tests on sequences containing that verb.
Stock price prediction is important for value investments in the stock market. In particular, short-term prediction that exploits financial news articles has been promising in recent years. In this paper, we propose a novel deep neural network, DP-LSTM, for stock price prediction, which incorporates news articles as hidden information and integrates different news sources through a differential privacy mechanism. First, based on the autoregressive moving average model (ARMA), a sentiment-ARMA is formulated by taking into consideration the information of financial news articles in the model. Then, an LSTM-based deep neural network is designed, which consists of three components: LSTM, the VADER model and a differential privacy (DP) mechanism. The proposed DP-LSTM scheme can reduce prediction errors and increase robustness. Extensive experiments on S&P 500 stocks show that (i) the proposed DP-LSTM achieves a 0.32% improvement in the mean MPA of the prediction results, and (ii) for the prediction of the market index S&P 500, we achieve up to a 65.79% improvement in MSE.
Stock prediction is crucial for quantitative analysts and investment companies. Stocks' trends, however, are affected by many factors such as interest rates, inflation rates and financial news [12]. To predict stock prices accurately, one must use this variable information. In particular, in the banking industry and financial services, armies of analysts are dedicated to poring over, analyzing, and attempting to quantify qualitative data from news. A large amount of stock trend information is extracted from the large amount of text and quantitative information that is involved in the analysis. Investors may judge on the basis of technical analysis, such as charts of a company and market indices, and on textual information such as news blogs or newspapers. It is, however, difficult for investors to analyze and predict market trends according to all of this information [22]. Many artificial intelligence approaches have been investigated to automatically predict those trends [3], for instance investment simulation analysis with artificial markets, or stock trend analysis with a lexical-cohesion-based metric of the sentiment polarity of financial news. Quantitative analysis today is heavily dependent on data. However, the majority of such data is unstructured text that comes from sources like financial news articles. The challenge is not only the amount of data that is involved, but also the kind of language that is used in it to express sentiments, including emoticons. Sifting through huge volumes of this text data is difficult as well as time-consuming. It also requires a great deal of resources and expertise to analyze all of it [4]. To solve the above problem, in this paper we use sentiment analysis to extract information from textual data. Sentiment analysis is the automated process of understanding an opinion about a given subject from news articles [5]. The analyzed data quantify the reactions or sentiments of the general public toward people, ideas or certain products and reveal the information's contextual polarity. Sentiment analysis allows us to understand whether newspapers are talking positively or negatively about the financial market, and to get key insights about a stock's future market trend. We use the valence aware dictionary and sentiment reasoner (VADER) to extract sentiment scores. VADER is a lexicon- and rule-based sentiment analysis tool attuned specifically to sentiments expressed in social media [6]. VADER has been found to be quite successful when dealing with NY Times editorials and social media texts. This is because VADER not only reports the negativity and positivity scores but also tells us how positive or negative a sentiment is. However, news reports are not all objective. If we rely fully on the information extracted from the news for prediction, we may increase bias because of some non-objective reports. Therefore, in order to enhance the prediction model's robustness, we adopt a differential privacy (DP) method. DP is a system for sharing information about a dataset publicly by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. DP can be achieved if we are willing to add random noise to the result. For example, rather than simply reporting the sum, we can inject noise from a Laplace or Gaussian distribution, producing a result that's not quite exact but that masks the contents of any given row.
In the last several years a promising approach to private data analysis has emerged, based on DP, which ensures that an analysis outcome is "roughly as likely" to occur independent of whether any individual opts in to, or opts out of, the database. In consequence, any one individual's specific data can never greatly affect the results. General techniques for ensuring DP have now been proposed, and many data mining tasks can be carried out in a DP manner, frequently with very accurate results [21]. We propose a DP-LSTM neural network, which increases prediction accuracy and model robustness at the same time. The remainder of the paper is organized as follows. In Section 2, we introduce the stock price model, the sentiment analysis and the differential privacy method. In Section 3, we develop the differential privacy-inspired LSTM (DP-LSTM) deep neural network and present the training details. Prediction results are provided in Section 4. Section 5 concludes the paper.
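The two ingredients introduced here - a VADER compound sentiment score per headline and Laplace-noise perturbation in the spirit of differential privacy - can be sketched together as follows. The noise scale is an arbitrary illustrative value, and this snippet shows only the pre-processing idea, not the DP-LSTM network itself.

```python
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def noisy_sentiment(headlines, scale=0.1, seed=0):
    """Mean VADER compound score over a day's headlines, perturbed with Laplace noise."""
    rng = np.random.default_rng(seed)
    scores = [analyzer.polarity_scores(h)["compound"] for h in headlines]
    return float(np.mean(scores) + rng.laplace(0.0, scale))

day_news = [
    "Company X posts record quarterly earnings",
    "Regulators open investigation into Company X",
]
print(noisy_sentiment(day_news))
```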
530
What are the baseline systems that are compared against?
The system is compared to baseline models: LSTM, RL-SPINN and Gumbel Tree-LSTM
We present pre-training approaches for self-supervised representation learning of speech data. A BERT-style masked language model loss on discrete features is compared with an InfoNCE-based contrastive loss on continuous speech features. The pre-trained models are then fine-tuned with a Connectionist Temporal Classification (CTC) loss to predict target character sequences. To study the impact of stacking multiple feature learning modules trained using different self-supervised loss functions, we test the discrete and continuous BERT pre-training approaches on spectral features and on learned acoustic representations, showing synergistic behaviour between acoustically motivated and masked language model loss functions. In low-resource conditions using only 10 hours of labeled data, we achieve Word Error Rates (WER) of 10.2% and 23.5% on the standard test "clean" and "other" benchmarks of the Librispeech dataset, which is almost on par with previously published work that uses 10 times more labeled data. Moreover, compared to previous work that uses two models in tandem, by using one model for both BERT pre-training and fine-tuning, our model provides an average relative WER reduction of 9%.
Representation learning has been an active research area for more than 30 years BIBREF1, with the goal of learning high-level representations which separate different explanatory factors of the phenomena represented by the input data BIBREF2, BIBREF3. Disentangled representations provide models with an exponentially higher ability to generalize, using a small amount of labels, to new conditions by combining multiple sources of variation. Building Automatic Speech Recognition (ASR) systems, for example, requires a large volume of training data to represent different factors contributing to the creation of speech signals, e.g. background noise, recording channel, speaker identity, accent, emotional state, topic under discussion, and the language used in communication. The practical need for building ASR systems for new conditions with limited resources spurred a lot of work focused on unsupervised speech recognition and representation learning BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, in addition to semi- and weakly-supervised learning techniques aiming at reducing the supervised data needed in real-world scenarios BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17. Recently, impressive results have been reported for representation learning that generalizes to different downstream tasks, through self-supervised learning for text and speech BIBREF18, BIBREF19, BIBREF10, BIBREF11, BIBREF0. Self-supervised representation learning is done through tasks that predict masked parts of the input, reconstruct inputs through low bit-rate channels, or contrast similar data points against different ones. Different from BIBREF0, where a BERT-like model is trained with the masked language model loss, frozen, and then used as a feature extractor in tandem with a final fully supervised convolutional ASR model BIBREF20, in this work our “Discrete BERT” approach achieves an average relative Word Error Rate (WER) reduction of 9% by pre-training and fine-tuning the same BERT model using a Connectionist Temporal Classification BIBREF21 loss. In addition, we present a new approach for pre-training bi-directional transformer models on continuous speech data using the InfoNCE loss BIBREF10 – dubbed “continuous BERT”. To understand the nature of their learned representations, we train models using the continuous and the discrete BERT approaches on spectral features, e.g. Mel-frequency cepstral coefficients (MFCC), as well as on pre-trained Wav2vec features BIBREF22. These comparisons provide insights into how complementary the acoustically motivated contrastive loss function is to the masked language model one. The unsupervised and semi-supervised ASR approaches are in need of test suites like the unified downstream tasks available for language representation models BIBREF18. BIBREF23, BIBREF24, BIBREF25 evaluated semi-supervised self-labeling WER performance on the standard test “clean” and test “other” benchmarks of the Librispeech dataset BIBREF26 when using only a 100-hour subset as labeled data. BIBREF22, BIBREF0, BIBREF10 use the same 960h of Librispeech data as unlabeled pre-training data; however, they use Phone Error Rates (PER) on the 3h TIMIT dataset BIBREF27 as their performance metric. The zero-resource ASR literature BIBREF7, BIBREF28 uses the ABX task to evaluate the quality of learned features.
To combine the best of these evaluation approaches, we pre-train our models on the unlabeled 960h of Librispeech data, with a close-to-zero supervised set of only 1 hour and 10 hours, sampled equally from the “clean” and “other” conditions of Librispeech. We then report final WER performance on its standard dev and test sets. Using our proposed approaches we achieve a best WER of 10.2% and 23.5% on the clean and other subsets respectively, which is competitive with previous work that uses 100h of labeled data.
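For concreteness, an InfoNCE-style contrastive objective of the kind used in the continuous BERT variant can be written as a cross-entropy over similarity scores, where each masked position's predicted representation must pick out its true (positive) target among distractors from the same batch. The sketch below is a generic InfoNCE implementation, not the authors' exact formulation; the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(predictions, targets, temperature=0.1):
    """predictions, targets: (N, D) tensors; row i of targets is the positive for row i."""
    predictions = F.normalize(predictions, dim=-1)
    targets = F.normalize(targets, dim=-1)
    logits = predictions @ targets.t() / temperature      # (N, N) similarity matrix
    labels = torch.arange(predictions.size(0), device=predictions.device)
    return F.cross_entropy(logits, labels)                # positives lie on the diagonal

pred = torch.randn(8, 256)
pos = pred + 0.05 * torch.randn(8, 256)   # targets close to their predictions
print(info_nce_loss(pred, pos).item())
```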
531
What systems are tested?
BULATS i-vector/PLDA; BULATS x-vector/PLDA; VoxCeleb x-vector/PLDA; PLDA adaptation (X1); Extractor fine-tuning (X2)
There has been considerable attention devoted to models that learn to jointly infer an expression's syntactic structure and its semantics. Yet, Nangia and Bowman (2018) have recently shown that the current best systems fail to learn the correct parsing strategy on mathematical expressions generated from a simple context-free grammar. In this work, we present a recursive model inspired by Choi et al. (2018) that reaches near-perfect accuracy on this task. Our model is composed of two separate modules for syntax and semantics. They are cooperatively trained with standard continuous and discrete optimisation schemes. Our model does not require any linguistic structure for supervision, and its recursive nature allows for out-of-domain generalisation. Additionally, our approach performs competitively on several natural language tasks, such as Natural Language Inference and Sentiment Analysis.
This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition.
534
What benchmark datasets they use?
VQA and GeoQA
Since programming concepts do not match their syntactic representations, code search is a very tedious task. For instance, in Java or C, "array" doesn't match "[]", so using "array" as a query, one cannot find what they are looking for. Developers often have to search code to understand it, to reuse some part of it, or just to read it; without natural language search, developers often have to scroll back and forth or use variable names as their queries. In our work, we have used Stack Overflow (SO) questions and answers to build a mapping of programming concepts to their respective natural language keywords, and then tag these natural language terms to every line of code, which can further be used for searching with natural language keywords.
Crowd Sourced Data Analysis: Mapping of Programming Concepts to Syntactical Patterns. Deepak Thukral (deepak14036@iiitd.ac.in) & Darvesh Punia (darvesh14034@iiitd.ac.in). Keywords: Data Analysis, Stack Overflow, Code Search, Natural Language Processing, Information Retrieval, Entity Discovery, Classification, Topic Modelling.
536
How do they select monotonicity facts?
They derive it from WordNet
Recent works have highlighted the strength of the Transformer architecture on sequence tasks while, at the same time, neural architecture search (NAS) has begun to outperform human-designed models. Our goal is to apply NAS to search for a better alternative to the Transformer. We first construct a large search space inspired by the recent advances in feed-forward sequence models and then run evolutionary architecture search with warm starting by seeding our initial population with the Transformer. To directly search on the computationally expensive WMT 2014 English-German translation task, we develop the Progressive Dynamic Hurdles method, which allows us to dynamically allocate more resources to more promising candidate models. The architecture found in our experiments -- the Evolved Transformer -- demonstrates consistent improvement over the Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French, WMT 2014 English-Czech and LM1B. At a big model size, the Evolved Transformer establishes a new state-of-the-art BLEU score of 29.8 on WMT'14 English-German; at smaller sizes, it achieves the same quality as the original "big" Transformer with 37.6% fewer parameters and outperforms the Transformer by 0.7 BLEU at a mobile-friendly model size of 7M parameters.
Over the past few years, impressive advances have been made in the field of neural architecture search. Reinforcement learning and evolution have both proven their capacity to produce models that exceed the performance of those designed by humans BIBREF0 , BIBREF1 . These advances have mostly focused on improving image models, although some effort has also been invested in searching for sequence models BIBREF2 , BIBREF3 . In these cases, it has always been to find improved recurrent neural networks (RNNs), which were long established as the de facto neural model for sequence problems BIBREF4 , BIBREF5 . However, recent works have shown that there are better alternatives to RNNs for solving sequence problems. Due to the success of convolution-based networks, such as Convolution Seq2Seq BIBREF6 , and full attention networks, such as the Transformer BIBREF7 , feed-forward networks are now a viable option for solving sequence-to-sequence (seq2seq) tasks. The main strength of feed-forward networks is that they are faster, and easier to train than RNNs. The goal of this work is to examine the use of neural architecture search methods to design better feed-forward architectures for seq2seq tasks. Specifically, we apply tournament selection architecture search to evolve from the Transformer, considered to be the state-of-art and widely-used, into a better and more efficient architecture. To achieve this, we construct a search space that reflects the recent advances in feed-forward seq2seq models and develop a method called progressive dynamic hurdles (PDH) that allows us to perform our search directly on the computationally demanding WMT 2014 English-German (En-De) translation task. Our search produces a new architecture – called the Evolved Transformer (ET) – which demonstrates consistent improvement over the original Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French (En-Fr), WMT 2014 English-Czech (En-Cs) and the 1 Billion Word Language Model Benchmark (LM1B). In our experiments with big size models, the Evolved Transformer is twice as efficient as the Transformer in FLOPS without loss of quality. At a much smaller – mobile-friendly – model size of $\sim $ 7M parameters, the Evolved Transformer outperforms the Transformer by 0.7 BLEU.
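At its core, the tournament-selection search used here repeatedly samples a small subset of the population, mutates the fittest member of that subset, evaluates the child, and replaces the weakest individual. The sketch below shows only that generic skeleton; the architecture encoding, the mutation operators, and the progressive dynamic hurdles scheduling are all omitted, and the helper functions are placeholders supplied by the caller.

```python
import random

def evolve(initial_population, fitness_fn, mutate_fn, n_steps=1000, tournament_size=10):
    """Generic tournament-selection evolution; `fitness_fn` and `mutate_fn` are problem-specific."""
    population = [(ind, fitness_fn(ind)) for ind in initial_population]
    for _ in range(n_steps):
        tournament = random.sample(population, min(tournament_size, len(population)))
        parent, _ = max(tournament, key=lambda x: x[1])     # fittest member of the sample
        child = mutate_fn(parent)
        population.append((child, fitness_fn(child)))
        population.remove(min(population, key=lambda x: x[1]))  # drop the weakest individual
    return max(population, key=lambda x: x[1])[0]
```

Seeding `initial_population` with copies of a known-good architecture (here, the Transformer) is what the paper refers to as warm starting.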
540
What are the 12 categories devised?
Economics, Genocide, Geography, History, Human Rights, Kurdish, Kurdology, Philosophy, Physics, Theology, Sociology, Social Study
Recently, the development of neural machine translation (NMT) has significantly improved the translation quality of automatic machine translation. While most sentences are more accurate and fluent than translations by statistical machine translation (SMT)-based systems, in some cases, the NMT system produces translations that have a completely different meaning. This is especially the case when rare words occur. When using statistical machine translation, it has already been shown that significant gains can be achieved by simplifying the input in a preprocessing step. A commonly used example is the pre-reordering approach. In this work, we used phrase-based machine translation to pre-translate the input into the target language. Then a neural machine translation system generates the final hypothesis using the pre-translation. Thereby, we use either only the output of the phrase-based machine translation (PBMT) system or a combination of the PBMT output and the source sentence. We evaluate the technique on the English to German translation task. Using this approach we are able to outperform the PBMT system as well as the baseline neural MT system by up to 2 BLEU points. We analyzed the influence of the quality of the initial system on the final result.
In recent years, statistical machine translation (SMT) systems generated state-of-the-art performance for most language pairs. Recently, systems using neural machine translation (NMT) were able to outperform SMT systems in several evaluations. These models are able to generate more fluent and accurate translations for most sentences. Neural machine translation systems provide output with high fluency. A weakness of NMT systems, however, is that they sometimes lose the original meaning of the source words during translation. One example from the first conference on machine translation (WMT16) test set is the segment in Table TABREF1. The English word goalie is not translated to the correct German word Torwart, but to the German word Gott, which means god. One problem could be that we need to limit the vocabulary size in order to train the model efficiently. We used Byte Pair Encoding (BPE) BIBREF0 to represent the text using a fixed-size vocabulary. In our case the word goalie is split into three parts, go, al and ie. It is then more difficult to carry the meaning over to the translation. In contrast to this, in phrase-based machine translation (PBMT), we do not need to limit the vocabulary and are often able to translate words even if we have seen them only very rarely in training. In the example mentioned before, for instance, the PBMT system had no problems translating the expression correctly. On the other hand, official evaluation campaigns BIBREF1 have shown that NMT systems often create grammatically correct sentences and are able to model morphological agreement in German much better. The goal of this work is to combine the advantages of neural and phrase-based machine translation systems. Handling of rare words is an essential aspect to consider when it comes to real-world applications. The pre-translation framework provides a straightforward way to support such applications. In our approach, we first translate the input using a PBMT system, which can handle the rare words well. In a second step, we generate the final translation using an NMT system. This NMT system is able to generate a more fluent and grammatically correct translation. Since the rare words are already handled by the PBMT system, there should be fewer problems in generating the translations of these words. Using this approach naturally introduces the need to handle potential errors made by the PBMT system. The remainder of the paper is structured as follows: in the next section we review the related work. In Section SECREF3, we briefly review the phrase-based and neural approaches to machine translation. Section SECREF4 introduces the approach presented in this paper to pre-translate the input using a PBMT system. In the following section, we evaluate the approach and analyze the errors. Finally, we finish with a conclusion.
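The two-step decoding scheme proposed here can be summarised as: run the phrase-based system first, then feed its output (optionally concatenated with the original source) to the neural system that produces the final hypothesis. The sketch below only wires these steps together; `pbmt_translate`, `nmt_translate`, and the separator token are placeholders, not the actual Moses-style or encoder-decoder components.

```python
def pbmt_translate(source: str) -> str:
    """Placeholder for the phrase-based MT system (handles rare words well)."""
    raise NotImplementedError

def nmt_translate(nmt_input: str) -> str:
    """Placeholder for the neural MT system (produces the fluent final hypothesis)."""
    raise NotImplementedError

def pre_translate(source: str, use_source: bool = True) -> str:
    """Two-pass translation: PBMT pre-translation, then NMT over the (mixed) input."""
    pre_translation = pbmt_translate(source)
    if use_source:
        # Mixed input: PBMT output plus the original source, joined by an assumed marker token.
        nmt_input = pre_translation + " <SEP> " + source
    else:
        nmt_input = pre_translation
    return nmt_translate(nmt_input)
```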
541
what are the off-the-shelf systems discussed in the paper?
Answer with content missing: (Names of many identifiers missing) TextCat, ChromeCLD, LangDetect, langid.py, whatlang, whatthelang, YALI, LDIG, Polyglot 3000, Lextek Language Identifier and Open Xerox Language Identifier.
Kurdish is a less-resourced language consisting of different dialects written in various scripts. Approximately 30 million people in different countries speak the language. The lack of corpora is one of the main obstacles in Kurdish language processing. In this paper, we present KTC-the Kurdish Textbooks Corpus, which is composed of 31 K-12 textbooks in Sorani dialect. The corpus is normalized and categorized into 12 educational subjects containing 693,800 tokens (110,297 types). Our resource is publicly available for non-commercial use under the CC BY-NC-SA 4.0 license.
Kurdish is an Indo-European language mainly spoken in central and eastern Turkey, northern Iraq and Syria, and western Iran. It is a less-resourced language BIBREF0, in other words, a language for which general-purpose grammars and raw internet-based corpora are the main existing resources. The language is spoken in five main dialects, namely, Kurmanji (aka Northern Kurdish), Sorani (aka Central Kurdish), Southern Kurdish, Zazaki and Gorani BIBREF1. Creating lexical databases and text corpora are essential tasks in natural language processing (NLP) development. Text corpora are knowledge repositories which provide semantic descriptions of words. The Kurdish language lacks diverse corpora in both raw and annotated forms BIBREF2, BIBREF3. According to the literature, there is no domain-specific corpus for Kurdish. In this paper, we present KTC, a domain-specific corpus containing K-12 textbooks in Sorani. We consider a domain as a set of related concepts, and a domain-specific corpus as a collection of documents relevant to those concepts BIBREF4. Accordingly, we introduce KTC as a domain-specific corpus because it is based on the textbooks which have been written and compiled by a group of experts, appointed by the Ministry of Education (MoE) of the Kurdistan Region of Iraq, for educational purposes at the K-12 level. The textbooks are selected, written, compiled, and edited by experts in each subject and also by language editors based on a unified grammar and orthography. This corpus was initially collected as an accurate source for developing a Sorani Kurdish spellchecker for scientific writing. KTC contains a range of subjects, and its content is categorized according to those subjects. Given the accuracy of the text from scientific, grammatical, and orthographic points of view, we believe that it is also a fine-grained resource. The corpus will contribute to various NLP tasks in Kurdish, particularly in language modeling and grammatical error correction. In the rest of this paper, Section SECREF2 reviews the related work, Section SECREF3 presents the corpus, Section SECREF4 addresses the challenges in the project and, Section SECREF5 concludes the paper.
545
How many rules had to be defined?
WikiSQL - 2 rules (SELECT, WHERE); SimpleQuestions - 1 rule; SequentialQA - 3 rules (SELECT, WHERE, COPY)
We present a system for keyword spotting that, except for a frontend component for feature generation, is entirely contained in a deep neural network (DNN) model trained "end-to-end" to predict the presence of the keyword in a stream of audio. The main contributions of this work are, first, an efficient memoized neural network topology that aims at making better use of the parameters and associated computations in the DNN by holding a memory of previous activations distributed over the depth of the DNN. The second contribution is a method to train the DNN, end-to-end, to produce the keyword spotting score. This system significantly outperforms previous approaches both in terms of quality of detection as well as size and computation.
Keyword detection is like searching for a needle in a haystack: the detector must listen to continuously streaming audio, ignoring nearly all of it, yet still triggering correctly and instantly. In the last few years, with the advent of voice assistants, keyword spotting has become a common way to initiate a conversation with them (e.g. "Ok Google", "Alexa", or "Hey Siri"). As the assistant use cases spread through a variety of devices, from mobile phones to home appliances and further into the internet-of-things (IoT), many of them battery-powered or with restricted computational capacity, it is important for the keyword spotting system to be both high-quality as well as computationally efficient. Neural networks are core to the state-of-the-art keyword spotting systems. These solutions, however, are not developed as a single deep neural network (DNN). Instead, they are traditionally comprised of different subsystems, independently trained, and/or manually designed. For example, a typical system is composed of three main components: 1) a signal processing frontend, 2) an acoustic encoder, and 3) a separate decoder. Of those components, it is the last two that make use of DNNs along with a wide variety of decoding implementations. They range from traditional approaches that make use of a Hidden Markov Model (HMM) to characterize acoustic features from a DNN into both "keyword" and "background" (i.e. non-keyword speech and noise) classes BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Simpler derivatives of that approach perform a temporal integration computation that verifies that the outputs of the acoustic model are high in the right sequence for the target keyword in order to produce a single detection likelihood score BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 . Other recent systems make use of CTC-trained DNNs, typically recurrent neural networks (RNNs) BIBREF10 , or even sequence-to-sequence trained models that rely on beam search decoding BIBREF11 . This last family of systems is the closest to being considered end-to-end; however, they are generally too computationally complex for many embedded applications. Optimizing independent components, however, creates added complexities and is suboptimal in quality compared to doing it jointly. Deployment also suffers due to the extra complexity, making it harder to optimize resources (e.g. processing power and memory consumption). The system described in this paper addresses those concerns by learning both the encoder and decoder components in a single deep neural network, jointly optimized to directly produce the detection likelihood score. This system could be trained to subsume the signal processing frontend as well, as in BIBREF2 , BIBREF12 , but it is computationally costlier to replace highly optimized fast Fourier transform implementations with a neural network of equivalent quality. However, it is something we consider exploring in the future. Overall, we find this system provides state-of-the-art quality across a number of audio and speech conditions compared to a traditional, non end-to-end baseline system described in BIBREF13 . Moreover, the proposed system significantly reduces the resource requirements for deployment by cutting computation and size over five times compared to the baseline system. The rest of the paper is organized as follows.
In Section SECREF2 we present the architecture of the keyword spotting system, in particular the two main contributions of this work: the neural network topology and the end-to-end training methodology. Next, in Section SECREF3 we describe the experimental setup, and in Section SECREF4 we present the results of our evaluations, where we compare against the baseline approach of BIBREF13 . Finally, we conclude with a discussion of our findings in Section SECREF5 .
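To make the "temporal integration" family of decoders mentioned above concrete, here is a minimal sketch (not the end-to-end system proposed in this paper) that turns per-frame acoustic-model posteriors into a single detection likelihood score by smoothing and combining the best per-unit posteriors; the window size, the geometric-mean combination, and the omission of an explicit ordering check are simplifying assumptions.

import numpy as np

def detection_score(posteriors, keyword_units, smooth=5):
    # posteriors: [T, U] per-frame probabilities from an acoustic model
    # keyword_units: indices of the units that make up the keyword
    T, _ = posteriors.shape
    smoothed = np.empty_like(posteriors)
    for t in range(T):
        lo = max(0, t - smooth + 1)
        smoothed[t] = posteriors[lo:t + 1].mean(axis=0)  # moving-average smoothing
    # geometric mean of the best smoothed posterior for each keyword unit
    best = [smoothed[:, u].max() for u in keyword_units]
    return float(np.prod(best) ** (1.0 / len(best)))

rng = np.random.default_rng(0)
frames = rng.random((100, 4))           # hypothetical 4-unit acoustic model output
print(detection_score(frames, [1, 2]))  # score for a keyword spelled by units 1 then 2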
546
What was the performance of classifiers before/after using distant supervision?
Bi-LSTM: for low-resource settings (<17k clean examples), distant supervision gave a large boost in F1 score (with 1k examples, from ~9 to ~36 with distant supervision). BERT: with <5k clean examples, distant supervision boosted F1 (with 1k examples, from ~32 to ~47).
Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data. Our goal is to learn a neural semantic parser when only prior knowledge about a limited number of simple rules is available, without access to either annotated programs or execution results. Our approach is initialized by rules, and improved in a back-translation paradigm using generated question-program pairs from the semantic parser and the question generator. A phrase table with frequent mapping patterns is automatically derived, also updated as training progresses, to measure the quality of generated instances. We train the model with model-agnostic meta-learning to guarantee the accuracy and stability on examples covered by rules, and meanwhile acquire the versatility to generalize well on examples uncovered by rules. Results on three benchmark datasets with different domains and programs show that our approach incrementally improves the accuracy. On WikiSQL, our best model is comparable to the SOTA system learned from denotations.
Semantic parsing aims to map natural language questions to the logical forms of their underlying meanings, which can be regarded as programs and executed to yield answers, aka denotations BIBREF0 . In the past few years, neural network based semantic parsers have achieved promising performance BIBREF1 ; however, their success is limited to the setting with rich supervision, which is costly to obtain. There have been recent attempts at low-resource semantic parsing, including data augmentation methods which are learned from a small number of annotated examples BIBREF2 , and methods for adapting to unseen domains while only being trained on annotated examples in other domains. This work investigates neural semantic parsing in a low-resource setting, in which we only have prior knowledge about a limited number of simple mapping rules, including a small amount of domain-independent word-level matching tables if necessary, but have no access to either annotated programs or execution results. Our key idea is to use these rules to collect modest question-program pairs as the starting point, and then leverage automatically generated examples to improve the accuracy and generality of the model. This presents three challenges: how to generate examples in an efficient way, how to measure the quality of generated examples which might contain errors and noise, and how to train a semantic parser that makes robust predictions for examples covered by rules and generalizes well to uncovered examples. We address the aforementioned challenges with a framework consisting of three key components. The first component is a data generator. It includes a neural semantic parsing model, which maps a natural language question to a program, and a neural question generation model, which maps a program to a natural language question. We learn these two models in a back-translation paradigm using pseudo parallel examples, inspired by its success in unsupervised neural machine translation BIBREF3 , BIBREF4 . The second component is a quality controller, which is used for filtering out noise and errors contained in the pseudo data. We construct a phrase table with frequent mapping patterns, so that noise and errors with low frequency can be filtered out. A similar idea has been used as posterior regularization in neural machine translation BIBREF5 , BIBREF6 . The third component is a meta learner. Instead of transferring a model pretrained with examples covered by rules to the generated examples, we leverage model-agnostic meta-learning BIBREF7 , an elegant meta-learning algorithm which has been successfully applied to a wide range of tasks including few-shot learning and adaptive control. We regard different data sources as different tasks, and use outputs of the quality controller for stable training. We test our approach on three tasks with different programs, including SQL (and SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and subject-predicate pairs over a large-scale knowledge graph BIBREF10 . The programs for SQL queries over single-turn questions and for subject-predicate pairs over the knowledge graph are simple, while the programs for SQL queries over multi-turn questions have top-tier complexity among currently proposed tasks. Results show that our approach yields large improvements over rule-based systems, and incorporating different strategies incrementally improves the overall performance.
On WikiSQL, our best-performing system achieves an execution accuracy of 72.7%, comparable to a strong system learned from denotations BIBREF11 with an accuracy of 74.8%.
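A minimal sketch of the quality-controller idea described above (frequent mapping patterns used to filter noisy generated pairs); the word-level phrase table, the frequency threshold, and the coverage score below are illustrative assumptions, not the paper's exact construction.

from collections import Counter
from itertools import product

def build_phrase_table(pairs, min_count=2):
    # count (question word, program token) co-occurrences over generated pairs
    counts = Counter()
    for question, program in pairs:
        counts.update(product(question.lower().split(), program.split()))
    return {pair for pair, c in counts.items() if c >= min_count}

def coverage(question, program, table):
    # fraction of program tokens supported by at least one frequent mapping
    q_words = question.lower().split()
    tokens = program.split()
    hit = sum(any((w, t) in table for w in q_words) for t in tokens)
    return hit / max(len(tokens), 1)

generated = [
    ("how many people live in berlin", "SELECT population WHERE city = berlin"),
    ("how many people live in paris", "SELECT population WHERE city = paris"),
    ("how many people live in rome", "SELECT population WHERE city = rome"),
]
table = build_phrase_table(generated)
print(coverage("how many people live in oslo",
               "SELECT population WHERE city = oslo", table))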
548
How big are the datasets used?
Evaluation datasets used: CMRC 2018 - 18939 questions, 10 answers; DRCD - 33953 questions, 5 answers; NIST MT02/03/04/05/06/08 Chinese-English - not specified. Source language train data: SQuAD - not specified.
BACKGROUND: We developed a system to automatically classify stance towards vaccination in Twitter messages, with a focus on messages with a negative stance. Such a system makes it possible to monitor the ongoing stream of messages on social media, offering actionable insights into public hesitance with respect to vaccination. At the moment, such monitoring is done by means of regular sentiment analysis with a poor performance on detecting negative stance towards vaccination. For Dutch Twitter messages that mention vaccination-related key terms, we annotated their stance and feeling in relation to vaccination (provided that they referred to this topic). Subsequently, we used these coded data to train and test different machine learning set-ups. With the aim to best identify messages with a negative stance towards vaccination, we compared set-ups at an increasing dataset size and decreasing reliability, at an increasing number of categories to distinguish, and with different classification algorithms. RESULTS: We found that Support Vector Machines trained on a combination of strictly and laxly labeled data with a more fine-grained labeling yielded the best result, at an F1-score of 0.36 and an Area under the ROC curve of 0.66, considerably outperforming the currently used sentiment analysis that yielded an F1-score of 0.25 and an Area under the ROC curve of 0.57. We also show that the recall of our system could be optimized to 0.60 at little loss of precision. CONCLUSION: The outcomes of our study indicate that stance prediction by a computerized system only is a challenging task. Nonetheless, the model showed sufficient recall on identifying negative tweets so as to reduce the manual effort of reviewing messages. Our analysis of the data and behavior of our system suggests that an approach is needed in which the use of a larger training dataset is combined with a setting in which a human-in-the-loop provides the system with feedback on its predictions.
In the light of increased vaccine hesitance in various countries, consistent monitoring of public beliefs and opinions about the national immunization program is important. Besides performing qualitative research and surveys, real-time monitoring of social media data about vaccination is a valuable tool to this end. The advantage is that one is able to detect and respond to possible vaccine concerns in a timely manner, that it generates continuous data and that it consists of unsolicited, voluntary user-generated content. Several studies that analyse tweets have already been conducted, providing insight into the content that was tweeted most during the 2009 H1N1 outbreak BIBREF0, the information flow between users with a certain sentiment during this outbreak BIBREF1, or trends in tweets that convey, for example, the worries on efficacy of HPV vaccines BIBREF2, BIBREF3. While human coders are best at deploying world knowledge and interpreting the intention behind a text, manual coding of tweets is laborious. The above-mentioned studies therefore aimed at developing and evaluating a system to code tweets automatically. There are several systems in place that make use of this automatic coding. The Vaccine Confidence Project BIBREF4 is a real-time worldwide internet monitor for vaccine concerns. The Europe Media Monitor (EMM) BIBREF5 was installed to support EU institutions and Member State organizations with, for example, the analysis of real-time news for medical and health-related topics and with early warning alerts per category and country. MEDISYS, derived from the EMM and developed by the Joint Research Center of the European Commission BIBREF6, is a media monitoring system providing event-based surveillance to rapidly identify potential public health threats based on information from media reports. These systems cannot be used directly for the Netherlands because they do not contain search words in Dutch, are missing an opinion-detection functionality, or do not include categories of the proper specificity. Furthermore, opinions towards vaccination are contextualized by national debates rather than a multinational debate BIBREF7, which implies that a system for monitoring vaccination stance on Twitter should ideally be trained and applied to tweets with a similar language and nationality. Finally, by creating an automatic system for mining public opinions on vaccination concerns, one can continue training and adapting the system. We therefore believe it will be valuable to build our own system. Besides analysing the content of tweets, several other applications that use social media with regard to vaccination have been proposed. They, for example, use data about internet search activity and numbers of tweets as a proxy for (changes in) vaccination coverage or for estimating epidemiological patterns. Huang et al. BIBREF8 found a high positive correlation between reported influenza attitude and behavior on Twitter and influenza vaccination coverage in the US. In contrast, Aquino et al. BIBREF9 found an inverse correlation between Mumps, Measles, Rubella (MMR) vaccination coverage and tweets, Facebook posts and internet search activity about autism and MMR vaccine in Italy. This outcome was possibly due to a decision of the Court of Justice in one of the regions to award vaccine-injury compensation for a case of autism.
Wagner, Lampos, Cox and Pebody BIBREF10 assessed the usefulness of geolocated Twitter posts and Google search as source data to model influenza rates, by measuring their fit to the traditional surveillance outcomes and analyzing the data quality. They find that Google search could be a useful alternative to the regular means of surveillance, while Twitter posts are not correlating well due to a lower volume and bias in demographics. Lampos, de Bie and Cristianini BIBREF11 also make use of geolocated Twitter posts to track academics, and present a monitoring tool with a daily flu-score based on weighted keywords. Various studies BIBREF12, BIBREF13, BIBREF14 show that estimates of influenza-like illness symptoms mentioned on Twitter can be exploited to track reported disease levels relatively accurately. However, other studies BIBREF15, BIBREF16 showed that this was only the case when looking at severe cases (e.g. hospitalizations, deaths) or only for the start of the epidemic when interest from journalists was still high. Other research focuses on detecting discussion communities on vaccination in Twitter BIBREF17 or analysing semantic networks BIBREF18 to identify the most relevant and influential users as well as to better understand complex drivers of vaccine hesitancy for public health communication. Tangherlini et al. BIBREF19 explore what can be learned about the vaccination discussion from the realm of “mommy blogs”: parents posting messages about children’s health care on forum websites. They aim to obtain insights into the underlying narrative frameworks, and analyse the topics of the messages using Latent Dirichlet Allocation (LDA) BIBREF20. They find that the most prominent frame is a focus on the exemption of one’s child from receiving a vaccination in school. The motivation against vaccination is most prominently based on personal belief about health, but could also be grounded in religion. Surian et al. BIBREF21 also apply topic modeling to distinguish dominant opinions in the discussion about vaccination, and focus on HPV vaccination as discussed on Twitter. They find a common distinction between tweets reporting on personal experience and tweets that they characterize as `evidence’ (statements of having had a vaccination) and `advocacy’ (statements that support vaccination). Most similar to our work is the study by Du, Xu, Song, Liu and Tao BIBREF2. With the ultimate aim of improving vaccine uptake, they applied supervised machine learning to analyse the stance towards vaccination as conveyed on social media. Messages were labeled as either related to vaccination or unrelated, and, when related, as ‘positive’, ‘negative’ or ‘neutral’. The ‘negative’ category was further broken down into several considerations, such as ‘safety’ and ‘cost’. After having annotated 6,000 tweets, they trained a classifier on different combinations of features, obtaining the highest macro F1-score (the average of the separate F1-scores for each prediction category) of $0.50$ and micro F1-score (the F1-score over all predictions) of $0.73$. Tweets with a negative stance that point to safety risks could best be predicted, at an optimal F1 score of $0.75$, while the other five sub-categories with a negative stance were predicted at an F1 score below $0.5$ or even $0.0$. Like Du et al. BIBREF2, we focus on analysing sentiment about vaccination using Twitter as a data source and applying supervised machine learning approaches to extract public opinion from tweets automatically.
In contrast, in our evaluation we focus on detecting messages with a negative stance in particular. Accurately monitoring such messages helps to recognize discord at an early stage and take appropriate action. We do train machine learning classifiers to model categories other than the negative stance, evaluating whether this is beneficial to detecting tweets with a negative stance. For example, we study whether it is beneficial to this task to model tweets with a positive and neutral stance as well. We also inquire whether a more fine-grained categorization of sentiment (e.g., worry, relief, frustration and informing) offers an advantage. Apart from comparing performance in the context of different categorizations, we compare different machine learning algorithms and compare data with different levels of annotation reliability. Finally, the performance of the resulting systems is compared to regular sentiment analysis common to social media monitoring dashboards. At the public health institute in the Netherlands, we make use of social media monitoring tools offered by Coosto. For defining whether a message is positive, negative or neutral with regard to vaccination, this system makes use of the presence or absence of positive or negative words in the messages. We believe that we could increase the sensitivity and specificity of the sentiment analysis by using supervised machine learning approaches trained on a manually coded dataset. The performance of our machine learning approaches is therefore compared to the sentiment analysis that is currently applied in the Coosto tool.
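For readers who want a concrete starting point, the following scikit-learn sketch shows the general shape of an SVM stance classifier over labeled tweets; the example tweets, labels, features, and hyperparameters are hypothetical and do not reproduce the study's exact set-ups.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# hypothetical, tiny stand-in for the annotated Dutch tweets
tweets = ["vaccins zijn veilig en belangrijk",
          "ik vertrouw dit vaccin echt niet",
          "morgen krijgt mijn dochter haar prik",
          "weer een hoax over bijwerkingen"]
labels = ["positive", "negative", "neutral", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 3), analyzer="word"),
                      LinearSVC(class_weight="balanced"))
model.fit(tweets, labels)
print(model.predict(["dit vaccin is gevaarlijk"]))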
549
How better is gCAS approach compared to other approaches?
For entity F1 in the movie, taxi and restaurant domains, it achieves scores of 50.86, 64, and 60.35. For success, it outperforms the other approaches in the movie and restaurant domains with scores of 77.95 and 71.52.
Though the community has made great progress on the Machine Reading Comprehension (MRC) task, most of the previous works solve English-based MRC problems, and there are few efforts on other languages, mainly due to the lack of large-scale training data. In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task for languages other than English. Firstly, we present several back-translation approaches for the CLMRC task, which are straightforward to adopt. However, accurately aligning the answer into another language is difficult and could introduce additional noise. In this context, we propose a novel model called Dual BERT, which takes advantage of the large-scale training data provided by a rich-resource language (such as English), learns the semantic relations between the passage and question in a bilingual context, and then utilizes the learned knowledge to improve reading comprehension performance in the low-resource language. We conduct experiments on two Chinese machine reading comprehension datasets, CMRC 2018 and DRCD. The results show consistent and significant improvements over various state-of-the-art systems by a large margin, which demonstrates the potential of the CLMRC task. Resources available: this https URL
Machine Reading Comprehension (MRC) has been a popular task for testing the reading ability of machines, which requires them to read text material and answer questions based on it. Starting from cloze-style reading comprehension, various neural network approaches have been proposed and massive progress has been made in creating large-scale datasets and neural models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Though various types of contributions have been made, most works deal with English reading comprehension. Reading comprehension in languages other than English has not been well addressed, mainly due to the lack of large-scale training data. To enrich the training data, there are two traditional approaches. Firstly, we can annotate data by human experts, which is ideal and high-quality, but time-consuming and rather expensive. One can also obtain large-scale automatically generated data BIBREF0, BIBREF1, BIBREF6, but the quality is far below the usable threshold. Another way is to exploit cross-lingual approaches to utilize the data in a rich-resource language to implicitly learn the relations between $<$passage, question, answer$>$. In this paper, we propose the Cross-Lingual Machine Reading Comprehension (CLMRC) task that aims to help reading comprehension in low-resource languages. First, we present several back-translation approaches for when there are no or only partially available resources in the target language. Then we propose a novel model called Dual BERT to further improve the system performance when there is training data available in the target language. We first translate target language training data into English to form pseudo bilingual parallel data. Then we use multilingual BERT BIBREF7 to simultaneously model the $<$passage, question, answer$>$ in both languages, and fuse the representations of both to generate final predictions. Experimental results on two Chinese reading comprehension datasets, CMRC 2018 BIBREF8 and DRCD BIBREF9, show that utilizing English resources can substantially improve system performance, and the proposed Dual BERT achieves state-of-the-art performance on both datasets, even surpassing human performance on some metrics. Also, we conduct experiments on the Japanese and French SQuAD BIBREF10 and achieve substantial improvements. Moreover, detailed ablations and analysis are carried out to demonstrate the effectiveness of exploiting knowledge from the rich-resource language. To the best of our knowledge, this is the first time that cross-lingual approaches have been applied and evaluated on realistic reading comprehension data. The main contributions of our paper can be concluded as follows. We present several back-translation based reading comprehension approaches and yield state-of-the-art performance on several reading comprehension datasets, including Chinese, Japanese, and French. We propose a model called Dual BERT to simultaneously model the $<$passage, question$>$ in both source and target language to enrich the text representations. Experimental results on two public Chinese reading comprehension datasets show that the proposed cross-lingual approaches yield significant improvements over various baseline systems and set new state-of-the-art performance.
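A minimal sketch of the bilingual-encoding idea behind Dual BERT: encode the target-language pair and its English counterpart with multilingual BERT and fuse the two representations. The attention-based fusion and the absence of trained span classifiers below are simplifications, and the shapes and scoring are assumptions for illustration only.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = AutoModel.from_pretrained("bert-base-multilingual-cased")

def encode(question, passage):
    inputs = tok(question, passage, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return enc(**inputs).last_hidden_state  # [1, seq_len, hidden]

# target-language example and its (machine) translation into the source language
zh = encode("法国的首都是哪里？", "巴黎是法国的首都。")
en = encode("What is the capital of France?", "Paris is the capital of France.")

# a crude "fusion": let target-language tokens attend to the source-language
# representation (the real Dual BERT uses trained attention and span classifiers)
attn = torch.softmax(zh @ en.transpose(1, 2) / en.shape[-1] ** 0.5, dim=-1)
fused = torch.cat([zh, attn @ en], dim=-1)  # [1, len_zh, 2 * hidden]
print(fused.shape)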
550
What is the source of external knowledge?
counts of predicate-argument tuples from English Wikipedia
Dialogue management (DM) plays a key role in the quality of the interaction with the user in a task-oriented dialogue system. In most existing approaches, the agent predicts only one DM policy action per turn. This significantly limits the expressive power of the conversational agent and introduces unwanted turns of interactions that may challenge users' patience. Longer conversations also lead to more errors and the system needs to be more robust to handle them. In this paper, we compare the performance of several models on the task of predicting multiple acts for each turn. A novel policy model is proposed based on a recurrent cell called gated Continue-Act-Slots (gCAS) that overcomes the limitations of the existing models. Experimental results show that gCAS outperforms other approaches. The code is available at this https URL
In a task-oriented dialogue system, the dialogue manager policy module predicts actions usually in terms of dialogue acts and domain-specific slots. It is a crucial component that influences the efficiency (e.g., the conciseness and smoothness) of the communication between the user and the agent. Both supervised learning (SL) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 and reinforcement learning (RL) approaches BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9 have been adopted to learn policies. SL learns a policy to predict acts given the dialogue state. Recent work BIBREF10, BIBREF11 also used SL as pre-training for RL to mitigate the sample inefficiency of RL approaches and to reduce the number of interactions. Sequence2Sequence (Seq2Seq) BIBREF12 approaches have also been adopted in user simulators to produce user acts BIBREF13. These approaches typically assume that the agent can only produce one act per turn through classification. Generating only one act per turn significantly limits what an agent can do in a turn and leads to lengthy dialogues, making tracking of state and context throughout the dialogue harder. An example in Table TABREF3 shows how the agent can produce both an inform and a multiple_choice act, reducing the need for additional turns. Multiple actions have previously been used in interaction managers that keep track of the floor (who is speaking right now) BIBREF14, BIBREF15, BIBREF16, but the option of generating multiple acts simultaneously at each turn for dialogue policy has been largely ignored, and only explored in simulated scenarios without real data BIBREF17. This task can be cast as a multi-label classification problem (if the sequential dependency among the acts is ignored) or as a sequence generation one, as shown in Table TABREF4. In this paper, we introduce a novel policy model to output multiple actions per turn (called multi-act), generating a sequence of tuples and expanding agents' expressive power. Each tuple is defined as $(\textit {continue}, \textit {act}, \textit {slots})$, where continue indicates whether to continue or stop producing new acts, act is an act type (e.g., inform or request), and slots is a set of slots (names) associated with the current act type. Correspondingly, a novel decoder (Figure FIGREF5) is proposed to produce such sequences. Each tuple is generated by a cell called gated Continue Act Slots (gCAS, as in Figure FIGREF7), which is composed of three sequentially connected gated units handling the three components of the tuple. This decoder can generate multi-acts in a double recurrent manner BIBREF18. We compare this model with baseline classifiers and sequence generation models and show that it consistently outperforms them.
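A rough sketch of a gCAS-style cell in PyTorch, with three sequentially connected gated units emitting the (continue, act, slots) tuple; the use of GRU cells, the dimensions, and the exact wiring are illustrative assumptions rather than the paper's precise gating equations.

import torch
import torch.nn as nn

class GatedCAS(nn.Module):
    # three chained gated units: continue -> act -> slots
    def __init__(self, input_size, hidden_size, n_acts, n_slots):
        super().__init__()
        self.cont_cell = nn.GRUCell(input_size, hidden_size)
        self.act_cell = nn.GRUCell(input_size + 1, hidden_size)
        self.slot_cell = nn.GRUCell(input_size + n_acts, hidden_size)
        self.cont_out = nn.Linear(hidden_size, 1)
        self.act_out = nn.Linear(hidden_size, n_acts)
        self.slot_out = nn.Linear(hidden_size, n_slots)

    def forward(self, x, h):
        h_c = self.cont_cell(x, h)
        p_cont = torch.sigmoid(self.cont_out(h_c))               # continue / stop
        h_a = self.act_cell(torch.cat([x, p_cont], dim=-1), h_c)
        p_act = torch.softmax(self.act_out(h_a), dim=-1)         # act type
        h_s = self.slot_cell(torch.cat([x, p_act], dim=-1), h_a)
        p_slots = torch.sigmoid(self.slot_out(h_s))              # multi-label slots
        return p_cont, p_act, p_slots, h_s

cell = GatedCAS(input_size=16, hidden_size=32, n_acts=5, n_slots=10)
x, h = torch.zeros(1, 16), torch.zeros(1, 32)
cont, act, slots, h = cell(x, h)
print(cont.shape, act.shape, slots.shape)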
552
What were the sizes of the test sets?
Test set 1 contained 57 drug labels and 8208 sentences, and test set 2 contained 66 drug labels and 4224 sentences
Constituting highly informative network embeddings is an important tool for network analysis. It encodes network topology, along with other useful side information, into low-dimensional node-based feature representations that can be exploited by statistical modeling. This work focuses on learning context-aware network embeddings augmented with text data. We reformulate the network-embedding problem, and present two novel strategies to improve over traditional attention mechanisms: ($i$) a content-aware sparse attention module based on optimal transport, and ($ii$) a high-level attention parsing module. Our approach yields naturally sparse and self-normalized relational inference. It can capture long-term interactions between sequences, thus addressing the challenges faced by existing textual network embedding schemes. Extensive experiments are conducted to demonstrate our model can consistently outperform alternative state-of-the-art methods.
When performing network embedding, one maps network nodes into vector representations that reside in a low-dimensional latent space. Such techniques seek to encode topological information of the network into the embedding, such as affinity BIBREF0 , local interactions (e.g, local neighborhoods) BIBREF1 , and high-level properties such as community structure BIBREF2 . Relative to classical network-representation learning schemes BIBREF3 , network embeddings provide a more fine-grained representation that can be easily repurposed for other downstream applications (e.g., node classification, link prediction, content recommendation and anomaly detection). For real-world networks, one naturally may have access to rich side information about each node. Of particular interest are textual networks, where the side information comes in the form of natural language sequences BIBREF4 . For example, user profiles or their online posts on social networks (e.g., Facebook, Twitter), and documents in citation networks (e.g., Cora, arXiv). The integration of text information promises to significantly improve embeddings derived solely from the noisy, sparse edge representations BIBREF5 . Recent work has started to explore the joint embedding of network nodes and the associated text for abstracting more informative representations. BIBREF5 reformulated DeepWalk embedding as a matrix factorization problem, and fused text-embedding into the solution, while BIBREF6 augmented the network with documents as auxiliary nodes. Apart from direct embedding of the text content, one can first model the topics of the associated text BIBREF7 and then supply the predicted labels to facilitate embedding BIBREF8 . Many important downstream applications of network embeddings are context-dependent, since a static vector representation of the nodes adapts to the changing context less effectively BIBREF9 . For example, the interactions between social network users are context-dependent (e.g., family, work, interests), and contextualized user profiling can promote the specificity of recommendation systems. This motivates context-aware embedding techniques, such as CANE BIBREF9 , where the vector embedding dynamically depends on the context. For textual networks, the associated texts are natural candidates for context. CANE introduced a simple mutual attention weighting mechanism to derive context-aware dynamic embeddings for link prediction. Following the CANE setup, WANE BIBREF10 further improved the contextualized embedding, by considering fine-grained text alignment. Despite the promising results reported thus far, we identify three major limitations of existing context-aware network embedding solutions. First, mutual (or cross) attentions are computed from pairwise similarities between local text embeddings (word/phrase matching), whereas global sequence-level modeling is known to be more favorable across a wide range of NLP tasks BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Second, related to the above point, low-level affinity scores are directly used as mutual attention without considering any high-level parsing. Such an over-simplified operation denies desirable features, such as noise suppression and relational inference BIBREF15 , thereby compromising model performance. 
Third, mutual attention based on common similarity measures (e.g., cosine similarity) typically yields dense attention matrices, while psychological and computational evidence suggests a sparse attention mechanism functions more effectively BIBREF16 , BIBREF17 . Thus such naive similarity-based approaches can be suboptimal, since they are more likely to incorporate irrelevant word/phrase matching. This work represents an attempt to improve context-aware textual network embedding, by addressing the above issues. Our contributions include: ( INLINEFORM0 ) We present a principled and more-general formulation of the network embedding problem, under reproducing kernel Hilbert spaces (RKHS) learning; this formulation clarifies aspects of the existing literature and provides a flexible framework for future extensions. ( INLINEFORM1 ) A novel global sequence-level matching scheme is proposed, based on optimal transport, which matches key concepts between text sequences in a sparse attentive manner. ( INLINEFORM2 ) We develop a high-level attention-parsing mechanism that operates on top of low-level attention, which is capable of capturing long-term interactions and allows relational inference for better contextualization. We term our model Global Attention Network Embedding (GANE). To validate the effectiveness of GANE, we benchmarked our models against state-of-the-art counterparts on multiple datasets. Our models consistently outperform competing methods.
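To illustrate the optimal-transport view of sparse mutual attention, the sketch below runs entropy-regularized Sinkhorn iterations between two sets of word embeddings and uses the transport plan as an attention matrix; the cosine cost, regularization strength, and iteration count are assumptions, and GANE's full module adds the high-level attention parsing on top of such low-level scores.

import numpy as np

def sinkhorn_attention(X, Y, reg=0.1, n_iter=50):
    # entropy-regularized optimal transport between two token sets;
    # the resulting plan is doubly normalized and tends to be sparse
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    cost = 1.0 - Xn @ Yn.T                 # cosine distance as transport cost
    K = np.exp(-cost / reg)
    a = np.full(X.shape[0], 1.0 / X.shape[0])
    b = np.full(Y.shape[0], 1.0 / Y.shape[0])
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return np.diag(u) @ K @ np.diag(v)     # transport plan, rows/cols sum to a, b

rng = np.random.default_rng(0)
plan = sinkhorn_attention(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
print(plan.round(3))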
555
Which datasets are used?
ABSA SemEval 2014-2016 datasets, Yelp Academic Dataset, Wikipedia dumps
Text adventure games, in which players must make sense of the world through text descriptions and declare actions through natural language, provide a stepping stone toward grounding action in language. Prior work has demonstrated that using a knowledge graph as a state representation and question-answering to pre-train a deep Q-network facilitates faster control policy transfer. In this paper, we explore the use of knowledge graphs as a representation for domain knowledge transfer for training text-adventure playing reinforcement learning agents. Our methods are tested across multiple computer generated and human authored games, varying in domain and complexity, and demonstrate that our transfer learning methods let us learn a higher-quality control policy faster.
Text adventure games, in which players must make sense of the world through text descriptions and declare actions through natural language, can provide a stepping stone toward more real-world environments where agents must communicate to understand the state of the world and affect change in the world. Despite the steadily increasing body of research on text-adventure games BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, and in addition to the ubiquity of deep reinforcement learning applications BIBREF8, BIBREF9, teaching an agent to play text-adventure games remains a challenging task. Learning a control policy for a text-adventure game requires a significant amount of exploration, resulting in training runs that take hundreds of thousands of simulations BIBREF2, BIBREF7. One reason that text-adventure games require so much exploration is that most deep reinforcement learning algorithms are trained on a task without a real prior. In essence, the agent must learn everything about the game from only its interactions with the environment. Yet, text-adventure games make ample use of commonsense knowledge (e.g., an axe can be used to cut wood) and genre themes (e.g., in a horror or fantasy game, a coffin is likely to contain a vampire or other undead monster). This is in addition to the challenges innate to the text-adventure game itself—games are puzzles—which results in inefficient training. BIBREF7 developed a reinforcement learning agent that modeled the text environment as a knowledge graph and achieved state-of-the-art results on simple text-adventure games provided by the TextWorld BIBREF5 environment. They observed that a simple form of transfer from very similar games greatly improved policy training time. However, games beyond the toy TextWorld environments are beyond the reach of state-of-the-art techniques. In this paper, we explore the use of knowledge graphs and associated neural embeddings as a medium for domain transfer to improve training effectiveness on new text-adventure games. Specifically, we explore transfer learning at multiple levels and across different dimensions. We first look at the effects of playing a text-adventure game given a strong prior in the form of a knowledge graph extracted from generalized textual walk-throughs of interactive fiction as well as those made specifically for a given game. Next, we explore the transfer of control policies in deep Q-learning (DQN) by pre-training portions of a deep Q-network using question-answering and by DQN-to-DQN parameter transfer between games. We evaluate these techniques on two different sets of human authored and computer generated games, demonstrating that our transfer learning methods enable us to learn a higher-quality control policy faster.
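A minimal sketch of a knowledge-graph state representation for a text-adventure agent: the game state is kept as a set of (subject, relation, object) triples updated from text observations. The two regular-expression rules are toy stand-ins for the OpenIE-style extraction used in prior work, and the observation text is invented for illustration.

import re

class KnowledgeGraphState:
    def __init__(self):
        self.triples = set()
        self.room = "unknown"

    def update(self, observation):
        # rule 1: current location
        m = re.search(r"You are in the (\w+)", observation)
        if m:
            self.room = m.group(1)
            self.triples.add(("you", "in", self.room))
        # rule 2: objects visible in the current room
        for item in re.findall(r"There is an? (\w+)", observation):
            self.triples.add((self.room, "has", item))

state = KnowledgeGraphState()
state.update("You are in the kitchen. There is an axe. There is a table.")
print(sorted(state.triples))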
557
What models are included in baseline benchmarking results?
BERT, XLNet, RoBERTa, ALBERT, DistilBERT
Recently, there has been interest in multiplicative recurrent neural networks for language modeling. Indeed, simple Recurrent Neural Networks (RNNs) encounter difficulties recovering from past mistakes when generating sequences due to high correlation between hidden states. These challenges can be mitigated by integrating second-order terms in the hidden-state update. One such model, multiplicative Long Short-Term Memory (mLSTM) is particularly interesting in its original formulation because of the sharing of its second-order term, referred to as the intermediate state. We explore these architectural improvements by introducing new models and testing them on character-level language modeling tasks. This allows us to establish the relevance of shared parametrization in recurrent language modeling.
One of the principal challenges in computational linguistics is to account for the word order of the document or utterance being processed BIBREF0 . Of course, the number of possible phrases grows exponentially with respect to a given phrase length, requiring an approximate approach to summarizing its content. Recurrent neural networks (RNNs) are such an approach, and they are used in various tasks in natural language processing (NLP), such as machine translation BIBREF1 , abstractive summarization BIBREF2 and question answering BIBREF3 . However, RNNs, as approximations, suffer from numerical troubles that have been identified, such as difficulty recovering from past errors when generating phrases. We take interest in a model that mitigates this problem, the multiplicative RNN (mRNN), and how it has been and can be combined to form new models. To evaluate these models, we use the task of recurrent language modeling, which consists in predicting the next token (character or word) in a document. This paper is organized as follows: RNNs and mRNNs are introduced respectively in Sections SECREF2 and SECREF3 . Section SECREF4 presents new and existing multiplicative models. Section SECREF5 describes the datasets and experiments performed, as well as results obtained. Section SECREF6 discusses and concludes our findings.
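For concreteness, here is a small NumPy sketch of the multiplicative recurrent update (the shared intermediate state m); the weight scales and toy sequence are arbitrary, and mLSTM extends the same idea by feeding m into the LSTM gates rather than using a plain tanh update.

import numpy as np

def mrnn_step(x, h_prev, Wmx, Wmh, Whx, Whm):
    # second-order (element-wise) interaction between input and previous hidden state
    m = (Wmx @ x) * (Wmh @ h_prev)        # shared intermediate state
    return np.tanh(Whx @ x + Whm @ m)

rng = np.random.default_rng(0)
d_in, d_h, d_m = 10, 8, 8
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_m, d_in), (d_m, d_h), (d_h, d_in), (d_h, d_m)]]
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):      # run over a toy sequence of 5 inputs
    h = mrnn_step(x, h, *params)
print(h.round(3))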
559
It looks like learning to paraphrase questions, a neural scoring model and an answer selection model cannot be trained end-to-end. How are they trained?
using multiple pivot sentences
We describe and validate a metric for estimating multi-class classifier performance based on cross-validation and adapted for improvement of small, unbalanced natural-language datasets used in chatbot design. Our experiences draw upon building recruitment chatbots that mediate communication between job-seekers and recruiters by exposing the ML/NLP dataset to the recruiting team. Evaluation approaches must be understandable to various stakeholders, and useful for improving chatbot performance. The metric, nex-cv, uses negative examples in the evaluation of text classification, and fulfils three requirements. First, it is actionable: it can be used by non-developer staff. Second, it is not overly optimistic compared to human ratings, making it a fast method for comparing classifiers. Third, it allows model-agnostic comparison, making it useful for comparing systems despite implementation differences. We validate the metric based on seven recruitment-domain datasets in English and German over the course of one year.
Smart conversational agents are increasingly used across business domains BIBREF0 . We focus on recruitment chatbots that connect recruiters and job-seekers. The recruiter teams we work with are motivated by reasons of scale and accessibility to build and maintain chatbots that provide answers to frequently asked questions (FAQs) based on ML/NLP datasets. Our enterprise clients may have up to INLINEFORM0 employees, and commensurate hiring rate. We have found that almost INLINEFORM1 of end-user (job-seeker) traffic occurs outside of working hours BIBREF1 , which is consistent with the anecdotal reports of our clients that using the chatbot helped reduce email and ticket inquiries of common FAQs. The usefulness of these question-answering conversational UIs depends on building and maintaining the ML/NLP components used in the overall flow (see Fig. FIGREF4 ). In practice, the use of NLP does not improve the experience of many chatbots BIBREF2 , which is unsurprising. Although transparency (being “honest and transparent when explaining why something doesn't work”) is a core design recommendation BIBREF3 , the most commonly available higher-level platforms BIBREF4 do not provide robust ways to understand error and communicate its implications. Interpretability is a challenge beyond chatbots, and is a prerequisite for trust in both individual predictions and the overall model BIBREF5 . The development of the nex-cv metric was driven by a need for a quantification useful to developers, as well as both vendor and client non-developer staff. The nex-cv metric uses plausible negative examples to perform actionable, model-agnostic evaluation of text classification as a component in a chatbot system. It was developed, validated, and used at jobpal, a recruiting chatbot company, in projects where a client company's recruiting team trains and maintains a semi-automated conversational agent's question-answering dataset. Use of ML and NLP is subject to conversation flow design considerations, and internal and external transparency needs BIBREF6 . The chatbots do not generate answers, but provide all responses from a bank that can be managed by client staff. Each of about a dozen live chatbots answers about INLINEFORM0 of incoming questions without having to defer to a human for an answer. About two thirds of the automated guesses are confirmed by recruiters; the rest are corrected (Fig. FIGREF9 ). In “Background”, we relate our work to prior research on curated ML/NLP datasets and evaluation in chatbots. In “Approach”, we describe the metric and provide its application and data context of use. In “Validation Datasets”, we describe the datasets with which this metric has been validated. In “Validation”, we provide results from experiments conducted while developing and using the metric for over a year, addressing each of the needs of the metric, which make it a useful tool for multiple stakeholders in the chatbot design and maintenance process. We contribute a metric definition, its validation with six real projects over the course of one year (2018.Q2 through 2019.Q1), as well as an extensible implementation and testing plan, which is described in “Metric Definition” below.
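The sketch below conveys the general idea of using negative examples in classifier evaluation (train without one category, then check that its questions receive low confidence and would be deferred to a human); the fold construction, threshold, classifier, and toy data are assumptions and differ from the exact nex-cv definition given in the paper's "Metric Definition" section.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def nex_like_score(texts, labels, held_out_label, threshold=0.5):
    # train without one category; measure accuracy on in-scope questions and the
    # rate at which the held-out category is rejected (low max probability)
    texts, labels = np.array(texts), np.array(labels)
    neg = labels == held_out_label
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts[~neg], labels[~neg])
    pos_acc = (clf.predict(texts[~neg]) == labels[~neg]).mean()
    neg_rejected = (clf.predict_proba(texts[neg]).max(axis=1) < threshold).mean()
    return pos_acc, neg_rejected

texts = ["how do I apply", "where do I send my cv",
         "what is the salary range", "is the salary negotiable",
         "do you offer health insurance", "what benefits do you offer",
         "do you offer parking", "is there a parking lot"]
labels = ["application", "application", "salary", "salary",
          "benefits", "benefits", "parking", "parking"]
print(nex_like_score(texts, labels, held_out_label="parking"))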
562
How much more accurate is the model than the baseline?
For the Oshiete-goo dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, Trans, by 0.021, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.037. For the nfL6 dataset, the NAGM model's ROUGE-L score is higher than the highest performing conventional model, CLSTM, by 0.028, and its BLEU-4 score is higher than the highest performing model CLSTM by 0.040. Human evaluation of the NAGM's generated outputs for the Oshiete-goo dataset had 47% ratings of (1), the highest rating, while CLSTM only received 21% ratings of (1). For the nfL6 dataset, the comparison of (1)'s was NAGM's 50% to CLSTM's 30%.
We study methods for learning sentence embeddings with syntactic structure. We focus on methods of learning syntactic sentence-embeddings by using a multilingual parallel-corpus augmented by Universal Parts-of-Speech tags. We evaluate the quality of the learned embeddings by examining sentence-level nearest neighbours and functional dissimilarity in the embedding space. We also evaluate the ability of the method to learn syntactic sentence-embeddings for low-resource languages and demonstrate strong evidence for transfer learning. Our results show that syntactic sentence-embeddings can be learned while using less training data, fewer model parameters, and resulting in better evaluation metrics than state-of-the-art language models.
Recent successes in language modelling and representation learning have largely focused on learning the semantic structures of language BIBREF0. Syntactic information, such as part-of-speech (POS) sequences, is an essential part of language and can be important for tasks such as authorship identification, writing-style analysis, translation, etc. Methods that learn syntactic representations have received relatively less attention, with focus mostly on evaluating the semantic information contained in representations produced by language models. Multilingual embeddings have been shown to achieve top performance in many downstream tasks BIBREF1, BIBREF2. By training over large corpora, these models have been shown to generalize to similar but unseen contexts. However, words contain multiple types of information: semantic, syntactic, and morphological. Therefore, it is possible that syntactically different passages have similar embeddings due to their semantic properties. On tasks like the ones mentioned above, discriminating using patterns that include semantic information may result in poor generalization, especially when datasets are not sufficiently representative. In this work, we study methods that learn sentence-level embeddings that explicitly capture syntactic information. We focus on variations of sequence-to-sequence models BIBREF3, trained using a multilingual corpus with universal part-of-speech (UPOS) tags for the target languages only. By using target-language UPOS tags in the training process, we are able to learn sentence-level embeddings for source languages that lack UPOS tagging data. This property can be leveraged to learn syntactic embeddings for low-resource languages. Our main contributions are: to study whether sentence-level syntactic embeddings can be learned efficiently, to evaluate the structure of the learned embedding space, and to explore the potential of learning syntactic embeddings for low-resource languages. We evaluate the syntactic structure of sentence-level embeddings by performing nearest-neighbour (NN) search in the embedding space. We show that these embeddings exhibit properties that correlate with similarities between UPOS sequences of the original sentences. We also evaluate the embeddings produced by language models such as BERT BIBREF0 and show that they contain some syntactic information. We further explore our method in the few-shot setting for low-resource source languages without large, high quality treebank datasets. We show its transfer-learning capabilities on artificial and real low-resource languages. Lastly, we show that training on multilingual parallel corpora significantly improves the learned syntactic embeddings. This is similar to existing results for models trained (or pre-trained) on multiple languages BIBREF4, BIBREF2 for downstream tasks BIBREF5.
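A small sketch of the nearest-neighbour evaluation described above: given sentence embeddings and their UPOS sequences, retrieve each sentence's cosine nearest neighbour and measure the edit distance between their UPOS sequences; the random stand-in embeddings and toy tag sequences are placeholders for the trained encoder's output.

import numpy as np

def upos_edit_distance(a, b):
    # standard Levenshtein distance over UPOS tag sequences
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

def nn_upos_distances(embeddings, upos_seqs):
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = E @ E.T
    np.fill_diagonal(sims, -np.inf)        # exclude the sentence itself
    nn = sims.argmax(axis=1)
    return [upos_edit_distance(upos_seqs[i], upos_seqs[j]) for i, j in enumerate(nn)]

rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 16))             # stand-in for learned sentence embeddings
upos = [["DET", "NOUN", "VERB"], ["DET", "NOUN", "VERB", "ADV"], ["PRON", "VERB"]]
print(nn_upos_distances(emb, upos))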
563
What are two strong baseline methods authors refer to?
Marcheggiani and Titov (2017) and Cai et al. (2018)
This paper tackles the goal of conclusion-supplement answer generation for non-factoid questions, which is a critical issue in the field of Natural Language Processing (NLP) and Artificial Intelligence (AI), as users often require supplementary information before accepting a conclusion. The current encoder-decoder framework, however, has difficulty generating such answers, since it may become confused when it tries to learn several different long answers to the same non-factoid question. Our solution, called an ensemble network, goes beyond single short sentences and fuses logically connected conclusion statements and supplementary statements. It extracts the context from the conclusion decoder's output sequence and uses it to create supplementary decoder states on the basis of an attention mechanism. It also assesses the closeness of the question encoder's output sequence and the separate outputs of the conclusion and supplement decoders as well as their combination. As a result, it generates answers that match the questions and have natural-sounding supplementary sequences in line with the context expressed by the conclusion sequence. Evaluations conducted on datasets including "Love Advice" and "Arts & Humanities" categories indicate that our model outputs much more accurate results than the tested baseline models do.
Question Answering (QA) modules play particularly important roles in recent dialog-based Natural Language Understanding (NLU) systems, such as Apple's Siri and Amazon's Echo. Users chat with AI systems in natural language to get the answers they are seeking. QA systems can deal with two types of question: factoid and non-factoid ones. The former sort asks, for instance, for the name of a thing or person such as “What/Who is $X$?”. The latter sort includes more diverse questions that cannot be answered by a short fact. For instance, users may ask for advice on how to make a long-distance relationship work well or for opinions on public issues. Significant progress has been made in answering factoid questions BIBREF0, BIBREF1; however, answering non-factoid questions remains a challenge for QA modules. Long short term memory (LSTM) sequence-to-sequence models BIBREF2, BIBREF3, BIBREF4 try to generate short replies to the short utterances often seen in chat systems. Evaluations have indicated that these models have the possibility of supporting simple forms of general knowledge QA, e.g. “Is the sky blue or black?”, since they learn commonly occurring sentences in the training corpus. Recent machine reading comprehension (MRC) methods BIBREF5, BIBREF6 try to return a single short answer to a question by extracting answer spans from the provided passages. Unfortunately, they may generate unsatisfying answers to regular non-factoid questions because they can easily become confused when learning several different long answers to the same non-factoid question, as pointed out by BIBREF7, BIBREF8. This paper tackles a new problem: conclusion-supplement answer generation for non-factoid questions. Here, the conclusion consists of sentences that directly answer the question, while the supplement consists of information supporting the conclusion, e.g., reasons or examples. Such conclusion-supplement answers are important for helping questioners decide their actions, especially in NLU. As described in BIBREF9, users prefer a supporting supplement before accepting an instruction (i.e., a conclusion). Good debates also include claims (i.e., conclusions) about a topic and supplements to support them that will allow users to reach decisions BIBREF10. The following example helps to explain how conclusion-supplement answers are useful to users: “Does separation by a long distance ruin love?” Current methods tend to answer this question with short and generic replies, such as, “Distance cannot ruin true love”. The questioner, however, is not likely to be satisfied with such a trite answer and will want to know how the conclusion was reached. If a supplemental statement like “separations certainly test your love” is presented with the conclusion, the questioner is more likely to accept the answer and use it to reach a decision. Furthermore, there may be multiple answers to a non-factoid question. For example, the following answer is also a potential answer to the question: “distance ruins most relationships. You should keep in contact with him”. The current methods, however, have difficulty generating such conclusion-supplement answers because they can become easily confused when they try to learn several different and long answers to a non-factoid question. To address the above problem, we propose a novel architecture, called the ensemble network. It is an extension of existing encoder-decoder models, and it generates two types of decoder output sequence, conclusion and supplement. 
It uses two viewpoints for selecting the conclusion statements and supplementary statements. (Viewpoint 1) The context present in the conclusion decoder's output is linked to supplementary-decoder output states on the basis of an attention mechanism. Thus, the context of the conclusion sequence directly impacts the decoder states of the supplement sequences. This, as a result, generates natural-sounding supplementary sequences. (Viewpoint 2) The closeness of the question sequence and conclusion (or supplement) sequence as well as the closeness of the question sequence with the combination of conclusion and supplement sequences is considered. By assessing the closeness at the sentence level and sentence-combination level in addition to at the word level, it can generate answers that include good supplementary sentences following the context of the conclusion. This avoids having to learn several different conclusion-supplement answers assigned to a single non-factoid question and generating answers whose conclusions and supplements are logically inconsistent with each other. Community-based QA (CQA) websites tend to provide answers composed of conclusion and supplementary statements; from our investigation, 77% of non-factoid answers (love advice) in the Oshiete-goo (https://oshiete.goo.ne.jp) dataset consist of these two statement types. The same is true for 82% of the answers in the Yahoo non-factoid dataset related to the fields of social science, society & culture and arts & humanities. We used the above-mentioned CQA datasets in our evaluations, since they provide diverse answers given by many responders. The results showed that our method outperforms existing ones at generating correct and natural answers. We also ran a love advice service in Oshiete goo to evaluate the usefulness of our ensemble network.
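A toy sketch of the sentence- and combination-level closeness comparison from Viewpoint 2, using averaged word vectors and cosine similarity; the real model assesses closeness with learned encoder and decoder states, so the vocabulary vectors and scoring here are purely illustrative.

import numpy as np

def embed(tokens, vocab_vectors):
    # mean of word vectors as a crude sentence representation
    vecs = [vocab_vectors[t] for t in tokens if t in vocab_vectors]
    return np.mean(vecs, axis=0)

def closeness(question, conclusion, supplement, vocab_vectors):
    q = embed(question, vocab_vectors)
    scores = {}
    for name, sent in [("conclusion", conclusion), ("supplement", supplement),
                       ("combination", conclusion + supplement)]:
        s = embed(sent, vocab_vectors)
        scores[name] = float(q @ s / (np.linalg.norm(q) * np.linalg.norm(s)))
    return scores

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=8) for w in
         "does distance ruin love separations test your keep in contact".split()}
print(closeness("does distance ruin love".split(),
                "distance cannot ruin true love".split(),
                "separations test your love".split(), vocab))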
564
How many category tags are considered?
14 categories
As a fundamental NLP task, semantic role labeling (SRL) aims to discover the semantic roles for each predicate within one sentence. This paper investigates how to incorporate syntactic knowledge into the SRL task effectively. We present different approaches of encoding the syntactic information derived from dependency trees of different quality and representations; we propose a syntax-enhanced self-attention model and compare it with other two strong baseline methods; and we conduct experiments with newly published deep contextualized word representations as well. The experiment results demonstrate that with proper incorporation of the high quality syntactic information, our model achieves a new state-of-the-art performance for the Chinese SRL task on the CoNLL-2009 dataset.
The task of semantic role labeling (SRL) is to recognize arguments for a given predicate in one sentence and assign labels to them, including “who” did “what” to “whom”, “when”, “where”, etc. Figure FIGREF1 is an example sentence with both semantic roles and syntactic dependencies. Since the nature of semantic roles is more abstract than that of syntactic dependencies, SRL has a wide range of applications in different areas, e.g., text classification BIBREF0, text summarization BIBREF1, BIBREF2, recognizing textual entailment BIBREF3, BIBREF4, information extraction BIBREF5, question answering BIBREF6, BIBREF7, and so on. Traditionally, syntax is the bridge to reach semantics. However, along with the popularity of end-to-end models in the NLP community, various recent studies have been discussing the necessity of syntax in the context of SRL. For instance, BIBREF8 have observed that only good syntax helps with the SRL performance. BIBREF9 have explored what kind of syntactic information or structure is better suited for the SRL model. BIBREF10 have compared syntax-agnostic and syntax-aware approaches and claim that the syntax-agnostic model surpasses the syntax-aware ones. In this paper, we focus on analyzing the relationship between the syntactic dependency information and the SRL performance. In particular, we investigate the following four aspects: 1) Quality of the syntactic information: whether the performance of the syntactic parser output affects the SRL performance; 2) Representation of the syntactic information: how to represent the syntactic dependencies to better preserve the original structural information; 3) Incorporation of the syntactic information: at which layer of the SRL model and how to incorporate the syntactic information; and 4) Relationship with other external resources: when we append other external resources to the SRL model, whether their contributions are orthogonal to the syntactic dependencies. For the main architecture of the SRL model, many neural-network-based models use a BiLSTM encoder (e.g., BIBREF10, BIBREF11, BIBREF12), while recently the self-attention-based encoder has become popular due to both its effectiveness and its efficiency BIBREF13, BIBREF14, BIBREF15. By its nature, the self-attention-based model directly captures the relation between words in the sentence, which makes it convenient to incorporate syntactic dependency information. BIBREF15 replace one attention head with pre-trained syntactic dependency information, which can be viewed as a hard way to inject syntax into the neural model. Inspired by the machine translation model proposed by BIBREF16, we introduce the Relation-Aware method to incorporate syntactic dependencies, which is a softer way to encode richer structural information. Various experiments for Chinese SRL on the CoNLL-2009 dataset are conducted to evaluate our hypotheses. From the empirical results, we observe that: 1) the quality of the syntactic information is essential when we incorporate structural information into the SRL model; 2) deeper integration of the syntactic information achieves better results than simple concatenation to the inputs; and 3) external pre-trained contextualized word representations help to boost the SRL performance further, and their contribution does not entirely overlap with that of the syntactic information.
In summary, the contributions of our work are: We present detailed experiments on different aspects of incorporating syntactic information into the SRL model, covering its quality, its representation, and how it is integrated. We introduce the relation-aware approach to employ syntactic dependencies in the self-attention-based SRL model. We compare our approach with previous studies, and achieve state-of-the-art results both with and without external resources, i.e., in the so-called open and closed settings.
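A minimal sketch of the relation-aware idea is given below: the attention logit for each word pair receives an extra term computed from an embedding of the dependency relation holding between the two words, in the spirit of the relation-aware translation model cited above. The class name, the single-head formulation, and all sizes are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RelationAwareSelfAttention(nn.Module):
    """Single-head self-attention whose logits are augmented with
    embeddings of the pairwise dependency relations (a sketch)."""
    def __init__(self, d_model=256, n_relations=50):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.rel_k = nn.Embedding(n_relations, d_model)   # relation "keys"
        self.scale = d_model ** 0.5

    def forward(self, x, rel_ids):
        # x: (B, T, d_model); rel_ids: (B, T, T) ids of the dependency
        # relation (or a "none" id) between every pair of words
        q, k, v = self.q(x), self.k(x), self.v(x)
        content = torch.matmul(q, k.transpose(-1, -2))     # (B, T, T)
        r = self.rel_k(rel_ids)                            # (B, T, T, d)
        relation = (q.unsqueeze(2) * r).sum(-1)            # (B, T, T)
        attn = torch.softmax((content + relation) / self.scale, dim=-1)
        return torch.matmul(attn, v)
```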
572
How are EAC evaluated?
Qualitatively through efficiency, effectiveness and satisfaction aspects and quantitatively through metrics such as precision, recall, accuracy, BLEU score and even human judgement.
In this paper, we study semantic role labelling (SRL), a subtask of semantic parsing of natural language sentences and its application for the Vietnamese language. We present our effort in building Vietnamese PropBank, the first Vietnamese SRL corpus and a software system for labelling semantic roles of Vietnamese texts. In particular, we present a novel constituent extraction algorithm in the argument candidate identification step which is more suitable and more accurate than the common node-mapping method. In the machine learning part, our system integrates distributed word features produced by two recent unsupervised learning models in two learned statistical classifiers and makes use of integer linear programming inference procedure to improve the accuracy. The system is evaluated in a series of experiments and achieves a good result, an $F_1$ score of 74.77%. Our system, including corpus and software, is available as an open source project for free research and we believe that it is a good baseline for the development of future Vietnamese SRL systems.
In this paper, we study semantic role labelling (SRL), a subtask of semantic parsing of natural language sentences. SRL is the task of identifying semantic roles of arguments of each predicate in a sentence. In particular, it answers a question Who did what to whom, when, where, why?. For each predicate in a sentence, the goal is to identify all constituents that fill a semantic role, and to determine their roles, such as agent, patient, or instrument, and their adjuncts, such as locative, temporal or manner. Figure 1 shows the SRL of a simple Vietnamese sentence. In this example, the arguments of the predicate giúp (helped) are labelled with their semantic roles. The meaning of the labels will be described in detail in Section "Building a Vietnamese PropBank" . SRL has been used in many natural language processing (NLP) applications such as question answering BIBREF0 , machine translation BIBREF1 , document summarization BIBREF2 and information extraction BIBREF3 . Therefore, SRL is an important task in NLP. The first SRL system was developed by Gildea and Jurafsky BIBREF4 . This system was performed on the English FrameNet corpus. Since then, SRL task has been widely studied by the NLP community. In particular, there have been two shared-tasks, CoNLL-2004 BIBREF5 and CoNLL-2005 BIBREF6 , focusing on SRL task for English. Most of the systems participating in these shared-tasks treated this problem as a classification problem which can be solved by supervised machine learning techniques. There exists also several systems for other well-studied languages like Chinese BIBREF7 or Japanese BIBREF8 . This paper covers not only the contents of two works published in conference proceedings BIBREF9 (in Vietnamese) and BIBREF10 on the construction and the evaluation of a first SRL system for Vietnamese, but also an extended investigation of techniques used in SRL. More concretely, the use of integer linear programming inference procedure and distributed word representations in our semantic role labelling system, which leads to improved results over our previous work, as well as a more elaborate evaluation are new for this article. Our system includes two main components, a SRL corpus and a SRL software which is thoroughly evaluated. We employ the same development methodology of the English PropBank to build a SRL corpus for Vietnamese containing a large number of syntactically parsed sentences with predicate-argument structures. We then use this SRL corpus and supervised machine learning models to develop a SRL software for Vietnamese. We demonstrate that a simple application of SRL techniques developed for English or other languages could not give a good accuracy for Vietnamese. In particular, in the constituent identification step, the widely used 1-1 node-mapping algorithm for extracting argument candidates performs poorly on the Vietnamese dataset, having $F_1$ score of 35.93%. We thus introduce a new algorithm for extracting candidates, which is much more accurate, achieving an $F_1$ score of 84.08%. In the classification step, in addition to the common linguistic features, we propose novel and useful features for use in SRL, including function tags and distributed word representations. These features are employed in two statistical classification models, maximum entropy and support vector machines, which are proved to be good at many classification problems. 
In order to incorporate important grammatical constraints into the system to improve further the performance, we combine machine learning techniques with an inference procedure based on integer linear programming. Finally, we use distributed word representations produced by two recent unsupervised models, the Skip-gram model and the GloVe model, on a large corpus to alleviate the data sparseness problem. These word embeddings help our SRL software system generalize well on unseen words. Our final system achieves an $F_1$ score of 74.77% on a test corpus. This system, including corpus and software, is available as an open source project for free research and we believe that it is a good baseline for the development of future Vietnamese SRL systems. The remainder of this paper is structured as follows. Section "Existing English SRL Corpora" describes the construction of a SRL corpus for Vietnamese. Section "Vietnamese SRL System" presents the development of a SRL software, including the methodologies of existing systems and of our system. Section "Evaluation" presents the evaluation results and discussion. Finally, Section "Conclusion" concludes the paper and suggests some directions for future work.
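The ILP inference step can be illustrated with a small sketch: given the classifiers' scores for every candidate constituent, choose a label assignment that maximizes the total score subject to structural constraints. The particular constraints below (exactly one label per candidate, each core role used at most once per predicate) and the use of the PuLP solver are illustrative assumptions; the constraints actually encoded in the system may differ.

```python
import pulp

def ilp_decode(scores, core_roles=("Arg0", "Arg1")):
    """scores[i][label] is the classifier score of `label` for candidate i,
    where one of the labels is assumed to be "O" (no role). Returns the
    highest-scoring consistent labeling (a sketch of ILP-based inference)."""
    prob = pulp.LpProblem("srl_inference", pulp.LpMaximize)
    x = {(i, l): pulp.LpVariable(f"x_{i}_{l}", cat="Binary")
         for i, labels in enumerate(scores) for l in labels}
    # objective: total score of the selected labels
    prob += pulp.lpSum(scores[i][l] * x[i, l] for (i, l) in x)
    # each candidate constituent receives exactly one label
    for i, labels in enumerate(scores):
        prob += pulp.lpSum(x[i, l] for l in labels) == 1
    # each core role is assigned to at most one candidate
    for role in core_roles:
        prob += pulp.lpSum(x[i, l] for (i, l) in x if l == role) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [max(labels, key=lambda l: x[i, l].value())
            for i, labels in enumerate(scores)]
```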
573
What is triangulation?
Answer with content missing: (Chapter 3) The concept can be easily explained with an example, visualized in Figure 1. Consider the Portuguese (Pt) word trabalho which, according to the MUSE Pt–En dictionary, has the words job and work as possible En translations. In turn, these two En words can be translated to 4 and 5 Czech (Cs) words respectively. By utilizing the transitive property (which translation should exhibit) we can identify the set of 7 possible Cs translations for the Pt word trabalho.
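The composition described in the answer can be written down directly: every pivot-language translation of a source word contributes its own target-language translations to the source word's candidate set. The sketch below reproduces the Pt-En-Cs example; the Czech entries are illustrative placeholders rather than the actual MUSE dictionary contents.

```python
def triangulate(src_to_pivot, pivot_to_tgt):
    """Compose two bilingual dictionaries through a pivot language."""
    result = {}
    for src, pivots in src_to_pivot.items():
        candidates = set()
        for p in pivots:
            candidates.update(pivot_to_tgt.get(p, []))
        if candidates:
            result[src] = sorted(candidates)
    return result

# Toy version of the Pt -> En -> Cs example from the text
# (Czech words are placeholders, not the real MUSE entries)
pt_en = {"trabalho": ["job", "work"]}
en_cs = {"job": ["práce", "zaměstnání", "úkol", "místo"],
         "work": ["práce", "dílo", "pracovat", "zaměstnání", "fungovat"]}
print(triangulate(pt_en, en_cs))   # 7 distinct Czech candidates
```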
The development of textual conversational agents, or chatbots, has gathered tremendous traction from both academia and industry in recent years. Nowadays, chatbots are widely used as agents to communicate with humans in services such as booking assistance and customer service, and also as personal partners. The biggest challenge in building a chatbot is to humanize the machine in order to improve user engagement. Some studies show that emotion is an important aspect of humanizing machines, including chatbots. In this paper, we provide a systematic review of approaches to building an emotionally-aware chatbot (EAC). To the best of our knowledge, there is still no work focusing on this area. We propose three research questions regarding EAC studies. We start with the history and evolution of EAC, then cover several approaches to building EAC proposed by previous studies, and finally some available resources for building EAC. Based on our investigation, we found that early EAC exploited simple rule-based approaches, while most current EAC use neural-based approaches. We also notice that most EAC contain an emotion classifier in their architecture, which utilizes several available affective resources. We also predict that the development of EAC will continue to gain more and more attention from scholars, as indicated by several recent studies that propose new datasets for building EAC in various languages.
The development of conversational agents, or dialogue systems, has been gaining more attention from both industry and academia BIBREF0, BIBREF1 in recent years. Some works tried to model them for domain-specific tasks such as customer service BIBREF2, BIBREF3 and shopping assistance BIBREF4. Other works design multi-purpose agents such as Siri, Amazon Alexa, and Google Assistant. This domain is a well-researched area in the human-computer interaction research community, but it is still a hot topic now. The main development focus right now is to build an intelligent and human-like machine that achieves better engagement when communicating with humans BIBREF5. Better engagement leads to higher user satisfaction, which is the main objective from the industry perspective. In this study, we focus only on the textual conversational agent, or chatbot: a conversational artificial intelligence that can conduct textual communication with a human by exploiting several natural language processing techniques. There are several approaches to building a chatbot, ranging from simple rule-based approaches BIBREF6, BIBREF7 to more sophisticated neural-based techniques BIBREF8, BIBREF9. Nowadays, chatbots are mostly used for customer service, such as booking systems BIBREF10, BIBREF11 and shopping assistance BIBREF3, or simply as conversational partners, such as Endurance and Insomnobot. Therefore, there is significant urgency to humanize chatbots in order to achieve better user engagement. Several approaches have already been proposed to improve a chatbot's user engagement, such as building a context-aware chatbot BIBREF12 and injecting personality into the machine BIBREF13. Other works also try to incorporate affective computing to build emotionally-aware chatbots BIBREF2, BIBREF14, BIBREF15. Some existing studies show that adding emotion information to dialogue systems can improve user satisfaction BIBREF16, BIBREF17. Emotion information contributes to a more positive interaction between machine and human, which leads to less miscommunication BIBREF18. Some previous studies also found that using affect information can help a chatbot understand users' emotional state and generate better responses BIBREF19. Beyond emotion, another study also introduces the use of tone to improve service satisfaction. For instance, using an empathetic tone reduces user stress and results in more engagement. BIBREF2 found that tone is an important aspect in building a customer care chatbot; they discover eight different tones, including anxious, frustrated, impolite, passionate, polite, sad, satisfied, and empathetic. In this paper, we summarize previous studies that focus on injecting emotion information into chatbots, with the aim of identifying recent issues and barriers in building engaging emotionally-aware chatbots. Therefore, we propose some research questions to obtain a better problem definition: This paper is organized as follows: Section 2 introduces the history of the relation between affective information and chatbots. Section 3 outlines works that try to inject affective information into chatbots. Section 4 summarizes affective resources that can be utilized to provide affective information. Then, Section 5 describes evaluation metrics that have already been applied in previous works related to emotionally-aware chatbots.
Finally, Section 6 concludes the paper and provides a prediction of future developments in this research direction based on our analysis.
578
What languages do they use?
Training languages are Cantonese, Bengali, Pashto, Turkish, Vietnamese, Haitian, Tamil, Kurdish, Tokpisin, and Georgian, while Assamese, Tagalog, Swahili, and Lao are used as target languages.
Word2vec is a popular family of algorithms for unsupervised training of dense vector representations of words on large text corpuses. The resulting vectors have been shown to capture semantic relationships among their corresponding words, and have shown promise in reducing a number of natural language processing (NLP) tasks to mathematical operations on these vectors. While heretofore applications of word2vec have centered around vocabularies with a few million words, wherein the vocabulary is the set of words for which vectors are simultaneously trained, novel applications are emerging in areas outside of NLP with vocabularies comprising several 100 million words. Existing word2vec training systems are impractical for training such large vocabularies as they either require that the vectors of all vocabulary words be stored in the memory of a single server or suffer unacceptable training latency due to massive network data transfer. In this paper, we present a novel distributed, parallel training system that enables unprecedented practical training of vectors for vocabularies with several 100 million words on a shared cluster of commodity servers, using far less network traffic than the existing solutions. We evaluate the proposed system on a benchmark dataset, showing that the quality of vectors does not degrade relative to non-distributed training. Finally, for several quarters, the system has been deployed for the purpose of matching queries to ads in Gemini, the sponsored search advertising platform at Yahoo, resulting in significant improvement of business metrics.
Embedding words in a common vector space can enable machine learning algorithms to achieve better performance in natural language processing (NLP) tasks. Word2vec BIBREF0 is a recently proposed family of algorithms for training such vector representations from unstructured text data via shallow neural networks. The geometry of the resulting vectors was shown in BIBREF0 to capture word semantic similarity through the cosine similarity of the corresponding vectors as well as more complex semantic relationships through vector differences, such as vec(“Madrid”) - vec(“Spain”) + vec(“France”) $\approx$ vec(“Paris”). More recently, novel applications of word2vec involving unconventional generalized “words” and training corpuses have been proposed. These powerful ideas from the NLP community have been adapted by researchers from other domains to tasks beyond representation of words, including relational entities BIBREF1, BIBREF2, general text-based attributes BIBREF3, descriptive text of images BIBREF4, nodes in graph structure of networks BIBREF5, and queries BIBREF6, to name a few. While most NLP applications of word2vec do not require training of large vocabularies, many of the above mentioned real-world applications do. For example, the number of unique nodes in a social network BIBREF5 or the number of unique queries in a search engine BIBREF6 can easily reach a few hundred million, a scale that is not achievable using existing word2vec implementations. The training of vectors for such large vocabularies presents several challenges. In word2vec, each vocabulary word has two associated $d$-dimensional vectors which must be trained, respectively referred to as input and output vectors, each of which is represented as an array of $d$ single precision floating point numbers BIBREF0. To achieve acceptable training latency, all vectors need to be kept in physical memory during training, and, as a result, word2vec requires $2 \times 4d = 8d$ bytes of RAM per word, i.e., $8d|\mathcal{V}|$ bytes to train a vocabulary $\mathcal{V}$. For example, in Section SECREF2, we discuss the search advertisement use case with 200 million generalized words and $d = 300$, which would thus require $8 \times 300 \times 2 \times 10^8$ = 480GB of memory, which is well beyond the capacity of typical commodity servers today. Another issue with large vocabulary word2vec training is that the training corpuses required for learning meaningful vectors for such large vocabularies are themselves very large, on the order of 30 to 90 billion generalized words in the mentioned search advertising application, for example, leading to potentially prohibitively long training times. This is problematic for the envisioned applications which require frequent retraining of vectors as additional data containing new “words” becomes available. The best known approach for refreshing vectors is to periodically retrain on a suitably large window comprised of the most recent available data. In particular, we found that tricks like freezing the vectors for previously trained words don't work as well. The training latency is thus directly linked to staleness of the vectors and should be kept as small as feasible without compromising quality. Our main contribution is a novel distributed word2vec training system for commodity shared compute clusters that addresses these challenges. The proposed system: As discussed in Section SECREF4, to the best of our knowledge, this is the first word2vec training system that is truly scalable in both of these aspects.
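A quick sanity check of the memory arithmetic described above, assuming the usual two vectors of d single-precision floats per vocabulary word:

```python
def word2vec_ram_bytes(vocab_size, dim, bytes_per_float=4, vectors_per_word=2):
    """RAM needed to hold all input and output vectors in memory:
    |V| words x 2 vectors x d floats x 4 bytes = 8 * d * |V| bytes."""
    return vocab_size * vectors_per_word * dim * bytes_per_float

# The search-advertising case described above: 200M generalized words, d = 300
print(word2vec_ram_bytes(200_000_000, 300) / 1e9, "GB")   # 480.0 GB
```

With 200 million generalized words and 300-dimensional vectors this reproduces the 480GB figure, which is exactly what motivates distributing the vectors across a cluster rather than a single server.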
We have implemented the proposed word2vec training system in Java and Scala, leveraging the open source building blocks Apache Slider BIBREF10 and Apache Spark BIBREF11 running on a Hadoop YARN-scheduled cluster BIBREF12 , BIBREF13 . Our word2vec solution enables the aforementioned applications to efficiently train vectors for unprecedented vocabulary sizes. Since late 2015, it has been incorporated into the Yahoo Gemini Ad Platform (https://gemini.yahoo.com) as a part of the “broad” ad matching pipeline, with regular retraining of vectors based on fresh user search session data.
580
How do they evaluate their approach?
They evaluate newly proposed models in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise
Unsupervised neural machine translation (UNMT) has recently achieved remarkable results with only large monolingual corpora in each language. However, the uncertainty of associating target with source sentences makes UNMT theoretically an ill-posed problem. This work investigates the possibility of utilizing images for disambiguation to improve the performance of UNMT. Our assumption is intuitively based on the invariant property of image, i.e., the description of the same visual content by different languages should be approximately similar. We propose an unsupervised multi-modal machine translation (UMNMT) framework based on the language translation cycle consistency loss conditional on the image, targeting to learn the bidirectional multi-modal translation simultaneously. Through an alternate training between multi-modal and uni-modal, our inference model can translate with or without the image. On the widely used Multi30K dataset, the experimental results of our approach are significantly better than those of the text-only UNMT on the 2016 test dataset.
Our long-term goal is to build intelligent systems that can perceive their visual environment and understand the linguistic information, and further make an accurate translation inference to another language. Since images have become an important source for humans to learn and acquire knowledge (e.g., video lectures, BIBREF1, BIBREF2, BIBREF3), the visual signal might be able to disambiguate certain semantics. One way to make image content easier and faster to be understood by humans is to combine it with a narrative description that can be self-explanatory. This is particularly important for many natural language processing (NLP) tasks as well, such as image captioning BIBREF4 and task-specific translation such as sign language translation BIBREF5. However, BIBREF6 demonstrates that most multi-modal translation algorithms are not significantly better than an off-the-shelf text-only machine translation (MT) model for the Multi30K dataset BIBREF7. There remains an open question about how translation models should take advantage of visual context, because from the perspective of information theory, the mutual information of two random variables, $I(X;Y)$, will always be no greater than $I(X;Y,Z)$, due to the following fact: $$I(X;Y,Z) - I(X;Y) = KL\big(p(X,Y,Z)\,\Vert\, p(X|Y)p(Z|Y)p(Y)\big), \qquad \text{(Eq. 1)}$$ where the Kullback-Leibler (KL) divergence is non-negative. This conclusion makes us believe that the visual content will hopefully help the translation systems. Since the standard paradigm of multi-modal translation always considers the problem as a supervised learning task, the parallel corpus is usually sufficient to train a good translation model, and the gain from the extra image input is very limited. Moreover, the scarcity of well-formed datasets including both images and the corresponding multilingual text descriptions is another constraint preventing the development of larger-scale models. In order to address this issue, we propose to formulate the multi-modal translation problem as an unsupervised learning task, which is closer to real applications. This is particularly important given the massive amounts of paired image and text data being produced every day (e.g., a news title and its illustrating picture). Our idea is originally inspired by the text-only unsupervised MT (UMT) BIBREF8, BIBREF9, BIBREF0, which investigates whether it is possible to train a general MT system without any form of supervision. As BIBREF0 discussed, text-only UMT is fundamentally an ill-posed problem, since there are potentially many ways to associate target with source sentences. Intuitively, since the visual content and language are closely related, the image can play the role of a pivot “language” to bridge the two languages without a parallel corpus, making the problem “more well-defined” by reducing it to supervised learning. However, unlike text translation, which involves word generation (usually over a discrete distribution), generating a dense image from a sentence description is itself a challenging problem BIBREF10. High-quality image generation usually depends on a complicated or large-scale neural network architecture BIBREF11, BIBREF12, BIBREF13. Thus, it is not recommended to utilize the image dataset as a pivot “language” BIBREF14. Motivated by cycle-consistency BIBREF15, we tackle the unsupervised translation with a multi-modal framework which includes two sequence-to-sequence encoder-decoder models and one shared image feature extractor.
We do not introduce adversarial learning via a discriminator because of the non-differentiable $\arg\max$ operation during word generation. With five modules in our framework, there are multiple data streaming paths in the computation graph, inducing the auto-encoding loss and the cycle-consistency loss, in order to achieve unsupervised translation. Another challenge of unsupervised multi-modal translation, and more broadly of general multi-modal translation tasks, is the need to develop a reasonable multi-source encoder-decoder model that is capable of handling multi-modal documents. Moreover, during the training and inference stages, it is better to process a mixed data format including both uni-modal and multi-modal corpora. First, this challenge highly depends on the attention mechanism across different domains. Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) are naturally suitable for encoding language text and visual images, respectively; however, the encoded features of an RNN have an autoregressive property, which is different from the local dependency of a CNN. The multi-head self-attention transformer BIBREF16 can mimic the convolution operation and allows each head to use different linear transformations, so that in turn different heads can learn different relationships. Unlike an RNN, it reduces the length of the paths from states in the higher layer to all states in the lower layer to one, and thus facilitates more effective learning. For example, the BERT model BIBREF17, which is completely built upon self-attention, has achieved remarkable performance in 11 natural language tasks. Therefore, we employ the transformer in both the text encoder and the decoder of our model, and design a novel joint attention mechanism to simulate the relationships among the three domains. Besides, the mixed data format requires the attention mechanism to support a flexible data stream. In other words, the batch fetched at each iteration can be either uni-modal text data or multi-modal text-image paired data, allowing the model to be adaptive to various data during inference as well. Succinctly, our contributions are three-fold: (1) We formulate the multi-modal MT problem in an unsupervised setting that better fits the real scenario and propose an end-to-end transformer-based multi-modal model. (2) We present two technical contributions: successfully training the proposed model with auto-encoding and cycle-consistency losses, and designing a controllable attention module to deal with both uni-modal and multi-modal data. (3) We apply our approach to the Multilingual Multi30K dataset in English $\leftrightarrow$ French and English $\leftrightarrow$ German translation tasks, and the translation output and the attention visualization show that the gain from the extra image is significant in the unsupervised setting.
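The two training signals can be summarized in a short schematic. The noise model, the function names, and the direction tags are assumptions introduced for illustration: `translate` stands in for the shared encoder-decoder passes conditioned on the image, `seq_loss` for a token-level cross-entropy, and the corruption step mirrors the denoising auto-encoders commonly used in UNMT, which may differ from the paper's exact formulation.

```python
import random

def add_noise(tokens, p_drop=0.1, k=3):
    """Toy noise model for the auto-encoding objective:
    drop tokens at random and shuffle them within a small window."""
    kept = [t for t in tokens if random.random() > p_drop] or tokens[:1]
    order = sorted(range(len(kept)), key=lambda i: i + random.uniform(0, k))
    return [kept[i] for i in order]

def umnmt_losses(x_src, image, translate, seq_loss):
    """Schematic of the two training signals for one direction
    (src -> tgt -> src); the symmetric target-side terms are analogous."""
    # auto-encoding: reconstruct the clean sentence from a corrupted one
    loss_auto = seq_loss(translate(add_noise(x_src), image, "src->src"), x_src)
    # cycle consistency: translate to the other language and back,
    # conditioning both passes on the same image
    y_hat = translate(x_src, image, "src->tgt")
    loss_cycle = seq_loss(translate(y_hat, image, "tgt->src"), x_src)
    return loss_auto + loss_cycle
```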
581
How large is the corpus?
It contains 106,350 documents
In low-resource settings, the performance of supervised labeling models can be improved with automatically annotated or distantly supervised data, which is cheap to create but often noisy. Previous works have shown that significant improvements can be reached by injecting information about the confusion between clean and noisy labels in this additional training data into the classifier training. However, for noise estimation, these approaches either do not take the input features (in our case word embeddings) into account, or they need to learn the noise modeling from scratch which can be difficult in a low-resource setting. We propose to cluster the training data using the input features and then compute different confusion matrices for each cluster. To the best of our knowledge, our approach is the first to leverage feature-dependent noise modeling with pre-initialized confusion matrices. We evaluate on low-resource named entity recognition settings in several languages, showing that our methods improve upon other confusion-matrix based methods by up to 9%.
Most languages, even those with millions of speakers, have not been a focus of natural language processing research and count as low-resource for tasks like named entity recognition (NER). Similarly, even for high-resource languages, there is only little labeled data for most entity types beyond person, location and organization. Distantly- or weakly-supervised approaches have been proposed to solve this issue, e.g., by using lists of entities for labeling raw text BIBREF0, BIBREF1. This allows obtaining large amounts of training data quickly and cheaply. Unfortunately, these labels often contain errors, and learning with this noisily-labeled data is difficult and can even reduce overall performance (see, e.g., BIBREF2). A variety of ideas have been proposed to overcome the issues of noisy training data. One popular approach is to estimate the relation between noisy and clean, gold-standard labels and use this noise model to improve the training procedure. However, most of these approaches only assume a dependency between the labels and do not take the features into account when modeling the label noise. This may disregard important information. The global confusion matrix BIBREF3 is a simple model which assumes that the errors in the noisy labels just depend on the clean labels. Our contributions are as follows: We propose to cluster the input words with the help of additional, unlabeled data. Based on this partition of the feature space, we obtain different confusion matrices that describe the relationship between clean and noisy labels. We evaluate our newly proposed models and related baselines in several low-resource settings across different languages with real, distantly supervised data with non-synthetic noise. The advanced modeling of the noisy labels substantially improves performance, by up to 36% over methods without noise handling and by up to 9% over all other noise-handling baselines.
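The proposed feature-dependent noise model can be sketched as follows: word embeddings are clustered with the help of unlabeled data, and a separate clean-to-noisy confusion matrix is estimated for each cluster on the small set of tokens for which both clean and noisy labels are available. The number of clusters, the tag-set size, the smoothing constant, and the use of k-means are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_confusion_matrices(embeddings, clean_labels, noisy_labels,
                               n_clusters=5, n_tags=9, smoothing=1.0):
    """embeddings: (n_tokens, dim) word vectors; clean_labels / noisy_labels:
    integer tag ids for the same tokens. Returns cluster assignments and one
    row-normalized confusion matrix p(noisy | clean, cluster) per cluster."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    mats = np.full((n_clusters, n_tags, n_tags), smoothing)
    for c, y_clean, y_noisy in zip(clusters, clean_labels, noisy_labels):
        mats[c, y_clean, y_noisy] += 1.0
    mats /= mats.sum(axis=2, keepdims=True)
    return clusters, mats
```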
584
What was their perplexity score?
Perplexity score 142.84 on dev and 138.91 on test
To combat fake news, researchers mostly focused on detecting fake news and journalists built and maintained fact-checking sites (e.g., this http URL and this http URL). However, fake news dissemination has been greatly promoted via social media sites, and these fact-checking sites have not been fully utilized. To overcome these problems and complement existing methods against fake news, in this paper we propose a deep-learning based fact-checking URL recommender system to mitigate impact of fake news in social media sites such as Twitter and Facebook. In particular, our proposed framework consists of a multi-relational attentive module and a heterogeneous graph attention network to learn complex/semantic relationship between user-URL pairs, user-user pairs, and URL-URL pairs. Extensive experiments on a real-world dataset show that our proposed framework outperforms eight state-of-the-art recommendation models, achieving at least 3~5.3% improvement.
While social media sites provide users with a revolutionary communication medium by bringing communication efficiency to a new level, they can easily be misused to spread misinformation and fake news widely. Fake news and misinformation have been a long-standing issue, serving purposes such as political propaganda BIBREF0 and financial propaganda BIBREF1. To fight against fake news, traditional publishers employed human editors to manually and carefully check the content of news articles in order to maintain their reputation. However, social media provided a new way to spread news, which led to broader information sources and an expanded audience (i.e., anyone can act as a media outlet and create news). In particular, users share news articles with their own opinions or read articles shared by their friends, whatever the news source is, with mostly blind trust BIBREF2 or in line with their own ideologies BIBREF3, BIBREF4. Although social media posts usually have a very short life cycle, the unprecedented amount of fake news may lead to a catastrophic impact on both individuals and society. Besides misleading users with false information BIBREF4, widely propagated fake news could even cause a trust crisis of the entire news ecosystem BIBREF5, further affecting both cyberspace and physical space. In the literature, researchers have focused on four topics regarding fake news: characterization (i.e., types of fake news), motivation, circulation, and countermeasures BIBREF6, BIBREF7. A large body of work has been done on fake news identification BIBREF5, BIBREF8, BIBREF9, BIBREF10 by exploiting multiple content-related and social-related components. However, we notice that fake news is still widely spread even after early detection BIBREF11. Therefore, we propose to study a complementary approach to mitigate the spread and impact of fake news. Recently, the community and journalists have started building and maintaining fact-checking websites (e.g., Snopes.com). Social media users called fact-checkers have also started using these fact-checking pages as factual evidence to debunk fake news by replying to fake news posters. Figure FIGREF1 demonstrates a real-world example of a fact-checker's fact-checking behavior on Twitter: debunking another user's false claim with a Snopes page URL as evidence to support the factual correction. In BIBREF12, researchers found that these fact-checkers actively debunked fake news mostly within one day, and their replies were exposed to hundreds of millions of users. To motivate these fact-checkers to engage with fake news posters more quickly and to intelligently consume the increasing volume of fact-checking articles, in this paper we propose a novel personalized fact-checking URL recommender system. According to BIBREF13, a co-occurrence matrix within a given context provides information about the semantic similarity between two objects. Therefore, in our proposed deep-learning based recommender system, we employ two extended matrices, a user-user co-occurrence matrix and a URL-URL co-occurrence matrix, to facilitate our recommendation. In addition, users tend to form relationships with like-minded people BIBREF14. Therefore, we incorporate each user's social context to capture semantic relations and enhance the recommendation performance. Our main contributions are summarized as follows: We propose a new framework for personalized fact-checking URL recommendation, which relies on multi-relational context neighbors.
We propose two attention mechanisms which allow for learning deep semantic representations of both a target user and a target URL at different granularities. Experimental results show that our proposed model outperforms eight state-of-the-art baselines, covering various types of recommendation approaches. An ablation study confirms the effectiveness of each component in our proposed framework.
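As an illustration of the two auxiliary signals mentioned above, the sketch below builds user-user and URL-URL co-occurrence counts from raw (user, URL) fact-checking events: two URLs co-occur when the same user posted both, and two users co-occur when they posted the same URL. The exact context definition used in the paper may differ, so treat this as a simplified assumption.

```python
from collections import defaultdict
from itertools import combinations

def co_occurrence_matrices(interactions):
    """interactions: iterable of (user, url) fact-checking events.
    Returns sparse co-occurrence counts as dictionaries keyed by pairs."""
    by_user, by_url = defaultdict(set), defaultdict(set)
    for user, url in interactions:
        by_user[user].add(url)
        by_url[url].add(user)
    url_url, user_user = defaultdict(int), defaultdict(int)
    # URLs co-occur when shared by the same user
    for urls in by_user.values():
        for a, b in combinations(sorted(urls), 2):
            url_url[a, b] += 1
    # users co-occur when they share the same URL
    for users in by_url.values():
        for a, b in combinations(sorted(users), 2):
            user_user[a, b] += 1
    return url_url, user_user
```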
586
What do they formulate the question generation as?
LASSO optimization problem
Self-attention (SA) network has shown profound value in image captioning. In this paper, we improve SA from two aspects to promote the performance of image captioning. First, we propose Normalized Self-Attention (NSA), a reparameterization of SA that brings the benefits of normalization inside SA. While normalization is previously only applied outside SA, we introduce a novel normalization method and demonstrate that it is both possible and beneficial to perform it on the hidden activations inside SA. Second, to compensate for the major limit of Transformer that it fails to model the geometry structure of the input objects, we propose a class of Geometry-aware Self-Attention (GSA) that extends SA to explicitly and efficiently consider the relative geometry relations between the objects in the image. To construct our image captioning model, we combine the two modules and apply it to the vanilla self-attention network. We extensively evaluate our proposals on MS-COCO image captioning dataset and superior results are achieved when comparing to state-of-the-art approaches. Further experiments on three challenging tasks, i.e. video captioning, machine translation, and visual question answering, show the generality of our methods.
Automatically generating captions for images, namely image captioning BIBREF0, BIBREF1, has emerged as a prominent research problem at the intersection of computer vision (CV) and natural language processing (NLP). This task is challenging as it requires first recognizing the objects in the image and the relationships between them, and finally properly organizing and describing them in natural language. Inspired by the sequence-to-sequence model for machine translation, most image captioning approaches adopt an encoder-decoder paradigm, which uses a deep convolutional neural network (CNN) to encode the input image as a vectorial representation, and a recurrent neural network (RNN) based caption decoder to generate the output caption. Recently, self-attention (SA) networks, denoted as SANs, have been introduced by BIBREF2, BIBREF3 to replace conventional RNNs in image captioning. Since its first introduction in the Transformer BIBREF4, SA and its variants have shown promising empirical results in a wide range of CV BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 and NLP BIBREF11, BIBREF12, BIBREF13 tasks. Although the SAN-based framework has achieved state-of-the-art performance in image captioning, two problems remain to be solved. Firstly, SA is susceptible to the internal covariate shift BIBREF14 problem. Typically, SA is regarded as a mapping of a set of query and key/value pairs. We observe, from another perspective, that the computation of the attention weights in SA could be considered as feeding the queries into a fully-connected layer whose parameters are dynamically computed according to the inputs. A problem can arise when the distribution of the queries shifts due to the change in network parameters during training. That is, the subsequent layers have to continuously adapt to the new input distribution, and consequently, SA may not be learned effectively. This problem is called “Internal Covariate Shift” in BIBREF14: the tendency of the distribution of activations to drift during training in a feed-forward network. To eliminate the internal covariate shift problem inside SA, in this paper, we introduce an effective reparameterization of SA, named Normalized Self-Attention (NSA). NSA performs a novel normalization method on the hidden activations of SA to fix their distributions. By doing so, we can effectively decouple the fully-connected layer's parameters from those of other layers, leading to a better-conditioned optimization of SA. While Layer Normalization (LN) BIBREF15 has proven to be critical for enabling the convergence of the Transformer, LN is only applied outside SA blocks. To our knowledge, there has not been any deep exploration to find a suitable normalization method inside SA. We demonstrate that our NSA can collaborate with LN to bring improved generalization for SA-based networks. Another critical issue in SA is its inability to model the geometric relationships among input elements. The vanilla self-attention treats its inputs as a “bag of features”, simply neglecting their structure and the relationships between them. However, the objects in the image, from which the region-based visual features are extracted for image captioning, inherently have geometric structure: 2D spatial layout and variations in scale/aspect ratio. Such inherent geometric relationships between objects play a very complex yet critical role in understanding the image content.
One common solution for injecting position information into SA is adding representations of absolute positions to each element of the inputs, as is often done for 1D sentences. Nonetheless, this solution does not work well for image captioning because the 2D geometry relations between objects are harder to infer from their absolute positions. We present a more efficient approach to the above problem: explicitly incorporating relative geometry relationships between objects into SA. The module is named Geometry-aware Self-Attention (GSA). GSA extends the original attention weight into two components: the original content-based weight, and a new geometric bias, which is efficiently calculated from the relative geometry relations and, importantly, the content of the associated elements, i.e., query or key. By combining both NSA and GSA, we obtain an enhanced SA module. We then construct our Normalized and Geometry-aware Self-Attention Network, namely NG-SAN, by replacing the vanilla SA modules in the encoder of the self-attention network with the proposed one. Extensive experiments on MS-COCO validate the effectiveness of our proposals. In particular, our NG-SAN establishes a new state-of-the-art on the MS-COCO evaluation server, improving the best single-model result in terms of CIDEr from 125.5 to 128.6. To demonstrate the generality of NSA, we further present video captioning, machine translation, and visual question answering experiments on the VATEX, WMT 2014 English-to-German, and VQA-v2 datasets, respectively. On top of the strong Transformer-based baselines, our methods can consistently increase accuracies on all tasks at a negligible extra computational cost. To summarize, the main contributions of this paper are three-fold: We present Normalized Self-Attention, an effective reparameterization of self-attention, which brings the benefits of the normalization technique inside SA. We introduce a class of Geometry-aware Self-Attention that explicitly makes use of the relative geometry relationships and the content of objects to aid image understanding. By combining the two modules and applying them to the self-attention network, we establish a new state-of-the-art on the MS-COCO image captioning benchmark. Further experiments on video captioning, machine translation, and visual question answering tasks demonstrate the generality of our methods.
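A rough sketch of the geometric side of GSA is shown below: pairwise relative-geometry features are computed from the detected boxes and turned into a bias that is added to the content-based attention logits before the softmax. The specific feature encoding, the scalar projection, and the tensor shapes are assumptions for illustration; the paper's geometric bias additionally depends on the query/key content, which is omitted here for brevity.

```python
import torch

def relative_geometry_features(boxes):
    """boxes: (N, 4) tensor of (x, y, w, h) per detected object.
    Returns (N, N, 4) log-scaled relative offsets and size ratios."""
    x, y, w, h = boxes.unbind(-1)
    dx = torch.log(torch.abs(x.unsqueeze(1) - x.unsqueeze(0)) / w.unsqueeze(1) + 1e-3)
    dy = torch.log(torch.abs(y.unsqueeze(1) - y.unsqueeze(0)) / h.unsqueeze(1) + 1e-3)
    dw = torch.log(w.unsqueeze(1) / w.unsqueeze(0))
    dh = torch.log(h.unsqueeze(1) / h.unsqueeze(0))
    return torch.stack([dx, dy, dw, dh], dim=-1)

def geometric_bias(geom_feats, weight):
    """Project the 4-d pairwise features to one scalar bias per object pair
    with a learned weight vector of shape (4,); real models typically use an
    embedding plus a per-head projection instead."""
    return torch.relu(geom_feats @ weight)                 # (N, N)

def attention_with_geometric_bias(content_logits, geom_bias):
    """Combine content-based logits (N, N) with the geometric bias (N, N)
    before the softmax, as in the GSA formulation sketched above."""
    return torch.softmax(content_logits + geom_bias, dim=-1)
```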
588
Was the degree of offensiveness taken as how generally offensive the text was, or how personally offensive it was to the annotator?
Personal thought of the annotator.
Semantic role labeling (SRL) is to recognize the predicate-argument structure of a sentence, including subtasks of predicate disambiguation and argument labeling. Previous studies usually formulate the entire SRL problem into two or more subtasks. For the first time, this paper introduces an end-to-end neural model which unifiedly tackles the predicate disambiguation and the argument labeling in one shot. Using a biaffine scorer, our model directly predicts all semantic role labels for all given word pairs in the sentence without relying on any syntactic parse information. Specifically, we augment the BiLSTM encoder with a non-linear transformation to further distinguish the predicate and the argument in a given sentence, and model the semantic role labeling process as a word pair classification task by employing the biaffine attentional mechanism. Though the proposed model is syntax-agnostic with local decoder, it outperforms the state-of-the-art syntax-aware SRL systems on the CoNLL-2008, 2009 benchmarks for both English and Chinese. To our best knowledge, we report the first syntax-agnostic SRL model that surpasses all known syntax-aware models.
Semantic role labeling (SRL) is a shallow semantic parsing task dedicated to identifying the semantic arguments of a predicate and labeling them with their semantic roles. SRL is considered one of the core tasks in natural language processing (NLP) and has been successfully applied to various downstream tasks, such as information extraction BIBREF0, question answering BIBREF1, BIBREF2, and machine translation BIBREF3, BIBREF4. Typically, the SRL task falls into two categories: constituent-based (i.e., phrase or span) SRL and dependency-based SRL. This paper focuses on the latter, popularized by the CoNLL-2008 and 2009 shared tasks BIBREF5, BIBREF6. Most conventional SRL systems relied on sophisticated handcrafted features or declarative constraints BIBREF7, BIBREF8, which suffer from poor efficiency and generalization ability. A recent tendency in SRL is to adopt neural network methods, owing to their significant success in a wide range of applications BIBREF9, BIBREF10. However, most of those works still heavily resort to syntactic features. Since the syntactic parsing task is as hard as SRL and comes with its own errors, it is better to get rid of such a prerequisite, as in other NLP tasks. Accordingly, marcheggiani2017 presented a neural model that puts syntax aside for dependency-based SRL and obtained favorable results, which challenges the long-held belief that syntax is indispensable for the SRL task BIBREF11. Besides, the SRL task is generally formulated as multi-step classification subtasks in pipeline systems, consisting of predicate identification, predicate disambiguation, argument identification, and argument classification. Most previous SRL approaches adopt a pipeline framework to handle these subtasks one after another. Only recently have some works BIBREF12, BIBREF13 introduced end-to-end models for span-based SRL, which motivates us to explore an integrative model for dependency SRL. In this work, we propose a syntax-agnostic end-to-end system, dealing with predicate disambiguation and argument labeling in one model, unlike previous systems that treat predicate disambiguation as a subtask and handle it separately. In detail, our model contains (1) a deep BiLSTM encoder, which is able to distinguish the predicates and arguments by mapping them into two different vector spaces, and (2) a biaffine attentional BIBREF14 scorer, which jointly predicts the semantic role for each argument and the sense for the predicate. We experimentally show that though our biaffine attentional model remains simple and does not rely on any syntactic feature, it achieves the best result on the benchmark for both Chinese and English, even compared to syntax-aware systems. In summary, our major contributions are as follows:
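As a concrete illustration of the biaffine scoring component described above, the sketch below maps BiLSTM states into separate predicate and argument spaces with two small MLPs and scores every (predicate word, argument word, label) triple with a biaffine product. Dimensions, the appended bias term, and the initialization are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Sketch of a biaffine attentional scorer over word pairs."""
    def __init__(self, hid_dim=400, role_dim=300, n_labels=40):
        super().__init__()
        self.mlp_pred = nn.Sequential(nn.Linear(hid_dim, role_dim), nn.ReLU())
        self.mlp_arg = nn.Sequential(nn.Linear(hid_dim, role_dim), nn.ReLU())
        self.U = nn.Parameter(torch.zeros(n_labels, role_dim + 1, role_dim + 1))
        nn.init.xavier_uniform_(self.U)

    def forward(self, states):                  # states: (B, T, hid_dim)
        p = self.mlp_pred(states)               # predicate space: (B, T, role_dim)
        a = self.mlp_arg(states)                # argument space:  (B, T, role_dim)
        ones = states.new_ones(states.size(0), states.size(1), 1)
        p = torch.cat([p, ones], dim=-1)        # append a bias dimension
        a = torch.cat([a, ones], dim=-1)
        # scores[b, l, i, j]: label l for predicate word i and argument word j
        return torch.einsum("bid,ldk,bjk->blij", p, self.U, a)
```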
589
Which embeddings do they detect biases in?
Word embeddings trained on GoogleNews and Word embeddings trained on Reddit dataset
Some users of social media are spreading racist, sexist, and otherwise hateful content. For the purpose of training a hate speech detection system, the reliability of the annotations is crucial, but there is no universally agreed-upon definition. We collected potentially hateful messages and asked two groups of internet users to determine whether they were hate speech or not, whether they should be banned or not and to rate their degree of offensiveness. One of the groups was shown a definition prior to completing the survey. We aimed to assess whether hate speech can be annotated reliably, and the extent to which existing definitions are in accordance with subjective ratings. Our results indicate that showing users a definition caused them to partially align their own opinion with the definition but did not improve reliability, which was very low overall. We conclude that the presence of hate speech should perhaps not be considered a binary yes-or-no decision, and raters need more detailed instructions for the annotation.
Social media are sometimes used to disseminate hateful messages. In Europe, the current surge in hate speech has been linked to the ongoing refugee crisis. Lawmakers and social media sites are increasingly aware of the problem and are developing approaches to deal with it, for example promising to remove illegal messages within 24 hours after they are reported BIBREF0 . This raises the question of how hate speech can be detected automatically. Such an automatic detection method could be used to scan the large amount of text generated on the internet for hateful content and report it to the relevant authorities. It would also make it easier for researchers to examine the diffusion of hateful content through social media on a large scale. From a natural language processing perspective, hate speech detection can be considered a classification task: given an utterance, determine whether or not it contains hate speech. Training a classifier requires a large amount of data that is unambiguously hate speech. This data is typically obtained by manually annotating a set of texts based on whether a certain element contains hate speech. The reliability of the human annotations is essential, both to ensure that the algorithm can accurately learn the characteristics of hate speech, and as an upper bound on the expected performance BIBREF1 , BIBREF2 . As a preliminary step, six annotators rated 469 tweets. We found that agreement was very low (see Section 3). We then carried out group discussions to find possible reasons. They revealed that there is considerable ambiguity in existing definitions. A given statement may be considered hate speech or not depending on someone's cultural background and personal sensibilities. The wording of the question may also play a role. We decided to investigate the issue of reliability further by conducting a more comprehensive study across a large number of annotators, which we present in this paper. Our contribution in this paper is threefold: