diff --git "a/SQuAD_qasper-validation_processed.json" "b/SQuAD_qasper-validation_processed.json" new file mode 100644--- /dev/null +++ "b/SQuAD_qasper-validation_processed.json" @@ -0,0 +1 @@ +{"data": [{"answers": ["", ""], "context": "Although Neural Machine Translation (NMT) has dominated recent research on translation tasks BIBREF0, BIBREF1, BIBREF2, NMT heavily relies on large-scale parallel data, resulting in poor performance on low-resource or zero-resource language pairs BIBREF3. Translation between these low-resource languages (e.g., Arabic$\\rightarrow $Spanish) is usually accomplished with pivoting through a rich-resource language (such as English), i.e., Arabic (source) sentence is translated to English (pivot) first which is later translated to Spanish (target) BIBREF4, BIBREF5. However, the pivot-based method requires doubled decoding time and suffers from the propagation of translation errors.", "id": 0, "question": "which multilingual approaches do they compare with?", "title": "Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation"}, {"answers": ["", ""], "context": "In recent years, zero-shot translation in NMT has attracted widespread attention in academic research. Existing methods are mainly divided into four categories: pivot-based method, transfer learning, multilingual NMT, and unsupervised NMT.", "id": 1, "question": "what are the pivot-based baselines?", "title": "Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation"}, {"answers": ["", ""], "context": "In this section, we will present a cross-lingual pre-training based transfer approach. This method is designed for a common zero-shot scenario where there are a lot of source$\\leftrightarrow $pivot and pivot$\\leftrightarrow $target bilingual data but no source$\\leftrightarrow $target parallel data, and the whole training process can be summarized as follows step by step:", "id": 2, "question": "which datasets did they experiment with?", "title": "Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation"}, {"answers": ["De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru", ""], "context": "Two existing cross-lingual pre-training methods, Masked Language Modeling (MLM) and Translation Language Modeling (TLM), have shown their effectiveness on XNLI cross-lingual classification task BIBREF11, BIBREF28, but these methods have not been well studied on cross-lingual generation tasks in zero-shot condition. We attempt to take advantage of the cross-lingual ability of the two methods for zero-shot translation.", "id": 3, "question": "what language pairs are explored?", "title": "Cross-lingual Pre-training Based Transfer for Zero-shot Neural Machine Translation"}, {"answers": ["", ""], "context": "Named entity recognition is an important task of natural language processing, featuring in many popular text processing toolkits. This area of natural language processing has been actively studied in the latest decades and the advent of deep learning reinvigorated the research on more effective and accurate models. However, most of existing approaches require large annotated corpora. 
To the best of our knowledge, no such work has been done for the Armenian language, and in this work we address several problems, including the creation of a corpus for training machine learning models, the development of gold-standard test corpus and evaluation of the effectiveness of established approaches for the Armenian language.", "id": 4, "question": "what ner models were evaluated?", "title": "pioNER: Datasets and Baselines for Armenian Named Entity Recognition"}, {"answers": ["", ""], "context": "We used Sysoev and Andrianov's modification of the Nothman et al. approach to automatically generate data for training a named entity recognizer. This approach uses links between Wikipedia articles to generate sequences of named-entity annotated tokens.", "id": 5, "question": "what is the source of the news sentences?", "title": "pioNER: Datasets and Baselines for Armenian Named Entity Recognition"}, {"answers": ["", ""], "context": "The main steps of the dataset extraction system are described in Figure FIGREF3 .", "id": 6, "question": "did they use a crowdsourcing platform for manual annotations?", "title": "pioNER: Datasets and Baselines for Armenian Named Entity Recognition"}, {"answers": ["", "training data has posts from politics, business, science and other popular topics; the trained model is applied to millions of unannotated posts on all of Reddit"], "context": "\u201cI'm supposed to trust the opinion of a MS minion? The people that produced Windows ME, Vista and 8? They don't even understand people, yet they think they can predict the behavior of new, self-guiding AI?\u201d \u2013anonymous", "id": 7, "question": "what are the topics pulled from Reddit?", "title": "Identifying Dogmatism in Social Media: Signals and Models"}, {"answers": ["", ""], "context": "Posts on Reddit capture debate and discussion across a diverse set of topics, making them a natural starting point for untangling domain-independent linguistic features of dogmatism.", "id": 8, "question": "What predictive model do they build?", "title": "Identifying Dogmatism in Social Media: Signals and Models"}, {"answers": ["F1 scores of 85.99 on the DL-PS data, 75.15 on the EC-MT data and 71.53 on the EC-UQ data ", "F1 of 85.99 on the DL-PS dataset (dialog domain); 75.15 on EC-MT and 71.53 on EC-UQ (e-commerce domain)"], "context": "There has been significant progress on Named Entity Recognition (NER) in recent years using models based on machine learning algorithms BIBREF0 , BIBREF1 , BIBREF2 . As with other Natural Language Processing (NLP) tasks, building NER systems typically requires a massive amount of labeled training data which are annotated by experts. In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. 
For such new types of entities, however, it is very hard to find experts to annotate the data within short time limits and hiring experts is costly and non-scalable, both in terms of time and money.", "id": 9, "question": "What accuracy does the proposed system achieve?", "title": "Adversarial Learning for Chinese NER from Crowd Annotations"}, {"answers": ["", "They did not use any platform, instead they hired undergraduate students to do the annotation."], "context": "Our work is related to three lines of research: Sequence labeling, Adversarial training, and Crowdsourcing.", "id": 10, "question": "What crowdsourcing platform is used?", "title": "Adversarial Learning for Chinese NER from Crowd Annotations"}, {"answers": ["", ""], "context": "Deep Learning approaches have achieved impressive results on various NLP tasks BIBREF0 , BIBREF1 , BIBREF2 and have become the de facto approach for any NLP task. However, these deep learning techniques have found to be less effective for low-resource languages when the available training data is very less BIBREF3 . Recently, several approaches like Multi-task learning BIBREF4 , multilingual learning BIBREF5 , semi-supervised learning BIBREF2 , BIBREF6 and transfer learning BIBREF7 , BIBREF3 have been explored by the deep learning community to overcome data sparsity in low-resource languages. Transfer learning trains a model for a parent task and fine-tunes the learned parent model weights (features) for a related child task BIBREF7 , BIBREF8 . This effectively reduces the requirement on training data for the child task as the model would have learned relevant features from the parent task data thereby, improving the performance on the child task.", "id": 11, "question": "How do they match words before reordering them?", "title": "Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages"}, {"answers": ["5", ""], "context": " BIBREF3 explored transfer learning for NMT on low-resource languages. They studied the influence of language divergence between languages chosen for training the parent and child model, and showed that choosing similar languages for training the parent and child model leads to better improvements from transfer learning. A limitation of BIBREF3 approach is that they ignore the lexical similarity between languages and also the source language embeddings are randomly initialized. BIBREF10 , BIBREF11 , BIBREF12 take advantage of lexical similarity between languages in their work. BIBREF10 proposed to use Byte-Pair Encoding (BPE) to represent the sentences in both the parent and the child language to overcome the above limitation. They show using BPE benefits transfer learning especially when the involved languages are closely-related agglutinative languages. Similarly, BIBREF11 utilize lexical similarity between the source and assisting languages by training a character-level NMT system. BIBREF12 address lexical divergence by using bilingual embeddings and mixture of universal token embeddings. One of the languages' vocabulary, usually English vocabulary is considered as universal tokens and every word in the other languages is represented as a mixture of universal tokens. 
They show results on extremely low-resource languages.", "id": 12, "question": "On how many language pairs do they show that preordering assisting language sentences helps translation quality?", "title": "Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages"}, {"answers": ["", ""], "context": "To the best of our knowledge, no work has addressed word order divergence in transfer learning for multilingual NMT. However, some work exists for other NLP tasks that could potentially address word order. For Named Entity Recognition (NER), BIBREF14 use a self-attention layer after the Bi-LSTM layer to address word-order divergence for Named Entity Recognition (NER) task. The approach does not show any significant improvements over multiple languages. A possible reason is that the divergence has to be addressed before/during construction of the contextual embeddings in the Bi-LSTM layer, and the subsequent self-attention layer does not address word-order divergence. BIBREF15 use adversarial training for cross-lingual question-question similarity ranking in community question answering. The adversarial training tries to force the encoder representations of similar sentences from different input languages to have similar representations.", "id": 13, "question": "Which dataset(s) do they experiment with?", "title": "Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages"}, {"answers": ["", "paragraph, lines, textspan element (paragraph segmentation, line segmentation, Information on physical page segmentation(for PDF only))"], "context": "Simplified language is a variety of standard language characterized by reduced lexical and syntactic complexity, the addition of explanations for difficult concepts, and clearly structured layout. Among the target groups of simplified language commonly mentioned are persons with cognitive impairment or learning disabilities, prelingually deaf persons, functionally illiterate persons, and foreign language learners BIBREF0.", "id": 14, "question": "Which information about text structure is included in the corpus?", "title": "A Corpus for Automatic Readability Assessment and Text Simplification of German"}, {"answers": ["", ""], "context": "A number of corpora for use in automatic readability assessment and automatic text simplification exist. The most well-known example is the Parallel Wikipedia Simplification Corpus (PWKP) compiled from parallel articles of the English Wikipedia and Simple English Wikipedia BIBREF13 and consisting of around 108,000 sentence pairs. The corpus profile is shown in Table TABREF2. While the corpus represents the largest dataset involving simplified language to date, its application has been criticized for various reasons BIBREF15, BIBREF14, BIBREF16; among these, the fact that Simple English Wikipedia articles are not necessarily direct translations of articles from the English Wikipedia stands out. hwang-et-al-2015 provided an updated version of the corpus that includes a total of 280,000 full and partial matches between the two Wikipedia versions. Another frequently used data collection for English is the Newsela Corpus BIBREF14 consisting of 1,130 news articles, each simplified into four school grade levels by professional editors. Table TABREF3 shows the profile of the Newsela Corpus. 
The table obviates that the difference in vocabulary size between the English and the simplified English side of the PWKP Corpus amounts to only 18%, while the corresponding number for the English side and the level representing the highest amount of simplification in the Newsela Corpus (Simple-4) is 50.8%. Vocabulary size as an indicator of lexical richness is generally taken to correlate positively with complexity BIBREF17.", "id": 15, "question": "Which information about typography is included in the corpus?", "title": "A Corpus for Automatic Readability Assessment and Text Simplification of German"}, {"answers": ["", ""], "context": "Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to.", "id": 16, "question": "On which benchmarks they achieve the state of the art?", "title": "Improved Neural Relation Detection for Knowledge Base Question Answering"}, {"answers": ["", ""], "context": "Previous research BIBREF4 , BIBREF20 formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work.", "id": 17, "question": "What they use in their propsoed framework?", "title": "Improved Neural Relation Detection for Knowledge Base Question Answering"}, {"answers": ["", ""], "context": "This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we deal with three problems as follows on learning question/relation representations.", "id": 18, "question": "What does KBQA abbreviate for", "title": "Improved Neural Relation Detection for Knowledge Base Question Answering"}, {"answers": ["", ""], "context": "We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\\mathbf {r}=\\lbrace r^{word}_1,\\cdots ,r^{word}_{M_1}\\rbrace \\cup \\lbrace r^{rel}_1,\\cdots ,r^{rel}_{M_2}\\rbrace $ , where the first $M_1$ tokens are words (e.g. {episode, written}), and the last $M_2$ tokens are relation names, e.g., {episode_written} or {starring_roles, series} (when the target is a chain like in Figure 1 (b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\\mathbf {B}^{word}_{1:M_1}:\\mathbf {B}^{rel}_{1:M_2}]$ (each row vector $\\mathbf {\\beta }_i$ is the concatenation between forward/backward representations at $i$ ). 
We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply one max-pooling on these two sets of vectors and get the final relation representation $\\mathbf {h}^r$ .", "id": 19, "question": "What is te core component for KBQA?", "title": "Improved Neural Relation Detection for Knowledge Base Question Answering"}, {"answers": ["They measure self-similarity, intra-sentence similarity and maximum explainable variance of the embeddings in the upper layers.", "They plot the average cosine similarity between uniformly random words increases exponentially from layers 8 through 12. \nThey plot the average self-similarity of uniformly randomly sampled words in each layer of BERT, ELMo, and GPT-2 and shown that the higher layer produces more context-specific embeddings.\nThey plot that word representations in a sentence become more context-specific in upper layers, they drift away from one another."], "context": "The application of deep learning methods to NLP is made possible by representing words as vectors in a low-dimensional continuous space. Traditionally, these word embeddings were static: each word had a single vector, regardless of context BIBREF0, BIBREF1. This posed several problems, most notably that all senses of a polysemous word had to share the same representation. More recent work, namely deep neural language models such as ELMo BIBREF2 and BERT BIBREF3, have successfully created contextualized word representations, word vectors that are sensitive to the context in which they appear. Replacing static embeddings with contextualized representations has yielded significant improvements on a diverse array of NLP tasks, ranging from question-answering to coreference resolution.", "id": 20, "question": "What experiments are proposed to test that upper layers produce context-specific embeddings?", "title": "How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings"}, {"answers": ["They use the first principal component of a word's contextualized representation in a given layer as its static embedding.", ""], "context": "Skip-gram with negative sampling (SGNS) BIBREF0 and GloVe BIBREF1 are among the best known models for generating static word embeddings. Though they learn embeddings iteratively in practice, it has been proven that in theory, they both implicitly factorize a word-context matrix containing a co-occurrence statistic BIBREF7, BIBREF8. Because they create a single representation for each word, a notable problem with static word embeddings is that all senses of a polysemous word must share a single vector.", "id": 21, "question": "How do they calculate a static embedding for each word?", "title": "How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings"}, {"answers": ["F1 scores are:\nHUBES-PHI: Detection(0.965), Classification relaxed (0.95), Classification strict (0.937)\nMedoccan: Detection(0.972), Classification (0.967)", ""], "context": "During the first two decades of the 21st century, the sharing and processing of vast amounts of data has become pervasive. This expansion of data sharing and processing capabilities is both a blessing and a curse. Data helps build better information systems for the digital era and enables further research for advanced data management that benefits the society in general. 
But the use of this very data containing sensitive information conflicts with private data protection, both from an ethical and a legal perspective.", "id": 22, "question": "What is the performance of BERT on the task?", "title": "Sensitive Data Detection and Classification in Spanish Clinical Text: Experiments with BERT"}, {"answers": ["", ""], "context": "The state of the art in the field of Natural Language Processing (NLP) has reached an important milestone in the last couple of years thanks to deep-learning architectures, increasing in several points the performance of new models for almost any text processing task.", "id": 23, "question": "What are the other algorithms tested?", "title": "Sensitive Data Detection and Classification in Spanish Clinical Text: Experiments with BERT"}, {"answers": ["", ""], "context": "The aim of this paper is to evaluate BERT's multilingual model and compare it to other established machine-learning algorithms in a specific task: sensitive data detection and classification in Spanish clinical free text. This section describes the data involved in the experiments and the systems evaluated. Finally, we introduce the experimental setup.", "id": 24, "question": "Does BERT reach the best performance among all the algorithms compared?", "title": "Sensitive Data Detection and Classification in Spanish Clinical Text: Experiments with BERT"}, {"answers": ["", ""], "context": "Two datasets are exploited in this article. Both datasets consist of plain text containing clinical narrative written in Spanish, and their respective manual annotations of sensitive information in BRAT BIBREF13 standoff format. In order to feed the data to the different algorithms presented in Section SECREF7, these datasets were transformed to comply with the commonly used BIO sequence representation scheme BIBREF14.", "id": 25, "question": "What are the clinical datasets used in the paper?", "title": "Sensitive Data Detection and Classification in Spanish Clinical Text: Experiments with BERT"}, {"answers": ["Using file size on disk", ""], "context": "Accurate grapheme-to-phoneme conversion (g2p) is important for any application that depends on the sometimes inconsistent relationship between spoken and written language. Most prominently, this includes text-to-speech and automatic speech recognition. Most work on g2p has focused on a few languages for which extensive pronunciation data is available BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Most languages lack these resources. However, a low resource language's writing system is likely to be similar to the writing systems of languages that do have sufficient pronunciation data. Therefore g2p may be possible for low resource languages if this high resource data can be properly utilized.", "id": 26, "question": "how is model compactness measured?", "title": "Massively Multilingual Neural Grapheme-to-Phoneme Conversion"}, {"answers": ["", ""], "context": "Our approach is similar in goal to deri2016grapheme's model for adapting high resource g2p models for low resource languages. They trained weighted finite state transducer (wFST) models on a variety of high resource languages, then transferred those models to low resource languages, using a language distance metric to choose which high resource models to use and a phoneme distance metric to map the high resource language's phonemes to the low resource language's phoneme inventory. 
These distance metrics are computed based on data from Phoible BIBREF4 and URIEL BIBREF5 .", "id": 27, "question": "what was the baseline?", "title": "Massively Multilingual Neural Grapheme-to-Phoneme Conversion"}, {"answers": ["", ""], "context": "In recent years, neural networks have emerged as a common way to use data from several languages in a single system. Google's zero-shot neural machine translation system BIBREF7 shares an encoder and decoder across all language pairs. In order to facilitate this multi-way translation, they prepend an artificial token to the beginning of each source sentence at both training and translation time. The token identifies what language the sentence should be translated to. This approach has three benefits: it is far more efficient than building a separate model for each language pair; it allows for translation between languages that share no parallel data; and it improves results on low-resource languages by allowing them to implicitly share parameters with high-resource languages. Our g2p system is inspired by this approach, although it differs in that there is only one target \u201clanguage\u201d, IPA, and the artificial tokens identify the language of the source instead of the language of the target.", "id": 28, "question": "what evaluation metrics were used?", "title": "Massively Multilingual Neural Grapheme-to-Phoneme Conversion"}, {"answers": ["", ""], "context": "g2p is the problem of converting the orthographic representation of a word into a phonemic representation. A phoneme is an abstract unit of sound which may have different realizations in different contexts. For example, the English phoneme has two phonetic realizations (or allophones):", "id": 29, "question": "what datasets did they use?", "title": "Massively Multilingual Neural Grapheme-to-Phoneme Conversion"}, {"answers": ["", ""], "context": "Recently, with the emergence of neural seq2seq models, abstractive summarization methods have seen great performance strides BIBREF0, BIBREF1, BIBREF2. However, complex neural summarization models with thousands of parameters usually require a large amount of training data. In fact, much of the neural summarization work has been trained and tested in news domains where numerous large datasets exist. For example, the CNN/DailyMail (CNN/DM) BIBREF3, BIBREF4 and New York Times (NYT) datasets are in the magnitude of 300k and 700k documents, respectively. In contrast, in other domains such as student reflections, summarization datasets are only in the magnitude of tens or hundreds of documents (e.g., BIBREF5). We hypothesize that training complex neural abstractive summarization models in such domains will not yield good performing models, and we will indeed later show that this is the case for student reflections.", "id": 30, "question": "What is the interannotator agreement for the human evaluation?", "title": "Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis"}, {"answers": ["", "20 annotatos from author's institution"], "context": "Abstractive Summarization. Abstractive summarization aims to generate coherent summaries with high readability, and has seen increasing interest and improved performance due to the emergence of seq2seq models BIBREF8 and attention mechanisms BIBREF9. 
For example, BIBREF0, BIBREF2, and BIBREF1, in addition to using an encoder-decoder model with attention, used pointer networks to solve the out-of-vocabulary issue, while BIBREF0 used a coverage mechanism to solve the problem of word repetition. In addition, BIBREF2 and BIBREF10 used reinforcement learning in an end-to-end setting.", "id": 31, "question": "Who were the human evaluators used?", "title": "Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis"}, {"answers": ["", ""], "context": "Student reflections are comments provided by students in response to a set of instructor prompts. The prompts are directed towards gathering students' feedback on course material. Student reflections are collected directly following each of a set of classroom lectures over a semester. In this paper, the set of reflections for each prompt in each lecture is considered a student reflection document. The objective of our work is to provide a comprehensive and meaningful abstractive summary of each student reflection document. Our dataset consists of documents and summaries from four course instantiations: ENGR (Introduction to Materials Science and Engineering), Stat2015 and Stat2016 (Statistics for Industrial Engineers, taught in 2015 and 2016, respectively), and CS (Data Structures in Computer Science). All reflections were collected in response to two pedagogically-motivated prompts BIBREF16: \u201cPoint of Interest (POI): Describe what you found most interesting in today's class\u201d and \u201cMuddiest Point (MP): Describe what was confusing or needed more detail.\u201d", "id": 32, "question": "Is the template-based model realistic? ", "title": "Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis"}, {"answers": ["", ""], "context": "To overcome the size issue of the student reflection dataset, we first explore the effect of incorporating domain transfer into a recent abstractive summarization model: pointer networks with coverage mechanism (PG-net)BIBREF0. To experiment with domain transfer, the model was pretrained using the CNN/DM dataset, then fine tuned using the student reflection dataset (see the Experiments section). A second approach we explore to overcome the lack of reflection data is data synthesis. We first propose a template model for synthesizing new data, then investigate the performance impact of using this data when training the summarization model. The proposed model makes use of the nature of datasets such as ours, where the reference summaries tend to be close in structure: humans try to find the major points that students raise, then present the points in a way that marks their relative importance (recall the CS example in Table TABREF4). Our third explored approach is to combine domain transfer with data synthesis.", "id": 33, "question": "Is the student reflection data very different from the newspaper data? ", "title": "Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis"}, {"answers": ["", ""], "context": "Our motivation for using templates for data synthesis is that seq2seq synthesis models (as discussed in related work) tend to generate irrelevant and repeated words BIBREF17, while templates can produce more coherent and concise output. 
Also, extracting templates can be done either manually or automatically typically by training a few parameters or even doing no training, then external information in the form of keywords or snippets can be populated into the templates with the help of more sophisticated models. Accordingly, using templates can be very tempting for domains with limited resources such as ours.", "id": 34, "question": "What is the recent abstractive summarization method in this paper?", "title": "Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis"}, {"answers": ["", ""], "context": "Recently, contextual-aware language models such as ELMo BIBREF0, GPT BIBREF1, BERT BIBREF2 and XLNet BIBREF3 have shown to greatly outperform traditional word embedding models including Word2Vec BIBREF4 and GloVe BIBREF5 in a variety of NLP tasks. These pre-trained language models, when fine-tuned on downstream language understanding tasks such as sentiment classification BIBREF6, natural language inference BIBREF7 and reading comprehension BIBREF8, BIBREF9, have achieved state-of-the-art performance. However, the large number of parameters in these models, often above hundreds of millions, makes it impossible to host them on resource-constrained tasks such as doing real-time inference on mobile and edge devices.", "id": 35, "question": "Why are prior knowledge distillation techniques models are ineffective in producing student models with vocabularies different from the original teacher models? ", "title": "Extreme Language Model Compression with Optimal Subwords and Shared Projections"}, {"answers": ["", ""], "context": "Research in neural network model compression has been concomitant with the rise in popularity of neural networks themselves, since these models have often been memory-intensive for the hardware of their time. Work in model compression for NLP applications falls broadly into four categories: matrix approximation, parameter pruning/sharing, weight quantization and knowledge distillation.", "id": 36, "question": "What state-of-the-art compression techniques were used in the comparison?", "title": "Extreme Language Model Compression with Optimal Subwords and Shared Projections"}, {"answers": ["", ""], "context": "The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K):", "id": 37, "question": "What evaluations methods do they take?", "title": "Stereotyping and Bias in the Flickr30K Dataset"}, {"answers": ["", ""], "context": "Stereotypes are ideas about how other (groups of) people commonly behave and what they are likely to do. These ideas guide the way we talk about the world. I distinguish two kinds of verbal behavior that result from stereotypes: (i) linguistic bias, and (ii) unwarranted inferences. The former is discussed in more detail by beukeboom2014mechanisms, who defines linguistic bias as \u201ca systematic asymmetry in word choice as a function of the social category to which the target belongs.\u201d So this bias becomes visible through the distribution of terms used to describe entities in a particular category. 
Unwarranted inferences are the result of speculation about the image; here, the annotator goes beyond what can be glanced from the image and makes use of their knowledge and expectations about the world to provide an overly specific description. Such descriptions are directly identifiable as such, and in fact we have already seen four of them (descriptions 2\u20135) discussed earlier.", "id": 38, "question": "What is the size of the dataset?", "title": "Stereotyping and Bias in the Flickr30K Dataset"}, {"answers": ["", "Looking for adjectives marking the noun \"baby\" and also looking for most-common adjectives related to certain nouns using POS-tagging"], "context": "Generally speaking, people tend to use more concrete or specific language when they have to describe a person that does not meet their expectations. beukeboom2014mechanisms lists several linguistic `tools' that people use to mark individuals who deviate from the norm. I will mention two of them.", "id": 39, "question": "Which methods are considered to find examples of biases and unwarranted inferences??", "title": "Stereotyping and Bias in the Flickr30K Dataset"}, {"answers": ["Ethnic bias", ""], "context": "Unwarranted inferences are statements about the subject(s) of an image that go beyond what the visual data alone can tell us. They are based on additional assumptions about the world. After inspecting a subset of the Flickr30K data, I have grouped these inferences into six categories (image examples between parentheses):", "id": 40, "question": "What biases are found in the dataset?", "title": "Stereotyping and Bias in the Flickr30K Dataset"}, {"answers": ["", "Best: Expansion (Exp). Worst: Comparison (Comp)."], "context": "PDTB-style discourse relations, mostly defined between two adjacent text spans (i.e., discourse units, either clauses or sentences), specify how two discourse units are logically connected (e.g., causal, contrast). Recognizing discourse relations is one crucial step in discourse analysis and can be beneficial for many downstream NLP applications such as information extraction, machine translation and natural language generation.", "id": 41, "question": "What discourse relations does it work best/worst for?", "title": "Improving Implicit Discourse Relation Classification by Modeling Inter-dependencies of Discourse Units in a Paragraph"}, {"answers": ["", ""], "context": "Since the PDTB BIBREF7 corpus was created, a surge of studies BIBREF8 , BIBREF3 , BIBREF9 , BIBREF10 have been conducted for predicting discourse relations, primarily focusing on the challenging task of implicit discourse relation classification when no explicit discourse connective phrase was presented. Early studies BIBREF11 , BIBREF3 , BIBREF2 , BIBREF12 focused on extracting linguistic and semantic features from two discourse units. Recent research BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 tried to model compositional meanings of two discourse units by exploiting interactions between words in two units with more and more complicated neural network models, including the ones using neural tensor BIBREF5 , BIBREF17 , BIBREF18 and attention mechanisms BIBREF6 , BIBREF19 , BIBREF20 . 
Another trend is to alleviate the shortage of annotated data by leveraging related external data, such as explicit discourse relations in PDTB BIBREF9 , BIBREF19 , BIBREF21 and unlabeled data obtained elsewhere BIBREF12 , BIBREF19 , often in a multi-task joint learning framework.", "id": 42, "question": "How much does this model improve state-of-the-art?", "title": "Improving Implicit Discourse Relation Classification by Modeling Inter-dependencies of Discourse Units in a Paragraph"}, {"answers": ["", ""], "context": "In recent years, there has been an increasing interest in Machine reading comprehension (MRC), which plays a vital role in the assessment of how well a machine could understand natural language. Several datasets BIBREF0 , BIBREF1 , BIBREF2 for machine reading comprehension have been released in recent years and have driven the evolution of powerful neural models. However, much of the research up to now has been dominated by answering questions that can be well solved solved using superficial information, yet struggles to do accurate natural language understanding and reasoning. For example, BIBREF3 jia2017Adversarial show that existing machine learning systems for MRC perform poorly under adversarial evaluation. Recent developments in MRC datasets BIBREF4 , BIBREF5 , BIBREF6 have heightened the need for deep understanding.", "id": 43, "question": "Where is a question generation model used?", "title": "Knowledge Based Machine Reading Comprehension"}, {"answers": ["", ""], "context": "The recently introduced BERT model BIBREF0 exhibits strong performance on several language understanding benchmarks. To what extent does it capture syntax-sensitive structures?", "id": 44, "question": "Were any of these tasks evaluated in any previous work?", "title": "Assessing BERT's Syntactic Abilities"}, {"answers": ["", ""], "context": "Blogging gained momentum in 1999 and became especially popular after the launch of freely available, hosted platforms such as blogger.com or livejournal.com. Blogging has progressively been used by individuals to share news, ideas, and information, but it has also developed a mainstream role to the extent that it is being used by political consultants and news services as a tool for outreach and opinion forming as well as by businesses as a marketing tool to promote products and services BIBREF0 .", "id": 45, "question": "Do they build a model to automatically detect demographic, lingustic or psycological dimensons of people?", "title": "Stateology: State-Level Interactive Charting of Language, Feelings, and Values"}, {"answers": ["", ""], "context": "Our premise is that we can generate informative maps using geolocated information available on social media; therefore, we guide the blog collection process with the constraint that we only accept blogs that have specific location information. Moreover, we aim to find blogs belonging to writers from all 50 U.S. states, which will allow us to build U.S. 
maps for various dimensions of interest.", "id": 46, "question": "Which demographic dimensions of people do they obtain?", "title": "Stateology: State-Level Interactive Charting of Language, Feelings, and Values"}, {"answers": ["", ""], "context": "Our dataset provides mappings between location, profile information, and language use, which we can leverage to generate maps that reflect demographic, linguistic, and psycholinguistic properties of the population represented in the dataset.", "id": 47, "question": "How do they obtain psychological dimensions of people?", "title": "Stateology: State-Level Interactive Charting of Language, Feelings, and Values"}, {"answers": ["", ""], "context": "To appear in Proceedings of International Workshop on Health Intelligence (W3PHIAI) of the 34th AAAI Conference on Artificial Intelligence, 2020.", "id": 48, "question": "What is the baseline?", "title": "Medication Regimen Extraction From Clinical Conversations"}, {"answers": ["", ""], "context": "Our dataset consists of a total of 6,693 real doctor-patient conversations recorded in a clinical setting using distant microphones of varying quality. The recordings have an average duration of 9min 28s and have a verbatim transcript of 1,500 words on average (written by the experts). Both the audio and the transcript are de-identified (by removing the identifying information) with digital zeros and [de-identified] tags, respectively. The sentences in the transcript are grounded to the audio with the timestamps of its first and last word.", "id": 49, "question": "Is the data de-identified?", "title": "Medication Regimen Extraction From Clinical Conversations"}, {"answers": ["", ""], "context": "We frame the Medication Regimen extraction problem as a Question Answering (QA) task, which forms the basis for our first approach. It can also be considered as a specific inference or relation extract task, since we extract specific information about an entity (Medication Name), hence our second approach is at the intersection of Question Answering (QA) and Information Extraction (IE) domains. Both the approaches involve using a contiguous segment of the transcript and the Medication Name as input, to find/infer the medication's Dosage and Frequency. When testing the approaches mimicking real-world conditions, we extract Medication Name from the transcript separately using ontology, refer to SECREF19.", "id": 50, "question": "What embeddings are used?", "title": "Medication Regimen Extraction From Clinical Conversations"}, {"answers": ["", ""], "context": "Bidirectional Encoder Representations from Transformers (BERT) is a novel Transformer BIBREF0 model, which recently achieved state-of-the-art performance in several language understanding tasks, such as question answering, natural language inference, semantic similarity, sentiment analysis, and others BIBREF1. While well-suited to dealing with relatively short sequences, Transformers suffer from a major issue that hinders their applicability in classification of long sequences, i.e. 
they are able to consume only a limited context of symbols as their input BIBREF2.", "id": 51, "question": "What datasets did they use for evaluation?", "title": "Hierarchical Transformers for Long Document Classification"}, {"answers": ["", "The transformer layer"], "context": "Several dimensionality reduction algorithms such as RBM, autoencoders, subspace multinomial models (SMM) are used to obtain a low dimensional representation of documents from a simple BOW representation and then classify it using a simple linear classifiers BIBREF11, BIBREF12, BIBREF13, BIBREF4. In BIBREF14 hierarchical attention networks are used for document classification. They evaluate their model on several datasets with average number of words around 150. Character-level CNN are explored in BIBREF15 but it is prohibitive for very long documents. In BIBREF16, dataset collected from arXiv papers is used for classification. For classification, they sample random blocks of words and use them together for classification instead of using full document which may work well as arXiv papers are usually coherent and well written on a well defined topic. Their method may not work well on spoken conversations as random block of words usually do not represent topic of full conversation.", "id": 52, "question": "On top of BERT does the RNN layer work better or the transformer layer?", "title": "Hierarchical Transformers for Long Document Classification"}, {"answers": ["", "The crowdsourcing platform CrowdFlower was used to obtain natural dialog data that prompted the user to paraphrase, explain, and/or answer a question from a Simple questions BIBREF7 dataset. The CrowdFlower users were restricted to English-speaking countries to avoid dialogs with poor English."], "context": "Nowadays, dialog systems are usually designed for a single domain BIBREF0 . They store data in a well-defined format with a fixed number of attributes for entities that the system can provide. Because data in this format can be stored as a two-dimensional table within a relational database, we call the data flat. This data representation allows the system to query the database in a simple and efficient way. It also allows to keep the dialog state in the form of slots (which usually correspond to columns in the table) and track it through the dialog using probabilistic belief tracking BIBREF1 , BIBREF2 .", "id": 53, "question": "How was this data collected?", "title": "Data Collection for Interactive Learning through the Dialog"}, {"answers": ["4.49 turns", "4.5 turns per dialog (8533 turns / 1900 dialogs)"], "context": "From the point of view of dialog systems providing general information from a knowledge base, the most limiting factor is that a large portion of the questions is understood poorly.", "id": 54, "question": "What is the average length of dialog?", "title": "Data Collection for Interactive Learning through the Dialog"}, {"answers": ["", ""], "context": "Suppose a user wants to write a sentence \u201cI will be 10 minutes late.\u201d Ideally, she would type just a few keywords such as \u201c10 minutes late\u201d and an autocomplete system would be able to infer the intended sentence (Figure FIGREF1). Existing left-to-right autocomplete systems BIBREF0, BIBREF1 can often be inefficient, as the prefix of a sentence (e.g. \u201cI will be\u201d) fails to capture the core meaning of the sentence. 
Besides the practical goal of building a better autocomplete system, we are interested in exploring the tradeoffs inherent to such communication schemes between the efficiency of typing keywords, accuracy of reconstruction, and interpretability of keywords.", "id": 55, "question": "How are models evaluated in this human-machine communication game?", "title": "Learning Autocomplete Systems as a Communication Game"}, {"answers": ["", ""], "context": "Consider a communication game in which the goal is for a user to communicate a target sequence $x= (x_1, ..., x_m)$ to a system by passing a sequence of keywords $z= (z_1, ..., z_n)$. The user generates keywords $z$ using an encoding strategy $q_{\\alpha }(z\\mid x)$, and the system attempts to guess the target sequence $x$ via a decoding strategy $p_{\\beta }(x\\mid z)$.", "id": 56, "question": "How many participants were trying this communication game?", "title": "Learning Autocomplete Systems as a Communication Game"}, {"answers": ["", ""], "context": "To learn communication schemes without supervision, we model the cooperative communication between a user and system through an encoder-decoder framework. Concretely, we model the user's encoding strategy $q_{\\alpha }(z\\mid x)$ with an encoder which encodes the target sentence $x$ into the keywords $z$ by keeping a subset of the tokens. This stochastic encoder $q_{\\alpha }(z\\mid x)$ is defined by a model which returns the probability of each token retained in the final subsequence $z$. Then, we sample from Bernoulli distributions according to these probabilities to either keep or drop the tokens independently (see Appendix for an example).", "id": 57, "question": "What user variations have been tested?", "title": "Learning Autocomplete Systems as a Communication Game"}, {"answers": ["", ""], "context": "Our goal now is to learn encoder-decoder pairs which optimally balance the communication cost and reconstruction loss. The simplest approach to balancing efficiency and accuracy is to weight $\\mathrm {cost}(x, \\alpha )$ and $\\mathrm {loss}(x, \\alpha , \\beta )$ linearly using a weight $\\lambda $ as follows,", "id": 58, "question": "What are the baselines used?", "title": "Learning Autocomplete Systems as a Communication Game"}, {"answers": ["", ""], "context": "Robotic Process Automation (RPA) is a type of software bots that simulates hand-operated human activities like entering data into a system, registering into accounts, and accomplishing straightforward but repetitive workflows BIBREF0. However, one of the drawbacks of RPA-bots is their susceptibility to changes in defined scenarios: being designed for a particular task, the RPA-bot is usually not adaptable to other domains or even light modifications in a workflow BIBREF0. This inability to readjust to shifting conditions gave rise to Intelligent Process Automation (IPA) systems. IPA-bots combine RPA with Artificial Intelligence (AI) and thus are able to execute more cognitively demanding tasks that require i.a. reasoning and language understanding. Hence, IPA-bots advanced beyond automating shallow \u201cclick tasks\u201d and can perform jobs more intelligently \u2013 by means of machine learning algorithms. 
Such IPA-systems undertake time-consuming and routine tasks, and thus enable smart workflows and free up skilled workers to accomplish higher-value activities.", "id": 59, "question": "Do they use off-the-shelf NLP systems to build their assitant?", "title": "Multipurpose Intelligent Process Automation via Conversational Assistant"}, {"answers": ["It defined a sequence labeling task to extract custom entities from user input and label the next action (out of 13 custom actions defined).", ""], "context": "This paper addresses the challenge of implementing a dialogue system for IPA purposes within the practical e-learning domain with the initial absence of training data. Our contributions within this work are as follows:", "id": 60, "question": "How does the IPA label data after interacting with users?", "title": "Multipurpose Intelligent Process Automation via Conversational Assistant"}, {"answers": ["", ""], "context": "OMB+ is a German e-learning platform that assists students who are preparing for an engineering or computer science study at a university. The central purpose of the course is to support students in reviving their mathematical skills so that they can follow the upcoming university courses. The platform is thematically segmented into 13 sections and includes free mathematical classes with theoretical and practical content. Besides that, OMB+ provides a possibility to get assistance from a human tutor via a chat interface. Usually, the students and tutors interact in written form, and the language of communication is German. The current problem of the OMB+ platform is that the number of students grows every year, but to hire more qualified human tutors is challenging and expensive. This results in a more extended waiting period for students until their problems can be considered.", "id": 61, "question": "What kind of repetitive and time-consuming activities does their assistant handle?", "title": "Multipurpose Intelligent Process Automation via Conversational Assistant"}, {"answers": ["Through the All India Radio new channel where actors read news.", ""], "context": "The idea of language identification is to classify a given audio signal into a particular class using a classification algorithm. Commonly language identification task was done using i-vector systems [1]. A very well known approach for language identification proposed by N. Dahek et al. [1] uses the GMM-UBM model to obtain utterance level features called i-vectors. Recent advances in deep learning [15,16] have helped to improve the language identification task using many different neural network architectures which can be trained efficiently using GPUs for large scale datasets. These neural networks can be configured in various ways to obtain better accuracy for language identification task. Early work on using Deep learning for language Identification was published by Pavel Matejka et al. [2], where they used stacked bottleneck features extracted from deep neural networks for language identification task and showed that the bottleneck features learned by Deep neural networks are better than simple MFCC or PLP features. Later the work by I. Lopez-Moreno et al. [3] from Google showed how to use Deep neural networks to directly map the sequence of MFCC frames into its language class so that we can apply language identification at the frame level. Speech signals will have both spatial and temporal information, but simple DNNs are not able to capture temporal information. Work done by J. Gonzalez-Dominguez et al. 
[4] at Google developed an LSTM based language identification model which improves the accuracy over the DNN based models. Work done by Alicia et al. [5] used CNNs to improve upon i-vector [1] and other previously developed systems. The work done by Daniel Garcia-Romero et al. [6] has used a combination of an acoustic model trained for speech recognition with time-delay neural networks, where they train the TDNN model by feeding the stacked bottleneck features from the acoustic model to predict the language labels at the frame level. Recently, X-vectors [7] were proposed for the speaker identification task and shown to outperform all the previous state-of-the-art speaker identification algorithms; they are also used for language identification by David Snyder et al. [8].", "id": 62, "question": "How was the audio data gathered?", "title": "Identification of Indian Languages using Ghost-VLAD pooling"}, {"answers": ["", "An extension of NetVLAD which replaces hard assignment-based clustering with soft assignment-based clustering with the addition of using Ghost clusters to deal with noisy content."], "context": "In any language identification model, we want to obtain an utterance level representation which has very good language discriminative features. These representations should be compact and should be easily separable by a linear classifier. The idea of any pooling strategy is to pool the frame-level representations into a single utterance level representation. Previous works by [7] have used simple mean and standard deviation aggregation to pool the frame-level features from the top layer of the neural network to obtain the utterance level features. Recently [9] used a VLAD based pooling strategy for speaker identification, which is inspired by [10] proposed for face recognition. The NetVLAD [11] and Ghost-VLAD [10] methods are proposed for place recognition and face recognition, respectively, and in both cases, they try to aggregate the local descriptors into global features. In our case, the local descriptors are features extracted from ResNet [15], and the global utterance level feature is obtained by using GhostVLAD pooling. In this section, we explain different pooling methods, including NetVLAD, Ghost-VLAD, Statistic pooling, and Average pooling.", "id": 63, "question": "What is the GhostVLAD approach?", "title": "Identification of Indian Languages using Ghost-VLAD pooling"}, {"answers": ["Hindi, English, Kannada, Telugu, Assamese, Bengali and Malayalam", "Kannada, Hindi, Telugu, Malayalam, Bengali, English and Assamese (in table, missing in text)"], "context": "The NetVLAD pooling strategy was initially developed for place recognition by R. Arandjelovic et al. [11]. The NetVLAD is an extension of the VLAD [18] approach where they were able to replace the hard assignment based clustering with soft assignment based clustering so that it can be trained with a neural network in an end-to-end fashion. In our case, we use the NetVLAD layer to map N local features of dimension D into a fixed dimensional vector, as shown in Figure 1 (Left side).", "id": 64, "question": "Which 7 Indian languages do they experiment with?", "title": "Identification of Indian Languages using Ghost-VLAD pooling"}, {"answers": ["", ""], "context": "Data annotation is a major bottleneck for the application of supervised learning approaches to many problems. As a result, unsupervised methods that learn directly from unlabeled data are increasingly important. 
For tasks related to unsupervised syntactic analysis, discrete generative models have dominated in recent years \u2013 for example, for both part-of-speech (POS) induction BIBREF0 , BIBREF1 and unsupervised dependency parsing BIBREF2 , BIBREF3 , BIBREF4 . While similar models have had success on a range of unsupervised tasks, they have mostly ignored the apparent utility of continuous word representations evident from supervised NLP applications BIBREF5 , BIBREF6 . In this work, we focus on leveraging and explicitly representing continuous word embeddings within unsupervised models of syntactic structure.", "id": 65, "question": "What datasets do they evaluate on?", "title": "Unsupervised Learning of Syntactic Structure with Invertible Neural Projections"}, {"answers": ["", ""], "context": " As an illustrative example, we first present a baseline model for Markov syntactic structure (POS induction) that treats a sequence of pre-trained word embeddings as observations. Then, we propose our novel approach, again using Markov structure, that introduces latent word embedding variables and a neural projector. Lastly, we extend our approach to more general syntactic structures.", "id": 66, "question": "Do they evaluate only on English datasets?", "title": "Unsupervised Learning of Syntactic Structure with Invertible Neural Projections"}, {"answers": ["The neural projector must be invertible.", ""], "context": "We start by describing the Gaussian hidden Markov model introduced by BIBREF9 , which is a locally normalized model with multinomial transitions and Gaussian emissions. Given a sentence of length INLINEFORM0 , we denote the latent POS tags as INLINEFORM1 , observed (pre-trained) word embeddings as INLINEFORM2 , transition parameters as INLINEFORM3 , and Gaussian emission parameters as INLINEFORM4 . The joint distribution of data and latent variables factors as:", "id": 67, "question": "What is the invertibility condition?", "title": "Unsupervised Learning of Syntactic Structure with Invertible Neural Projections"}, {"answers": ["", ""], "context": "Modelling the relationship between sequences is extremely significant in most retrieval or classification problems involving two sequences. Traditionally, in Siamese networks, Hadamard product or concatenation have been used to fuse two vector representations of two input sequences to form a final representation for tasks like semantic similarity, passage retrieval. This representation, subsequently, has been used to compute similarity scores which has been used in a variety of training objectives like margin loss for ranking or cross-entropy error in classification.", "id": 68, "question": "Do they show on which examples how conflict works better than attention?", "title": "Conflict as an Inverse of Attention in Sequence Relationship"}, {"answers": ["GRU-based encoder, interaction block, and classifier consisting of stacked fully-connected layers.", ""], "context": "Bahdanau et al. BIBREF2 introduced attention first in neural machine translation. It used a feed-forward network over addition of encoder and decoder states to compute alignment score. Our work is very similar to this except we use element wise difference instead of addition to build our conflict function. BIBREF3 came up with a scaled dot-product attention in their Transformer model which is fast and memory-efficient. Due to the scaling factor, it didn't have the issue of gradients zeroing out. 
On the other hand, BIBREF4 has experimented with global and local attention based on the how many hidden states the attention function takes into account. Their experiments have revolved around three attention functions - dot, concat and general. Their findings include that dot product works best for global attention. Our work also belongs to the global attention family as we consider all the hidden states of the sequence.", "id": 69, "question": "Which neural architecture do they use as a base for their attention conflict mechanisms?", "title": "Conflict as an Inverse of Attention in Sequence Relationship"}, {"answers": ["", ""], "context": "Let us consider that we have two sequences INLINEFORM0 and INLINEFORM1 each with M and N words respectively. The objective of attention is two-fold: compute alignment scores (or weight) between every word representation pairs from INLINEFORM2 and INLINEFORM3 and fuse the matching information of INLINEFORM4 with INLINEFORM5 thus computing a new representation of INLINEFORM6 conditioned on INLINEFORM7 .", "id": 70, "question": "On which tasks do they test their conflict method?", "title": "Conflict as an Inverse of Attention in Sequence Relationship"}, {"answers": ["", ""], "context": "Following developing news stories is imperative to making real-time decisions on important political and public safety matters. Given the abundance of media providers and languages, this endeavor is an extremely difficult task. As such, there is a strong demand for automatic clustering of news streams, so that they can be organized into stories or themes for further processing. Performing this task in an online and efficient manner is a challenging problem, not only for newswire, but also for scientific articles, online reviews, forum posts, blogs, and microblogs.", "id": 71, "question": "Do they use graphical models?", "title": "Multilingual Clustering of Streaming News"}, {"answers": ["", ""], "context": "", "id": 72, "question": "What are the sources of the datasets?", "title": "Multilingual Clustering of Streaming News"}, {"answers": ["F1, precision, recall, accuracy", "Precision, recall, F1, accuracy"], "context": "Each document INLINEFORM0 is represented by two vectors in INLINEFORM1 and INLINEFORM2 . The first vector exists in a \u201cmonolingual space\u201d (of dimensionality INLINEFORM3 ) and is based on a bag-of-words representation of the document. The second vector exists in a \u201ccrosslingual space\u201d (of dimensionality INLINEFORM4 ) which is common to all languages. More details about these representations are discussed in \u00a7 SECREF4 .", "id": 73, "question": "What metric is used for evaluation?", "title": "Multilingual Clustering of Streaming News"}, {"answers": ["BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800", "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800"], "context": "Pretrained Language Models (PTLMs) such as BERT BIBREF1 have spearheaded advances on many NLP tasks. 
Usually, PTLMs are pretrained on unlabeled general-domain and/or mixed-domain text, such as Wikipedia, digital books or the Common Crawl corpus.", "id": 74, "question": "Which eight NER tasks did they evaluate on?", "title": "Inexpensive Domain Adaptation of Pretrained Language Models: A Case Study on Biomedical Named Entity Recognition"}, {"answers": ["", ""], "context": "For our purpose, a PTLM consists of three parts: A tokenizer $\\mathcal {T}_\\mathrm {LM} : \\mathbb {L}^+ \\rightarrow \\mathbb {L}_\\mathrm {LM}^+$, a wordpiece embedding function $\\mathcal {E}_\\mathrm {LM}: \\mathbb {L}_\\mathrm {LM} \\rightarrow \\mathbb {R}^{d_\\mathrm {LM}}$ and an encoder function $\\mathcal {F}_\\mathrm {LM}$. $\\mathbb {L}_\\mathrm {LM}$ is a limited vocabulary of wordpieces. All words that are not in $\\mathbb {L}_\\mathrm {LM}$ are tokenized into sequences of shorter wordpieces, e.g., tachycardia becomes ta ##chy ##card ##ia. Given a sentence $S = [w_1, \\ldots , w_T]$, tokenized as $\\mathcal {T}_\\mathrm {LM}(S) = [\\mathcal {T}_\\mathrm {LM}(w_1); \\ldots ; \\mathcal {T}_\\mathrm {LM}(w_T)]$, $\\mathcal {E}_\\mathrm {LM}$ embeds every wordpiece in $\\mathcal {T}_\\mathrm {LM}(S)$ into a real-valued, trainable wordpiece vector. The wordpiece vectors of the entire sequence are stacked and fed into $\\mathcal {F}_\\mathrm {LM}$. Note that we consider position and segment embeddings to be a part of $\\mathcal {F}_\\mathrm {LM}$ rather than $\\mathcal {E}_\\mathrm {LM}$.", "id": 75, "question": "What in-domain text did they use?", "title": "Inexpensive Domain Adaptation of Pretrained Language Models: A Case Study on Biomedical Named Entity Recognition"}, {"answers": ["", ""], "context": "Neural Machine Translation (NMT) has shown its effectiveness in translation tasks when NMT systems perform best in recent machine translation campaigns BIBREF0 , BIBREF1 . Compared to phrase-based Statistical Machine Translation (SMT) which is basically an ensemble of different features trained and tuned separately, NMT directly modeling the translation relationship between source and target sentences. Unlike SMT, NMT does not require much linguistic information and large monolingual data to achieve good performances.", "id": 76, "question": "Does their framework automatically optimize for hyperparameters?", "title": "Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder"}, {"answers": ["", ""], "context": "An NMT system consists of an encoder which automatically learns the characteristics of a source sentence into fix-length context vectors and a decoder that recursively combines the produced context vectors with the previous target word to generate the most probable word from a target vocabulary.", "id": 77, "question": "Does their framework always generate purely attention-based models?", "title": "Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder"}, {"answers": ["", ""], "context": "While the majority of previous research has focused on improving the performance of NMT on individual language pairs with individual NMT systems, recent work has started investigating potential ways to conduct the translation involved in multiple languages using a single NMT system. The possible reason explaining these efforts lies on the unique architecture of NMT. Unlike SMT, NMT consists of separated neural networks for the source and target sides, or the encoder and decoder, respectively. 
This allows these components to map a sentence in any language to a representation in an embedding space which is believed to share common semantics among the source languages involved. From that shared space, the decoder, with some implicit or explicit relevant constraints, could transform the representation into a concrete sentence in any desired language. In this section, we review some related work on this matter. We then describe a unified approach toward a universal attention-based NMT scheme. Our approach does not require any architecture modification, and it can be trained to learn a minimal number of parameters compared to other work.", "id": 78, "question": "Do they test their framework performance on commonly used language pairs, such as English-to-German?", "title": "Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder"}, {"answers": ["", ""], "context": "By extending the solution of sequence-to-sequence modeling using encoder-decoder architectures to multi-task learning, Luong2016 managed to achieve better performance on some INLINEFORM0 tasks such as translation, parsing and image captioning compared to individual tasks. Specifically in translation, the work utilizes multiple encoders to translate from multiple languages, and multiple decoders to translate to multiple languages. In this view of multilingual translation, each language on the source or target side is modeled by one encoder or decoder, depending on the side of the translation. Due to the natural diversity between the two tasks in that multi-task learning scenario, e.g. translation and parsing, it could not feature the attention mechanism, although attention has proven its effectiveness in NMT. There exist two directions proposed for multilingual translation scenarios that leverage the attention mechanism. The first is the work of BIBREF8 , which introduces a one-to-many multilingual NMT system that translates from one source language into multiple target languages. Having one source language, the attention mechanism is then handed over to the corresponding decoder. The objective function is changed to adapt to multilingual settings. At test time, the parameters specific to a desired language pair are used to perform the translation.", "id": 79, "question": "Which languages do they test on for the under-resourced scenario?", "title": "Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder"}, {"answers": ["", ""], "context": "Automatically answering questions, especially in the open-domain setting (i.e., where minimal or no contextual knowledge is explicitly provided), requires bringing to bear a considerable amount of background knowledge and reasoning abilities. For example, knowing the answers to the two questions in Figure FIGREF1 requires identifying a specific ISA relation (i.e., that cooking is a type of learned behavior) as well as recalling the definition of a concept (i.e., that global warming is defined as a worldwide increase in temperature). 
In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate).", "id": 80, "question": "Are the automatically constructed datasets subject to quality control?", "title": "What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge"}, {"answers": ["MULTIPLE CHOICE QUESTION ANSWERING", ""], "context": "We follow recent work on constructing challenge datasets for probing neural models, which has primarily focused on the task of natural language inference (NLI) BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18. Most of this work looks at constructing data through adversarial generation methods, which have also been found useful for creating stronger models BIBREF19. There has also been work on using synthetic data of the type we consider in this paper BIBREF20, BIBREF21, BIBREF22. We closely follow the methodology of BIBREF22, who use hand-constructed linguistic fragments to probe NLI models and study model re-training using a variant of the inoculation by fine-tuning strategy of BIBREF23. In contrast, we focus on probing open-domain MCQA models (see BIBREF24 for a related study in the reading comprehension setting) as well as constructing data from much larger sources of structured knowledge.", "id": 81, "question": "Do they focus on Reading Comprehension or multiple choice question answering?", "title": "What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge"}, {"answers": ["", "one additional hop"], "context": "Our probing methodology starts by constructing challenge datasets (Figure FIGREF1, yellow box) from a target set of knowledge resources. Each of our probing datasets consists of multiple-choice questions that include a question $\\textbf {q}$ and a set of answer choices or candidates $\\lbrace a_{1},...a_{N}\\rbrace $. This section describes in detail the 5 different datasets we build, which are drawn from two sources of expert knowledge, namely WordNet BIBREF35 and the GNU Collaborative International Dictionary of English (GCIDE). We describe each resource in turn, and explain how the resulting dataset probes, which we call WordNetQA and DictionaryQA, are constructed.", "id": 82, "question": "After how many hops does accuracy decrease?", "title": "What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge"}, {"answers": ["", ""], "context": "WordNet is an English lexical database consisting of around 117k concepts, which are organized into groups of synsets that each contain a gloss (i.e., a definition of the target concept), a set of representative English words (called lemmas), and, in around 33k synsets, example sentences. In addition, many synsets have ISA links to other synsets that express complex taxonomic relations. Figure FIGREF6 shows an example and Table TABREF4 summarizes how we formulate WordNet as a set of triples $\\mathcal {T}$ of various types. These triples together represent a directed, edge-labeled graph $G$. 
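As a toy sketch of how such triples can be held as a directed, edge-labeled graph (the synset names, relation labels and gloss below are invented for illustration and are not taken from the actual WordNet data):

```python
from collections import defaultdict

# Hypothetical triples (head synset, relation, tail); invented for illustration.
triples = [
    ("trout.n.01", "isa_up",     "fish.n.01"),
    ("fish.n.01",  "isa_down",   "trout.n.01"),
    ("fish.n.01",  "definition", "an aquatic vertebrate with gills"),
    ("bass.n.07",  "isa_up",     "fish.n.01"),
]

# Adjacency-list view of the directed, edge-labeled graph G.
G = defaultdict(list)
for head, relation, tail in triples:
    G[head].append((relation, tail))

# Outgoing edges of one node, i.e. everything asserted about fish.n.01.
print(G["fish.n.01"])
```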
Our main motivation for using WordNet, as opposed to a resource such as ConceptNet BIBREF36, is the availability of glosses ($\\mathcal {D}$) and example sentences ($\\mathcal {S}$), which allows us to construct natural language questions that contextualize the types of concepts we want to probe.", "id": 83, "question": "How do they control for annotation artificats?", "title": "What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge"}, {"answers": ["", ""], "context": "We build 4 individual datasets based on semantic relations native to WordNet (see BIBREF37): hypernymy (i.e., generalization or ISA reasoning up a taxonomy, ISA$^\\uparrow $), hyponymy (ISA$^{\\downarrow }$), synonymy, and definitions. To generate a set of questions in each case, we employ a number of rule templates $\\mathcal {Q}$ that operate over tuples. A subset of such templates is shown in Table TABREF8. The templates were designed to mimic naturalistic questions we observed in our science benchmarks.", "id": 84, "question": "Is WordNet useful for taxonomic reasoning for this task?", "title": "What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge"}, {"answers": ["", ""], "context": "This paper describes our approach and results for Task 2 of the CoNLL\u2013SIGMORPHON 2018 shared task on universal morphological reinflection BIBREF0 . The task is to generate an inflected word form given its lemma and the context in which it occurs.", "id": 85, "question": "How do they perform multilingual training?", "title": "Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding"}, {"answers": ["German, English, Spanish, Finnish, French, Russian, Swedish.", ""], "context": "Our system is a modification of the provided CoNLL\u2013SIGMORPHON 2018 baseline system, so we begin this section with a reiteration of the baseline system architecture, followed by a description of the three augmentations we introduce.", "id": 86, "question": "What languages are evaluated?", "title": "Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding"}, {"answers": ["", ""], "context": "The CoNLL\u2013SIGMORPHON 2018 baseline is described as follows:", "id": 87, "question": "Does the model have attention?", "title": "Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding"}, {"answers": ["", ""], "context": "Here we compare and contrast our system to the baseline system. A diagram of our system is shown in Figure FIGREF4 .", "id": 88, "question": "What architecture does the decoder have?", "title": "Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding"}, {"answers": ["", ""], "context": "Test results are listed in Table TABREF17 . 
Our system outperforms the baseline for all settings and languages in Track 1 and for almost all in Track 2\u2014only in the high resource setting is our system not definitively superior to the baseline.", "id": 89, "question": "What architecture does the encoder have?", "title": "Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding"}, {"answers": ["The task of predicting MSD tags: V, PST, V.PCTP, PASS.", ""], "context": "We analyse the incremental effect of the different features in our system, focusing on the low-resource setting in Track 1 and using development data.", "id": 90, "question": "What is MSD prediction?", "title": "Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding"}, {"answers": ["", ""], "context": "Here we study the errors produced by our system on the English test set to better understand the remaining shortcomings of the approach. A small portion of the wrong predictions points to an incorrect interpretation of the morpho-syntactic conditioning of the context, e.g. the system predicted plan instead of plans in the context Our _ include raising private capital. The majority of wrong predictions, however, are nonsensical, like bomb for job, fify for fixing, and gnderrate for understand. This observation suggests that generally the system did not learn to copy the characters of the lemma into the inflected form, which is all it needs to do in a large number of cases. This issue could be alleviated with simple data augmentation techniques that encourage autoencoding BIBREF2 .", "id": 91, "question": "What type of inflections are considered?", "title": "Copenhagen at CoNLL--SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding"}, {"answers": ["", ""], "context": "Teaching a machine to read and comprehend a given passage/paragraph and answer its corresponding questions is a challenging task. It is also one of the long-term goals of natural language understanding, and has important applications in, e.g., building intelligent agents for conversation and customer service support. In a real-world setting, it is necessary to judge whether the given questions are answerable given the available knowledge, and then to generate correct answers for the questions whose answers can be inferred from the passage, or an empty answer (marking the question as unanswerable) otherwise.", "id": 92, "question": "Do they use attention?", "title": "Stochastic Answer Networks for SQuAD 2.0"}, {"answers": ["SAN Baseline, BNA, DocQA, R.M-Reader, R.M-Reader+Verifier and DocQA+ELMo", "BNA, DocQA, R.M-Reader, R.M-Reader + Verifier, DocQA + ELMo, R.M-Reader+Verifier+ELMo"], "context": "Machine Reading Comprehension is a task which takes a question INLINEFORM0 and a passage/paragraph INLINEFORM1 as inputs, and aims to find an answer span INLINEFORM2 in INLINEFORM3 . We assume that if the question is answerable, the answer INLINEFORM4 exists in INLINEFORM5 as a contiguous text string; otherwise, INLINEFORM6 is an empty string indicating an unanswerable question. Note that to handle the unanswerable questions, we manually append a dummy text string NULL at the end of each corresponding passage/paragraph. Formally, the answer is formulated as INLINEFORM7 . 
In case of unanswerable questions, INLINEFORM8 points to the last token of the passage.", "id": 93, "question": "What other models do they compare to?", "title": "Stochastic Answer Networks for SQuAD 2.0"}, {"answers": ["", "GRU"], "context": "We evaluate our system on SQuAD 2.0 dataset BIBREF14 , a new MRC dataset which is a combination of Stanford Question Answering Dataset (SQuAD) 1.0 BIBREF15 and additional unanswerable question-answer pairs. The answerable pairs are around 100K; while the unanswerable questions are around 53K. This dataset contains about 23K passages and they come from approximately 500 Wikipedia articles. All the questions and answers are obtained by crowd-sourcing. Two evaluation metrics are used: Exact Match (EM) and Macro-averaged F1 score (F1) BIBREF14 .", "id": 94, "question": "What is the architecture of the span detector?", "title": "Stochastic Answer Networks for SQuAD 2.0"}, {"answers": ["Accuracy", ""], "context": "This research addresses the problem of representing the semantics of text documents in multi-lingual comparable corpora. We present a new approach to this problem, based on neural embeddings, and test it on the task of clustering texts into meaningful classes depending on their topics. The setting is unsupervised, meaning that one either does not have enough annotated data to train a supervised classifier or does not want to be limited with a pre-defined set of classes. There is a lot of sufficiently good approaches to this problem in the case of mono-lingual text collections, but the presence of multiple languages introduces complications.", "id": 95, "question": "What evaluation metric do they use?", "title": "Clustering Comparable Corpora of Russian and Ukrainian Academic Texts: Word Embeddings and Semantic Fingerprints"}, {"answers": ["Reward of 11.8 for the A2C-chained model, 41.8 for the KG-A2C-chained model, 40 for A2C-Explore and 44 for KG-A2C-Explore.", ""], "context": "Many reinforcement learning algorithms are designed for relatively small discrete or continuous action spaces and so have trouble scaling. Text-adventure games\u2014or interaction fictions\u2014are simulations in which both an agents' state and action spaces are in textual natural language. An example of a one turn agent interaction in the popular text-game Zork1 can be seen in Fig. FIGREF1. Text-adventure games provide us with multiple challenges in the form of partial observability, commonsense reasoning, and a combinatorially-sized state-action space. Text-adventure games are structured as long puzzles or quests, interspersed with bottlenecks. The quests can usually be completed through multiple branching paths. However, games can also feature one or more bottlenecks. Bottlenecks are areas that an agent must pass through in order to progress to the next section of the game regardless of what path the agent has taken to complete that section of the quest BIBREF0. In this work, we focus on more effectively exploring this space and surpassing these bottlenecks\u2014building on prior work that focuses on tackling the other problems.", "id": 96, "question": "What are the results from these proposed strategies?", "title": "How To Avoid Being Eaten By a Grue: Exploration Strategies for Text-Adventure Agents"}, {"answers": ["", ""], "context": "In this section, we describe methods to explore combinatorially sized action spaces such as text-games\u2014focusing especially on methods that can deal with their inherent bottleneck structure. 
We first describe our method that explicitly attempts to detect bottlenecks and then describe how an exploration algorithm such as Go Explore BIBREF9 can leverage knowledge graphs.", "id": 97, "question": "What are the baselines?", "title": "How To Avoid Being Eaten By a Grue: Exploration Strategies for Text-Adventure Agents"}, {"answers": ["", ""], "context": "We compare our two exploration strategies to the following baselines and ablations:", "id": 98, "question": "What are the two new strategies?", "title": "How To Avoid Being Eaten By a Grue: Exploration Strategies for Text-Adventure Agents"}, {"answers": ["", ""], "context": "Chatbots such as dialog and question-answering systems have a long history in AI and natural language processing. Early such systems were mostly built using markup languages such as AIML, handcrafted conversation generation rules, and/or information retrieval techniques BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Recent neural conversation models BIBREF4 , BIBREF5 , BIBREF6 are even able to perform open-ended conversations. However, since they do not use explicit knowledge bases and do not perform inference, they often suffer from generic and dull responses BIBREF5 , BIBREF7 . More recently, BIBREF8 and BIBREF9 proposed to use knowledge bases (KBs) to help generate responses for knowledge-grounded conversation. However, one major weakness of all existing chat systems is that they do not explicitly or implicitly learn new knowledge in the conversation process. This seriously limits the scope of their applications. In contrast, we humans constantly learn new knowledge in our conversations. Even if some existing systems can use very large knowledge bases either harvested from a large data source such as the Web or built manually, these KBs still miss a large number of facts (knowledge) BIBREF10 . It is thus important for a chatbot to continuously learn new knowledge in the conversation process to expand its KB and to improve its conversation ability.", "id": 99, "question": "Do they report results only on English data?", "title": "Towards a Continuous Knowledge Learning Engine for Chatbots"}, {"answers": ["In case of Freebase knowledge base, LiLi model had better F1 score than the single model by 0.20 , 0.01, 0.159 for kwn, unk, and all test Rel type. The values for WordNet are 0.25, 0.1, 0.2. \n", ""], "context": "To the best of our knowledge, we are not aware of any knowledge learning system that can learn new knowledge in the conversation process. This section thus discusses other related work.", "id": 100, "question": "How much better than the baseline is LiLi?", "title": "Towards a Continuous Knowledge Learning Engine for Chatbots"}, {"answers": ["", ""], "context": "We design LiLi as a combination of two interconnected models: (1) a RL model that learns to formulate a query-specific inference strategy for performing the OKBC task, and (2) a lifelong prediction model to predict whether a triple should be in the KB, which is invoked by an action while executing the inference strategy and is learned for each relation as in C-PR. The framework improves its performance over time through user interaction and knowledge retention. 
Compared to the existing KB inference methods, LiLi overcomes the following three challenges for OKBC:", "id": 101, "question": "What baseline is used in the experiments?", "title": "Towards a Continuous Knowledge Learning Engine for Chatbots"}, {"answers": ["", ""], "context": "As lifelong learning needs to retain knowledge learned from past tasks and use it to help future learning BIBREF31 , LiLi uses a Knowledge Store (KS) for knowledge retention. KS has four components: (i) Knowledge Graph ( INLINEFORM0 ): INLINEFORM1 (the KB) is initialized with base KB triples (see \u00a74) and gets updated over time with the acquired knowledge. (ii) Relation-Entity Matrix ( INLINEFORM2 ): INLINEFORM3 is a sparse matrix, with rows as relations and columns as entity-pairs and is used by the prediction model. Given a triple ( INLINEFORM4 , INLINEFORM5 , INLINEFORM6 ) INLINEFORM7 , we set INLINEFORM8 [ INLINEFORM9 , ( INLINEFORM10 , INLINEFORM11 )] = 1 indicating INLINEFORM12 occurs for pair ( INLINEFORM13 , INLINEFORM14 ). (iii) Task Experience Store ( INLINEFORM15 ): INLINEFORM16 stores the predictive performance of LiLi on past learned tasks in terms of Matthews correlation coefficient (MCC) that measures the quality of binary classification. So, for two tasks INLINEFORM17 and INLINEFORM18 (each relation is a task), if INLINEFORM19 [ INLINEFORM20 ] INLINEFORM21 INLINEFORM22 [ INLINEFORM23 ] [where INLINEFORM24 [ INLINEFORM25 ]=MCC( INLINEFORM26 )], we say C-PR has learned INLINEFORM27 well compared to INLINEFORM28 . (iv) Incomplete Feature DB ( INLINEFORM29 ): INLINEFORM30 stores the frequency of an incomplete path INLINEFORM31 in the form of a tuple ( INLINEFORM32 , INLINEFORM33 , INLINEFORM34 ) and is used in formulating MLQs. INLINEFORM35 [( INLINEFORM36 , INLINEFORM37 , INLINEFORM38 )] = INLINEFORM39 implies LiLi has extracted incomplete path INLINEFORM40 INLINEFORM41 times involving entity-pair INLINEFORM42 [( INLINEFORM43 , INLINEFORM44 )] for query relation INLINEFORM45 .", "id": 102, "question": "In what way does LiLi imitate how humans acquire knowledge and perform inference during an interactive conversation?", "title": "Towards a Continuous Knowledge Learning Engine for Chatbots"}, {"answers": ["", ""], "context": "Given an OKBC query ( INLINEFORM0 , INLINEFORM1 , INLINEFORM2 ), we represent it as a data instance INLINEFORM3 . INLINEFORM4 consists of INLINEFORM5 (the query triple), INLINEFORM6 (interaction limit set for INLINEFORM7 ), INLINEFORM8 (experience list storing the transition history of MDP for INLINEFORM9 in RL) and INLINEFORM10 (mode of INLINEFORM11 ) denoting if INLINEFORM12 is ` INLINEFORM13 ' (training), ` INLINEFORM14 ' (validation), ` INLINEFORM15 ' (evaluation) or ` INLINEFORM16 ' (clue) instance and INLINEFORM17 (feature set). We denote INLINEFORM18 ( INLINEFORM19 ) as the set of all complete (incomplete) path features in INLINEFORM20 . Given a data instance INLINEFORM21 , LiLi starts its initialization as follows: it sets the state as INLINEFORM22 (based on INLINEFORM23 , explained later), pushes the query tuple ( INLINEFORM24 , INLINEFORM25 ) into INLINEFORM26 and feeds INLINEFORM27 [top] to the RL-model for strategy formulation from INLINEFORM28 .", "id": 103, "question": "What metrics are used to establish that this makes chatbots more knowledgeable and better at learning and conversation? ", "title": "Towards a Continuous Knowledge Learning Engine for Chatbots"}, {"answers": ["Answer with content missing: (list)\nLiLi should have the following capabilities:\n1. 
to formulate an inference strategy for a given query that embeds processing and interactive actions.\n2. to learn interaction behaviors (deciding what to ask and when to ask the user).\n3. to leverage the acquired knowledge in the current and future inference process.\n4. to perform 1, 2 and 3 in a lifelong manner for continuous knowledge learning.", ""], "context": "We now evaluate LiLi in terms of its predictive performance and strategy formulation abilities.", "id": 104, "question": "What are the components of the general knowledge learning engine?", "title": "Towards a Continuous Knowledge Learning Engine for Chatbots"}, {"answers": ["719313", "Book, Electronics, Beauty and Music each have 6000, IMDB 84919, Yelp 231163, Cell Phone 194792 and Baby 160792 labeled data."], "context": "In practice, it is often difficult and costly to annotate sufficient training data for diverse application domains on-the-fly. We may have sufficient labeled data in an existing domain (called the source domain), but very few or no labeled data in a new domain (called the target domain). This issue has motivated research on cross-domain sentiment classification, where knowledge in the source domain is transferred to the target domain in order to alleviate the required labeling effort.", "id": 105, "question": "How many labels do the datasets have?", "title": "Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification"}, {"answers": ["", ""], "context": "Domain Adaptation: The majority of feature adaptation methods for sentiment analysis rely on a key intuition that even though certain opinion words are completely distinct for each domain, they can be aligned if they have high correlation with some domain-invariant opinion words (pivot words) such as \u201cexcellent\u201d or \u201cterrible\u201d. Blitzer et al. ( BIBREF0 ) proposed a method based on structural correspondence learning (SCL), which uses pivot feature prediction to induce a projected feature space that works well for both the source and the target domains. The pivot words are selected in a way to cover common domain-invariant opinion words. Subsequent research aims to better align the domain-specific words BIBREF1 , BIBREF5 , BIBREF3 such that the domain discrepancy could be reduced. More recently, Yu and Jiang ( BIBREF4 ) borrow the idea of pivot feature prediction from SCL and extend it to a neural network-based solution with auxiliary tasks. In their experiment, substantial improvement over SCL has been observed due to the use of real-valued word embeddings. Unsupervised representation learning with deep neural networks (DNN) such as denoising autoencoders has also been explored for feature adaptation BIBREF6 , BIBREF7 , BIBREF8 . It has been shown that DNNs could learn transferable representations that disentangle the underlying factors of variation behind data samples.", "id": 106, "question": "What is the architecture of the model?", "title": "Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification"}, {"answers": ["", ""], "context": "We conduct most of our experiments under an unsupervised domain adaptation setting, where we have no labeled data from the target domain. Consider two sets INLINEFORM0 and INLINEFORM1 . INLINEFORM2 is from the source domain with INLINEFORM3 labeled examples, where INLINEFORM4 is a one-hot vector representation of sentiment label and INLINEFORM5 denotes the number of classes. INLINEFORM6 is from the target domain with INLINEFORM7 unlabeled examples. 
INLINEFORM8 denotes the total number of training documents including both labeled and unlabeled. We aim to learn a sentiment classifier from INLINEFORM13 and INLINEFORM14 such that the classifier would work well on the target domain. We also present some results under a setting where we assume that a small number of labeled target examples are available (see Figure FIGREF27 ).", "id": 107, "question": "What are the baseline methods?", "title": "Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification"}, {"answers": ["Book, electronics, beauty, music, IMDB, Yelp, cell phone, baby, DVDs, kitchen", ""], "context": "Unlike prior works BIBREF0 , BIBREF4 , our method does not attempt to align domain-specific words through pivot words. In our preliminary experiments, we found that word embeddings pre-trained on a large corpus are able to adequately capture this information. As we will later show in our experiments, even without adaptation, a naive neural network classifier with pre-trained word embeddings can already achieve reasonably good results.", "id": 108, "question": "What are the source and target domains?", "title": "Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification"}, {"answers": ["", ""], "context": "The focus of the word sense disambiguation (WSD) task is polysemy, i.e. words having several substantially different meanings. Two common examples are bank (riverside or financial institution) and bass (fish or musical instrument), but usually the meanings of a word are closely related, e.g. class may refer to: (a) a group of students, (b) the period when they meet to study or (c) a room where such meetings occur. Readers deal with this problem by using a word's context and in WSD we aim at doing it automatically.", "id": 109, "question": "Did they use a crowdsourcing platform for annotations?", "title": "How big is big enough? Unsupervised word sense disambiguation using a very large corpus"}, {"answers": ["The N\u00e4ive-Bayes classifier is corrected so it is not biased to most frequent classes", ""], "context": "The problem of WSD has received a lot of attention since the beginning of natural language processing research. WSD is typically expected to improve the results of real-world applications: originally machine translation and recently information retrieval and extraction, especially question answering BIBREF0 . Like many other areas, WSD has greatly benefited from publicly available test sets and competitions. Two notable corpora are: 1) SemCor BIBREF1 , built by labelling a subset of Brown corpus with Princeton WordNet synsets and 2) the public evaluations of Senseval workshops BIBREF2 , BIBREF3 .", "id": 110, "question": "How do they deal with unknown distribution senses?", "title": "How big is big enough? Unsupervised word sense disambiguation using a very large corpus"}, {"answers": ["", ""], "context": "Since its rise in 2013, the Islamic State of Iraq and Syria (ISIS) has utilized the Internet to spread its ideology, radicalize individuals, and recruit them to their cause. In comparison to other Islamic extremist groups, ISIS' use of technology was more sophisticated, voluminous, and targeted. 
For example, during ISIS' advance toward Mosul, ISIS-related accounts tweeted some 40,000 tweets in one day BIBREF0. However, this heavy engagement forced social media platforms to institute policies to prevent unchecked dissemination of terrorist propaganda to their users, forcing ISIS to adapt to other means to reach their target audience.", "id": 111, "question": "Do they report results only on English data?", "title": "Women in ISIS Propaganda: A Natural Language Processing Analysis of Topics and Emotions in a Comparison with Mainstream Religious Group"}, {"answers": ["", "By comparing scores for each word calculated using Depechemood dictionary and normalize emotional score for each article, they found Catholic and ISIS materials show similar scores"], "context": "Soon after ISIS emerged and declared its caliphate, counterterrorism practitioners and political science researchers started to turn their attention towards understanding how the group operated. Researchers investigated the origins of ISIS, its leadership, funding, and how they rose to become a globally dominant non-state actor BIBREF1. This interest in the organization's distinctiveness immediately led to inquiries into ISIS' rhetoric, particularly their use of social media and online resources in recruitment and ideological dissemination. For example, Al-Tamimi examines how ISIS differentiated itself from other jihadist movements by using social media with unprecedented efficiency to improve its image with locals BIBREF2. One of ISIS' most impressive applications of its online prowess was in the recruitment process. The organization has used a variety of materials, especially videos, to recruit both foreign and local fighters. Research shows that ISIS propaganda is designed to portray the organization as a provider of justice, governance, and development in a fashion that resonates with young westerners BIBREF3. This propaganda machine has become a significant area of research, with scholars such as Winter identifying key themes in it such as brutality, mercy, victimhood, war, belonging and utopianism BIBREF4. However, there has been insufficient attention focused on how these approaches have particularly targeted and impacted women. This is significant given that scholars have identified the distinctiveness of this population when it comes to nearly all facets of terrorism.", "id": 112, "question": "What conclusions do the authors draw from their finding that the emotional appeal of ISIS and Catholic materials are similar?", "title": "Women in ISIS Propaganda: A Natural Language Processing Analysis of Topics and Emotions in a Comparison with Mainstream Religious Group"}, {"answers": ["By multiplying crowd-annotated document-emotion matrix with emotion-word matrix. ", ""], "context": "Finding useful collections of texts where ISIS targets women is a challenging task. Most of the available material does not reflect ISIS' official point of view or does not talk specifically about women. However, ISIS' online magazines are valuable resources for understanding how the organization attempts to appeal to western audiences, particularly women. Looking through both Dabiq and Rumiyah, many issues of the magazines contain articles specifically addressing women, usually with \u201c to our sisters \u201d incorporated into the title. 
Seven out of fifteen Dabiq issues and all thirteen issues of Rumiyah contain articles targeting women, clearly suggesting an increase in attention to women over time.", "id": 113, "question": "How id Depechemood trained?", "title": "Women in ISIS Propaganda: A Natural Language Processing Analysis of Topics and Emotions in a Comparison with Mainstream Religious Group"}, {"answers": ["By using topic modeling and unsupervised emotion detection on ISIS materials and articles from Catholic women forum", ""], "context": "Most text and document datasets contain many unnecessary words such as stopwords, misspelling, slang, etc. In many algorithms, especially statistical and probabilistic learning algorithms, noise and unnecessary features can have adverse effects on system performance. In this section, we briefly explain some techniques and methods for text cleaning and pre-processing text datasets BIBREF13.", "id": 114, "question": "How are similarities and differences between the texts from violent and non-violent religious groups analyzed?", "title": "Women in ISIS Propaganda: A Natural Language Processing Analysis of Topics and Emotions in a Comparison with Mainstream Religious Group"}, {"answers": ["", "Using NMF based topic modeling and their coherence prominent topics are identified"], "context": "Tokenization is a pre-processing method which breaks a stream of text into words, phrases, symbols, or other meaningful elements called tokens BIBREF14. The main goal of this step is to investigate the words in a sentence BIBREF14. Both text classification and text mining requires a parser which processes the tokenization of the documents; for example:", "id": 115, "question": "How are prominent topics idenified in Dabiq and Rumiyah?", "title": "Women in ISIS Propaganda: A Natural Language Processing Analysis of Topics and Emotions in a Comparison with Mainstream Religious Group"}, {"answers": ["", ""], "context": "Automatically generating text to describe the content of images, also known as image captioning, is a multimodal task of considerable interest in both the computer vision and the NLP communities. Image captioning can be framed as a translation task from an image to a descriptive natural language statement. Many existing captioning models BIBREF0, BIBREF1, BIBREF2, BIBREF3 follow the typical encoder-decoder framework where a convolutional network is used to condense images into visual feature representations, combined with a recurrent network for language generation. While these models demonstrate promising results, quantifying image captioning performance remains a challenging problem, in a similar way to other generative tasks BIBREF4, BIBREF5.", "id": 116, "question": "Are the images from a specific domain?", "title": "Going Beneath the Surface: Evaluating Image Captioning for Grammaticality, Truthfulness and Diversity"}, {"answers": ["Existential (OneShape, MultiShapes), Spacial (TwoShapes, Multishapes), Quantification (Count, Ratio) datasets are generated from ShapeWorldICE", "ShapeWorldICE datasets: OneShape, MultiShapes, TwoShapes, MultiShapes, Count, and Ratio"], "context": "As a natural language generation task, image captioning frequently uses evaluation metrics such as BLEU BIBREF6, METEOR BIBREF7, ROUGE BIBREF8 and CIDEr BIBREF9. These metrics use n-gram similarity between the candidate caption and reference captions to approximate the correlation between a candidate caption and the associated ground truth. 
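For instance, the clipped unigram-overlap core of such metrics fits in a few lines (a simplified, single-reference sketch; real BLEU additionally uses higher-order n-grams, multiple references and a brevity penalty):

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision of a candidate caption against one reference."""
    grams = lambda toks: Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = grams(candidate), grams(reference)
    overlap = sum(min(count, ref[g]) for g, count in cand.items())
    return overlap / max(1, sum(cand.values()))

print(ngram_precision("a dog runs on grass".split(),
                      "the dog runs on the grass".split()))  # 0.8
```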
SPICE BIBREF10 is a more recent metric specifically designed for image captioning. For SPICE, both the candidate caption and reference captions are parsed to scene graphs, and the agreement between tuples extracted from these scene graphs is examined. SPICE more closely relates to our truthfulness evaluation than the other metrics, but it still uses overlap comparison to reference captions as a proxy to ground truth. In contrast, our truthfulness metric directly evaluates a candidate caption against a model of the actual visual content.", "id": 117, "question": "Which datasets are used?", "title": "Going Beneath the Surface: Evaluating Image Captioning for Grammaticality, Truthfulness and Diversity"}, {"answers": ["", ""], "context": "Recently, many synthetic datasets have been proposed as diagnostic tools for deep learning models, such as CLEVR BIBREF21 for visual question answering (VQA), the bAbI tasks BIBREF22 for text understanding and reasoning, and ShapeWorld BIBREF11 for visually grounded language understanding. The primary motivation is to reduce complexity which is considered irrelevant to the evaluation focus, to enable better control over the data, and to provide more detailed insights into strengths and limitations of existing models.", "id": 118, "question": "Which existing models are evaluated?", "title": "Going Beneath the Surface: Evaluating Image Captioning for Grammaticality, Truthfulness and Diversity"}, {"answers": ["", ""], "context": "In the following we introduce GTD in more detail, consider it as an evaluation protocol covering necessary aspects of the multifaceted captioning task, rather than a specific metric.", "id": 119, "question": "How is diversity measured?", "title": "Going Beneath the Surface: Evaluating Image Captioning for Grammaticality, Truthfulness and Diversity"}, {"answers": ["", ""], "context": "Named entity recognition (NER) is a challenging problem in Natural Language Processing, and often serves as an important step for many popular applications, such as information extraction and question answering. NER requires phrases referring to entities in text be identified and assigned to particular entity types, thus can be naturally modeled as a sequence labeling task. In recent years, a lot of progress has been made on NER by applying sequential models such as conditional random field (CRF) or neural network models such as long short-term memory (LSTM) (e.g., BIBREF0, BIBREF1, BIBREF2, BIBREF3). Yet this task still remains a challenging one, especially in social media domain such as tweets, partially because of informality and noise of such text and low frequencies of distinctive named entities BIBREF4.", "id": 120, "question": "What state-of-the-art deep neural network is used?", "title": "Integrating Boundary Assembling into a DNN Framework for Named Entity Recognition in Chinese Social Media Text"}, {"answers": ["", ""], "context": "Our model consists of three modules. A diagram of the model is shown in Figure FIGREF1. Characters in the input text for Chinese word segmentation are converted to vectors that are used to train the LSTM module. Output of the LSTM module are transformed by a biased-linear transformation to get likelihood scores of segmentation labeling, then passed through the boundary assembling module. The updated boundary information is used as feature input into the CRF for Chinese word segmentation (CWS), together with character-vector sequences. 
In each training epoch, CRF for CWS provides feedback into the LSTM hidden layer and the biased-linear transformation to update the hyper-parameters. Another corpus for NER is then used to train the LSTM again, the hidden vector of which (now contains segmentation information updated by the boundary assembling method) is taken as feature input to CRF for NER. Lexical features extracted from the input text for NER, as well as the word embedding sequence, are also taken by the CRF module as input to generate NER labels. This section provides descriptions for each module.", "id": 121, "question": "What boundary assembling method is used?", "title": "Integrating Boundary Assembling into a DNN Framework for Named Entity Recognition in Chinese Social Media Text"}, {"answers": ["Overall F1 score:\n- He and Sun (2017) 58.23\n- Peng and Dredze (2017) 58.99\n- Xu et al. (2018) 59.11", "For Named entity the maximum precision was 66.67%, and the average 62.58%, same values for Recall was 55.97% and 50.33%, and for F1 57.14% and 55.64%. Where for Nominal Mention had maximum recall of 74.48% and average of 73.67%, Recall had values of 54.55% and 53.7%, and F1 had values of 62.97% and 62.12%. Finally the Overall F1 score had maximum value of 59.11% and average of 58.77%"], "context": "We choose an LSTM module for the CWS task. Raw input Chinese text is converted from characters to vectors with character-positional input embeddings pre-trained by BIBREF5 over 112,971,734 Weibo messages using word2vec BIBREF18. Detailed parameter settings can be found in BIBREF13. The embeddings contain 52,057 unique characters in a 100-dimension space.", "id": 122, "question": "What are previous state of the art results?", "title": "Integrating Boundary Assembling into a DNN Framework for Named Entity Recognition in Chinese Social Media Text"}, {"answers": ["", ""], "context": "Reading Comprehension (RC) has become a central task in natural language processing, with great practical value in various industries. In recent years, many large-scale RC datasets in English BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 have nourished the development of numerous powerful and diverse RC models BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11. The state-of-the-art model BIBREF12 on SQuAD, one of the most widely used RC benchmarks, even surpasses human-level performance. Nonetheless, RC on languages other than English has been limited due to the absence of sufficient training data. Although some efforts have been made to create RC datasets for Chinese BIBREF13, BIBREF14 and Korean BIBREF15, it is not feasible to collect RC datasets for every language since annotation efforts to collect a new RC dataset are often far from trivial. Therefore, the setup of transfer learning, especially zero-shot learning, is of extraordinary importance.", "id": 123, "question": "What is the model performance on target language reading comprehension?", "title": "Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model"}, {"answers": ["En-Fr, En-Zh, En-Jp, En-Kr, Zh-En, Zh-Fr, Zh-Jp, Zh-Kr to English, Chinese or Korean", "", ""], "context": "Multi-BERT has showcased its ability to enable cross-lingual zero-shot learning on the natural language understanding tasks including XNLI BIBREF19, NER, POS, Dependency Parsing, and so on. 
We now seek to know if a pre-trained multi-BERT has ability to solve RC tasks in the zero-shot setting.", "id": 124, "question": "What source-target language pairs were used in this work? ", "title": "Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model"}, {"answers": ["", ""], "context": "We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet.", "id": 125, "question": "What model is used as a baseline? ", "title": "Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model"}, {"answers": [""], "context": "Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "id": 126, "question": "what does the model learn in zero-shot setting?", "title": "Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model"}, {"answers": ["", ""], "context": "Social media with abundant user-generated posts provide a rich platform for understanding events, opinions and preferences of groups and individuals. These insights are primarily hidden in unstructured forms of social media posts, such as in free-form text or images without tags. 
Named entity recognition (NER), the task of recognizing named entities from free-form text, is thus a critical step for building structural information, allowing for its use in personalized assistance, recommendations, advertisement, etc.", "id": 127, "question": "Do they inspect their model to see if their model learned to associate image parts with words related to entities?", "title": "Multimodal Named Entity Recognition for Short Social Media Posts"}, {"answers": ["", ""], "context": "Neural models for NER have been recently proposed, producing state-of-the-art performance on standard NER tasks. For example, some of the end-to-end NER systems BIBREF4 , BIBREF2 , BIBREF3 , BIBREF0 , BIBREF1 use a recurrent neural network usually with a CRF BIBREF5 , BIBREF6 for sequence labeling, accompanied with feature extractors for words and characters (CNN, LSTMs, etc.), and achieve the state-of-the-art performance mostly without any use of gazetteers information. Note that most of these work aggregate textual contexts via concatenation of word embeddings and character embeddings. Recently, several work have addressed the NER task specifically on noisy short text segments such as Tweets, etc. BIBREF7 , BIBREF8 . They report performance gains from leveraging external sources of information such as lexical information (POS tags, etc.) and/or from several preprocessing steps (token substitution, etc.). Our model builds upon these state-of-the-art neural models for NER tasks, and improves the model in two critical ways: (1) incorporation of visual contexts to provide auxiliary information for short media posts, and (2) addition of the modality attention module, which better incorporates word embeddings and character embeddings, especially when there are many missing tokens in the given word embedding matrix. Note that we do not explore the use of gazetteers information or other auxiliary information (POS tags, etc.) BIBREF9 as it is not the focus of our study.", "id": 128, "question": "Does their NER model learn NER from both text and images?", "title": "Multimodal Named Entity Recognition for Short Social Media Posts"}, {"answers": ["", ""], "context": "Figure FIGREF2 illustrates the proposed multimodal NER (MNER) model. First, we obtain word embeddings, character embeddings, and visual features (Section SECREF3 ). A Bi-LSTM-CRF model then takes as input a sequence of tokens, each of which comprises a word token, a character sequence, and an image, in their respective representation (Section SECREF4 ). At each decoding step, representations from each modality are combined via the modality attention module to produce an entity label for each token ( SECREF5 ). We formulate each component of the model in the following subsections.", "id": 129, "question": "Which types of named entities do they recognize?", "title": "Multimodal Named Entity Recognition for Short Social Media Posts"}, {"answers": ["", ""], "context": "Similar to the state-of-the-art NER approaches BIBREF0 , BIBREF1 , BIBREF8 , BIBREF4 , BIBREF2 , BIBREF3 , we use both word embeddings and character embeddings.", "id": 130, "question": "Can named entities in SnapCaptions be discontigious?", "title": "Multimodal Named Entity Recognition for Short Social Media Posts"}, {"answers": ["", "10000"], "context": "Our MNER model is built on a Bi-LSTM and CRF hybrid model. 
We use the following implementation for the entity Bi-LSTM.", "id": 131, "question": "How large is their MNER SnapCaptions dataset?", "title": "Multimodal Named Entity Recognition for Short Social Media Posts"}, {"answers": ["A task for seq2seq model pra-training that recovers a masked document to its original form.", ""], "context": "Large pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 improved the state-of-the-art of various natural language understanding (NLU) tasks such as question answering (e.g., SQuAD; BIBREF5), natural language inference (e.g., MNLI; BIBREF6) as well as text classification BIBREF7. These models (i.e., large LSTMs; BIBREF8 or Transformers; BIBREF9) are pre-trained on large scale unlabeled text with language modeling BIBREF0, BIBREF1, masked language modeling BIBREF2, BIBREF4 and permutation language modeling BIBREF3 objectives. In NLU tasks, pre-trained language models are mostly used as text encoders.", "id": 132, "question": "What is masked document generation?", "title": "STEP: Sequence-to-Sequence Transformer Pre-training for Document Summarization"}, {"answers": ["", ""], "context": "This section introduces extractive and abstractive document summarization as well as pre-training methods for natural language processing tasks.", "id": 133, "question": "Which of the three pretraining tasks is the most helpful?", "title": "STEP: Sequence-to-Sequence Transformer Pre-training for Document Summarization"}, {"answers": ["", "Alignment points of the POS tags."], "context": "Neural machine translation (NMT) has gained a lot of attention recently due to its substantial improvements in machine translation quality achieving state-of-the-art performance for several languages BIBREF0 , BIBREF1 , BIBREF2 . The core architecture of neural machine translation models is based on the general encoder-decoder approach BIBREF3 . Neural machine translation is an end-to-end approach that learns to encode source sentences into distributed representations and decode these representations into sentences in the target language. Among the different neural MT models, attentional NMT BIBREF4 , BIBREF5 has become popular due to its capability to use the most relevant parts of the source sentence at each translation step. This capability also makes the attentional model superior in translating longer sentences BIBREF4 , BIBREF5 .", "id": 134, "question": "What useful information does attention capture?", "title": "What does Attention in Neural Machine Translation Pay Attention to?"}, {"answers": ["", ""], "context": "liu-EtAl:2016:COLING investigate how training the attention model in a supervised manner can benefit machine translation quality. To this end they use traditional alignments obtained by running automatic alignment tools (GIZA++ BIBREF10 and fast_align BIBREF11 ) on the training data and feed it as ground truth to the attention network. They report some improvements in translation quality arguing that the attention model has learned to better align source and target words. The approach of training attention using traditional alignments has also been proposed by others BIBREF9 , BIBREF8 . chen2016guided show that guided attention with traditional alignment helps in the domain of e-commerce data which includes lots of out of vocabulary (OOV) product names and placeholders, but not much in the other domains. 
alkhouli-EtAl:2016:WMT have separated the alignment model and translation model, reasoning that this avoids propagation of errors from one model to the other as well as providing more flexibility in the model types and training of the models. They use a feed-forward neural network as their alignment model that learns to model jumps in the source side using HMM/IBM alignments obtained by using GIZA++.", "id": 135, "question": "What datasets are used?", "title": "What does Attention in Neural Machine Translation Pay Attention to?"}, {"answers": ["For certain POS tags, e.g. VERB, PRON.", ""], "context": "This section provides a short background on attention and discusses two most popular attention models which are also used in this paper. The first model is a non-recurrent attention model which is equivalent to the \u201cglobal attention\" method proposed by DBLPjournalscorrLuongPM15. The second attention model that we use in our investigation is an input-feeding model similar to the attention model first proposed by bahdanau-EtAl:2015:ICLR and turned to a more general one and called input-feeding by DBLPjournalscorrLuongPM15. Below we describe the details of both models.", "id": 136, "question": "In what cases is attention different from alignment?", "title": "What does Attention in Neural Machine Translation Pay Attention to?"}, {"answers": ["", ""], "context": "State-of-the-art automatic speech recognition (ASR) systems BIBREF0 have large model capacities and require significant quantities of training data to generalize. Labeling thousands of hours of audio, however, is expensive and time-consuming. A natural question to ask is how to achieve better generalization with fewer training examples. Active learning studies this problem by identifying and labeling only the most informative data, potentially reducing sample complexity. How much active learning can help in large-scale, end-to-end ASR systems, however, is still an open question.", "id": 137, "question": "How do they calculate variance from the model outputs?", "title": "Active Learning for Speech Recognition: the Power of Gradients"}, {"answers": ["", ""], "context": "Denote INLINEFORM0 as an utterance and INLINEFORM1 the corresponding label (transcription). A speech recognition system models the conditional distribution INLINEFORM2 , where INLINEFORM3 are the parameters in the model, and INLINEFORM4 is typically implemented by a Recurrent Neural Network (RNN). A training set is a collection of INLINEFORM5 pairs, denoted as INLINEFORM6 . The parameters of the model are estimated by minimizing the negative log-likelihood on the training set: DISPLAYFORM0 ", "id": 138, "question": "How much data samples do they start with before obtaining the initial model labels?", "title": "Active Learning for Speech Recognition: the Power of Gradients"}, {"answers": ["", ""], "context": "Confidence scoring has been used extensively as a proxy for the informativeness of training samples. Specifically, an INLINEFORM0 is considered informative if the predictions are uniformly distributed over all the labels BIBREF2 , or if the best prediction of its label is with low probability BIBREF1 . 
By taking the instances which \u201cconfuse\u201d the model, these methods may effectively explore under-sampled regions of the input space.", "id": 139, "question": "Which model do they use for end-to-end speech recognition?", "title": "Active Learning for Speech Recognition: the Power of Gradients"}, {"answers": ["", ""], "context": "Intuitively, an instance can be considered informative if it results in large changes in model parameters. A natural measure of the change is gradient length, INLINEFORM0 . Motivated by this intuition, Expected Gradient Length (EGL) BIBREF3 picks the instances expected to have the largest gradient length. Since labels are unknown on INLINEFORM1 , EGL computes the expectation of the gradient norm over all possible labelings. BIBREF3 interprets EGL as \u201cexpected model change\u201d. In the following section, we formalize the intuition for EGL and show that it follows naturally from reducing the variance of an estimator.", "id": 140, "question": "Which dataset do they use?", "title": "Active Learning for Speech Recognition: the Power of Gradients"}, {"answers": ["Various tree structured neural networks including variants of Tree-LSTM, Tree-based CNN, RNTN, and non-tree models including variants of LSTMs, CNNs, residual, and self-attention based networks", "Sentence classification baselines: RNTN (Socher et al. 2013), AdaMC-RNTN (Dong et al. 2014), TE-RNTN (Qian et al. 2015), TBCNN (Mou et al. 2015), Tree-LSTM (Tai, Socher, and Manning 2015), AdaHT-LSTM-CM (Liu, Qiu, and Huang 2017), DC-TreeLSTM (Liu, Qiu, and Huang 2017), TE-LSTM (Huang, Qian, and Zhu 2017), BiConTree (Teng and Zhang 2017), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), TreeNet (Cheng et al. 2018), CNN (Kim 2014), AdaSent (Zhao, Lu, and Poupart 2015), LSTM-CNN (Zhou et al. 2016), byte-mLSTM (Radford, Jozefowicz, and Sutskever 2017), BCN + Char + CoVe (McCann et al. 2017), BCN + Char + ELMo (Peters et al. 2018). \nStanford Natural Language Inference baselines: Latent Syntax Tree-LSTM (Yogatama et al. 2017), Tree-based CNN (Mou et al. 2016), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), NSE (Munkhdalai and Yu 2017), Reinforced Self- Attention Network (Shen et al. 2018), Residual stacked encoders: (Nie and Bansal 2017), BiLSTM with generalized pooling (Chen, Ling, and Zhu 2018)."], "context": "One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, as natural language meanings are a function of their constituent parts. How best to construct a sentence representation from distributed word embeddings is an example domain of this larger issue. Even though sequential neural models such as recurrent neural networks (RNN) BIBREF0 and their variants including Long Short-Term Memory (LSTM) BIBREF1 and Gated Recurrent Unit (GRU) BIBREF2 have become the de-facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. 
convolutional neural networks (CNN) BIBREF3 or self-attention based models BIBREF4 .", "id": 141, "question": "Which baselines did they compare against?", "title": "Dynamic Compositionality in Recursive Neural Networks with Structure-aware Tag Representations"}, {"answers": ["", "Linear SVM, RBF SVM, and Random Forest"], "context": "Explanations of happenings in one's life, causal explanations, are an important topic of study in social, psychological, economic, and behavioral sciences. For example, psychologists have analyzed people's causal explanatory style BIBREF0 and found strong negative relationships with depression, passivity, and hostility, as well as positive relationships with life satisfaction, quality of life, and length of life BIBREF1 , BIBREF2 , BIBREF0 .", "id": 142, "question": "What baselines did they consider?", "title": "Causal Explanation Analysis on Social Media"}, {"answers": ["", ""], "context": "Identifying causal explanations in documents can be viewed as discourse relation parsing. The Penn Discourse Treebank (PDTB) BIBREF7 has a `Cause' and `Pragmatic Cause' discourse type under a general `Contingency' class and Rhetorical Structure Theory (RST) BIBREF8 has a `Relations of Cause'. In most cases, the development of discourse parsers has taken place in-domain, where researchers have used the existing annotations of discourse arguments in newswire text (e.g. Wall Street Journal) from the discourse treebank and focused on exploring different features and optimizing various types of models for predicting relations BIBREF9 , BIBREF10 , BIBREF11 . In order to further develop automated systems, researchers have proposed end-to-end discourse relation parsers, building models which are trained and evaluated on the annotated PDTB and RST Discourse Treebank (RST DT). These corpora consist of documents from Wall Street Journal (WSJ) which are much more well-organized and grammatical than social media texts BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 .", "id": 143, "question": "What types of social media did they consider?", "title": "Causal Explanation Analysis on Social Media"}, {"answers": ["intents are annotated manually with guidance from queries collected using a scoping crowdsourcing task", ""], "context": "Task-oriented dialog systems have become ubiquitous, providing a means for billions of people to interact with computers using natural language. Moreover, the recent influx of platforms and tools such as Google's DialogFlow or Amazon's Lex for building and deploying such systems makes them even more accessible to various industries and demographics across the globe.", "id": 144, "question": "How was the dataset annotated?", "title": "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction"}, {"answers": ["", ""], "context": "We introduce a new crowdsourced dataset of 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains. The dataset also includes 1,200 out-of-scope queries. 
Table TABREF2 shows examples of the data.", "id": 145, "question": "Which classifiers are evaluated?", "title": "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction"}, {"answers": ["", " 23,700 queries, including 22,500 in-scope queries covering 150 intents, which can be grouped into 10 general domains and 1,200 out-of-scope queries."], "context": "We defined the intents with guidance from queries collected using a scoping crowdsourcing task, which prompted crowd workers to provide questions and commands related to topic domains in the manner they would interact with an artificially intelligent assistant. We manually grouped data generated by scoping tasks into intents. To collect additional data for each intent, we used the rephrase and scenario crowdsourcing tasks proposed by BIBREF2. For each intent, there are 100 training queries, which is representative of what a team with a limited budget could gather while developing a task-driven dialog system. Along with the 100 training queries, there are 20 validation and 30 testing queries per intent.", "id": 146, "question": "What is the size of this dataset?", "title": "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction"}, {"answers": ["crowsourcing platform", "For ins scope data collection:crowd workers which provide questions and commands related to topic domains and additional data the rephrase and scenario crowdsourcing tasks proposed by BIBREF2 is used. \nFor out of scope data collection: from workers mistakes-queries written for one of the 150 intents that did not actually match any of the intents and using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere."], "context": "Out-of-scope queries were collected in two ways. First, using worker mistakes: queries written for one of the 150 intents that did not actually match any of the intents. Second, using scoping and scenario tasks with prompts based on topic areas found on Quora, Wikipedia, and elsewhere. To help ensure the richness of this additional out-of-scope data, each of these task prompts contributed to at most four queries. Since we use the same crowdsourcing method for collecting out-of-scope data, these queries are similar in style to their in-scope counterparts.", "id": 147, "question": "Where does the data come from?", "title": "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction"}, {"answers": ["", ""], "context": "Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200.", "id": 148, "question": "What are method improvements of F1 for paraphrase identification?", "title": "Dice Loss for Data-imbalanced NLP Tasks"}, {"answers": ["", ""], "context": "The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. 
Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights.", "id": 149, "question": "What are method's improvements of F1 for NER task for English and Chinese datasets?", "title": "Dice Loss for Data-imbalanced NLP Tasks"}, {"answers": ["", ""], "context": "The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP.", "id": 150, "question": "What are method's improvements of F1 w.r.t. baseline BERT tagger for Chinese POS datasets?", "title": "Dice Loss for Data-imbalanced NLP Tasks"}, {"answers": ["", ""], "context": "For illustration purposes, we use the binary classification task to demonstrate how different losses work. The mechanism can be easily extended to multi-class classification.", "id": 151, "question": "How are weights dynamically adjusted?", "title": "Dice Loss for Data-imbalanced NLP Tasks"}, {"answers": ["", "Answer with content missing: (Parent subsections) combine precisions for n-gram orders 1-4"], "context": "The task of generating natural language descriptions of structured data (such as tables) BIBREF2 , BIBREF3 , BIBREF4 has seen a growth in interest with the rise of sequence to sequence models that provide an easy way of encoding tables and generating text from them BIBREF0 , BIBREF1 , BIBREF5 , BIBREF6 .", "id": 152, "question": "Ngrams of which length are aligned using PARENT?", "title": "Handling Divergent Reference Texts when Evaluating Table-to-Text Generation"}, {"answers": ["about 500", ""], "context": "We briefly review the task of generating natural language descriptions of semi-structured data, which we refer to as tables henceforth BIBREF11 , BIBREF12 . Tables can be expressed as set of records INLINEFORM0 , where each record is a tuple (entity, attribute, value). When all the records are about the same entity, we can truncate the records to (attribute, value) pairs. 
For example, for the table in Figure FIGREF2 , the records are {(Birth Name, Michael Dahlquist), (Born, December 22 1965), ...}. The task is to generate a text INLINEFORM1 which summarizes the records in a fluent and grammatical manner. For training and evaluation we further assume that we have a reference description INLINEFORM2 available for each table. We let INLINEFORM3 denote an evaluation set of tables, references and texts generated from a model INLINEFORM4 , and INLINEFORM5 , INLINEFORM6 denote the collection of n-grams of order INLINEFORM7 in INLINEFORM8 and INLINEFORM9 , respectively. We use INLINEFORM10 to denote the count of n-gram INLINEFORM11 in INLINEFORM12 , and INLINEFORM13 to denote the minimum of its counts in INLINEFORM14 and INLINEFORM15 . Our goal is to assign a score to the model, which correlates highly with human judgments of the quality of that model.", "id": 153, "question": "How many people participated in their evaluation study of table-to-text models?", "title": "Handling Divergent Reference Texts when Evaluating Table-to-Text Generation"}, {"answers": ["Best proposed metric has average correlation with human judgement of 0.913 and 0.846 compared to best compared metrics result of 0.758 and 0.829 on WikiBio and WebNLG challenge.", "Their average correlation tops the best other model by 0.155 on WikiBio."], "context": "PARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 .", "id": 154, "question": "By how much more does PARENT correlate with human judgements in comparison to other text generation metrics?", "title": "Handling Divergent Reference Texts when Evaluating Table-to-Text Generation"}, {"answers": ["Energy with accuracy of 0.538", "Energy"], "context": "Natural Language Processing (NLP) has increasingly attracted the attention of the financial community. This trend can be explained by at least three major factors. The first factor refers to the business perspective. It is the economics of gaining competitive advantage using alternative sources of data and going beyond historical stock prices, thus, trading by analyzing market news automatically. The second factor is the major advancements in the technologies to collect, store, and query massive amounts of user-generated data almost in real-time. The third factor refers to the progress made by the NLP community in understanding unstructured text. Over the last decades the number of studies using NLP for financial forecasting has experienced exponential growth. According to BIBREF0 , until 2008, less than five research articles were published per year mentioning both \u201cstock market\u201d and \u201ctext mining\u201d or \u201csentiment analysis\u201d keywords. In 2012, this number increased to slightly more than ten articles per year. The last numbers available for 2016 indicates this has increased to sixty articles per year.", "id": 155, "question": "Which stock market sector achieved the best performance?", "title": "Multimodal deep learning for short-term stock volatility prediction"}, {"answers": ["", ""], "context": "Recurrent neural networks (RNNs), including gated variants such as the long short-term memory (LSTM) BIBREF0 have become the standard model architecture for deep learning approaches to sequence modeling tasks. RNNs repeatedly apply a function with trainable parameters to a hidden state. 
Recurrent layers can also be stacked, increasing network depth, representational power and often accuracy. RNN applications in the natural language domain range from sentence classification BIBREF1 to word- and character-level language modeling BIBREF2 . RNNs are also commonly the basic building block for more complex models for tasks such as machine translation BIBREF3 , BIBREF4 , BIBREF5 or question answering BIBREF6 , BIBREF7 . Unfortunately standard RNNs, including LSTMs, are limited in their capability to handle tasks involving very long sequences, such as document classification or character-level machine translation, as the computation of features or states for different parts of the document cannot occur in parallel.", "id": 156, "question": "What languages pairs are used in machine translation?", "title": "Quasi-Recurrent Neural Networks"}, {"answers": ["", ""], "context": "Each layer of a quasi-recurrent neural network consists of two kinds of subcomponents, analogous to convolution and pooling layers in CNNs. The convolutional component, like convolutional layers in CNNs, allows fully parallel computation across both minibatches and spatial dimensions, in this case the sequence dimension. The pooling component, like pooling layers in CNNs, lacks trainable parameters and allows fully parallel computation across minibatch and feature dimensions.", "id": 157, "question": "What sentiment classification dataset is used?", "title": "Quasi-Recurrent Neural Networks"}, {"answers": ["", ""], "context": "Motivated by several common natural language tasks, and the long history of work on related architectures, we introduce several extensions to the stacked QRNN described above. Notably, many extensions to both recurrent and convolutional models can be applied directly to the QRNN as it combines elements of both model types.", "id": 158, "question": "What pooling function is used?", "title": "Quasi-Recurrent Neural Networks"}, {"answers": ["", ""], "context": "", "id": 159, "question": "Do they report results only on English?", "title": "NeuronBlocks: Building Your NLP DNN Models Like Playing Lego"}, {"answers": ["", ""], "context": "There are several general-purpose deep learning frameworks, such as TensorFlow, PyTorch and Keras, which have gained popularity in NLP community. These frameworks offer huge flexibility in DNN model design and support various NLP tasks. However, building models under these frameworks requires a large overhead of mastering these framework details. Therefore, higher level abstraction to hide the framework details is favored by many engineers.", "id": 160, "question": "What neural network modules are included in NeuronBlocks?", "title": "NeuronBlocks: Building Your NLP DNN Models Like Playing Lego"}, {"answers": ["By conducting a survey among engineers", ""], "context": "", "id": 161, "question": "How do the authors evidence the claim that many engineers find it a big overhead to choose from multiple frameworks, models and optimization techniques?", "title": "NeuronBlocks: Building Your NLP DNN Models Like Playing Lego"}, {"answers": ["Dataset of total 3500 questions from the Internet and other sources such as books of general knowledge questions, history, etc.", "3500 questions collected from the internet and books."], "context": "Question classification (QC) deals with question analysis and question labeling based on the expected answer type. The goal of QC is to assign classes accurately to the questions based on expected answer. 
In modern system, there are two types of questions BIBREF0. One is Factoid question which is about providing concise facts and another one is Complex question that has a presupposition which is complex. Question Answering (QA) System is an integral part of our daily life because of the high amount of usage of Internet for information acquisition. In recent years, most of the research works related to QA are based on English language such as IBM Watson, Wolfram Alpha. Bengali speakers often fall in difficulty while communicating in English BIBREF1.", "id": 162, "question": "what datasets did they use?", "title": "A Comprehensive Comparison of Machine Learning Based Methods Used in Bengali Question Classification"}, {"answers": ["", ""], "context": "Over the years, a handful of QA systems have gained popularity around the world. One of the oldest QA system is BASEBALL (created on 1961) BIBREF4 which answers question related to baseball league in America for a particular season. LUNAR BIBREF5 system answers questions about soil samples taken from Apollo lunar exploration. Some of the most popular QA Systems are IBM Watson, Apple Siri and Wolfram Alpha. Examples of some QA systems based on different languages are: Zhang Yu Chinese question classification BIBREF6 based on Incremental Modified Bayes, Arabic QA system (AQAS) BIBREF7 by F. A. Mohammed, K. Nasser, & H. M. Harb and Syntactic open domain Arabic QA system for factoid questions BIBREF8 by Fareed et al. QA systems have been built on different analysis methods such as morphological analysis BIBREF9, syntactical analysis BIBREF10, semantic analysis BIBREF11 and expected answer Type analysis BIBREF12.", "id": 163, "question": "what ml based approaches were compared?", "title": "A Comprehensive Comparison of Machine Learning Based Methods Used in Bengali Question Classification"}, {"answers": ["", ""], "context": "Recently, neural machine translation (NMT) has gained popularity in the field of machine translation. The conventional encoder-decoder NMT proposed by Cho2014 uses two recurrent neural networks (RNN): one is an encoder, which encodes a source sequence into a fixed-length vector, and the other is a decoder, which decodes the vector into a target sequence. A newly proposed attention-based NMT by DzmitryBahdana2014 can predict output words using the weights of each hidden state of the encoder by the attention mechanism, improving the adequacy of translation.", "id": 164, "question": "Is pre-training effective in their evaluation?", "title": "English-Japanese Neural Machine Translation with Encoder-Decoder-Reconstructor"}, {"answers": ["", ""], "context": "Several studies have addressed the NMT-specific problem of missing or repeating words. Niehues2016 optimized NMT by adding the outputs of PBSMT to the input of NMT. Mi2016a and Feng2016 introduced a distributed version of coverage vector taken from PBSMT to consider which words have been already translated. All these methods, including ours, employ information of the source sentence to improve the quality of translation, but our method uses back-translation to ensure that there is no inconsistency. 
Unlike other methods, once learned, our method is identical to the conventional NMT model, so it does not need any additional parameters such as coverage vector or a PBSMT system for testing.", "id": 165, "question": "What parallel corpus did they use?", "title": "English-Japanese Neural Machine Translation with Encoder-Decoder-Reconstructor"}, {"answers": ["Best proposed model result vs best previous result:\nArxiv dataset: Rouge 1 (43.62 vs 42.81), Rouge L (29.30 vs 31.80), Meteor (21.78 vs 21.35)\nPubmed dataset: Rouge 1 (44.85 vs 44.29), Rouge L (31.48 vs 35.21), Meteor (20.83 vs 20.56)", "On arXiv dataset, the proposed model outperforms baselie model by (ROUGE-1,2,L) 0.67 0.72 0.77 respectively and by Meteor 0.31.\n"], "context": "Single-document summarization is the task of generating a short summary for a given document. Ideally, the generated summaries should be fluent and coherent, and should faithfully maintain the most important information in the source document. purpleThis is a very challenging task, because it arguably requires an in-depth understanding of the source document, and current automatic solutions are still far from human performance BIBREF0 .", "id": 166, "question": "How much does their model outperform existing models?", "title": "Extractive Summarization of Long Documents by Combining Global and Local Context"}, {"answers": ["", ""], "context": "Traditional extractive summarization methods are mostly based on explicit surface features BIBREF10 , relying on graph-based methods BIBREF11 , or on submodular maximization BIBREF12 . Benefiting from the success of neural sequence models in other NLP tasks, chenglapata propose a novel approach to extractive summarization based on neural networks and continuous sentence features, which outperforms traditional methods on the DailyMail dataset. In particular, they develop a general encoder-decoder architecture, where a CNN is used as sentence encoder, a uni-directional LSTM as document encoder, with another uni-directional LSTM as decoder. To decrease the number of parameters while maintaining the accuracy, summarunner present SummaRuNNer, a simple RNN-based sequence classifier without decoder, outperforming or matching the model of BIBREF2 . They take content, salience, novelty, and position of each sentence into consideration when deciding if a sentence should be included in the extractive summary. Yet, they do not capture any aspect of the topical structure, as we do in this paper. So their approach would arguably suffer when applied to long documents, likely containing multiple and diverse topics.", "id": 167, "question": "What do they mean by global and local context?", "title": "Extractive Summarization of Long Documents by Combining Global and Local Context"}, {"answers": ["", ""], "context": "Propaganda aims at influencing people's mindset with the purpose of advancing a specific agenda. In the Internet era, thanks to the mechanism of sharing in social networks, propaganda campaigns have the potential of reaching very large audiences BIBREF0, BIBREF1, BIBREF2.", "id": 168, "question": "What are the 18 propaganda techniques?", "title": "Findings of the NLP4IF-2019 Shared Task on Fine-Grained Propaganda Detection"}, {"answers": ["", ""], "context": "Propaganda has been tackled mostly at the article level. BIBREF3 created a corpus of news articles labelled as propaganda, trusted, hoax, or satire. BIBREF4 experimented with a binarized version of that corpus: propaganda vs. the other three categories. 
BIBREF5 annotated a large binary corpus of propagandist vs. non-propagandist articles and proposed a feature-based system for discriminating between them. In all these cases, the labels were obtained using distant supervision, assuming that all articles from a given news outlet share the label of that outlet, which inevitably introduces noise BIBREF6.", "id": 169, "question": "What dataset was used?", "title": "Findings of the NLP4IF-2019 Shared Task on Fine-Grained Propaganda Detection"}, {"answers": ["The baseline system for the SLC task is a very simple logistic regression classifier with default parameters. The baseline for the FLC task generates spans and selects one of the 18 techniques randomly.", ""], "context": "Propaganda uses psychological and rhetorical techniques to achieve its objective. Such techniques include the use of logical fallacies and appeal to emotions. For the shared task, we use 18 techniques that can be found in news articles and can be judged intrinsically, without the need to retrieve supporting information from external resources. We refer the reader to BIBREF10 for more details on the propaganda techniques; below we report the list of techniques:", "id": 170, "question": "What was the baseline for this task?", "title": "Findings of the NLP4IF-2019 Shared Task on Fine-Grained Propaganda Detection"}, {"answers": ["", "The matrix containing co-occurrences of the words which occur with the both words of every given pair of words."], "context": "Measures of semantic similarity and relatedness quantify the degree to which two concepts are similar (e.g., INLINEFORM0 \u2013 INLINEFORM1 ) or related (e.g., INLINEFORM2 \u2013 INLINEFORM3 ). Semantic similarity can be viewed as a special case of semantic relatedness \u2013 to be similar is one of many ways that a pair of concepts may be related. The automated discovery of groups of semantically similar or related terms is critical to improving the retrieval BIBREF0 and clustering BIBREF1 of biomedical and clinical documents, and the development of biomedical terminologies and ontologies BIBREF2 .", "id": 171, "question": "What is a second order co-ocurrence matrix?", "title": "Improving Correlation with Human Judgments by Integrating Semantic Similarity with Second--Order Vectors"}, {"answers": ["", "16"], "context": "This section describes the similarity and relatedness measures we integrate in our second\u2013order co\u2013occurrence vectors. We use two taxonomies in this study, SNOMED\u2013CT and MeSH. SNOMED\u2013CT (Systematized Nomenclature of Medicine Clinical Terms) is a comprehensive clinical terminology created for the electronic representation of clinical health information. MeSH (Medical Subject Headings) is a taxonomy of biomedical terms developed for indexing biomedical journal articles.", "id": 172, "question": "How many humans participated?", "title": "Improving Correlation with Human Judgments by Integrating Semantic Similarity with Second--Order Vectors"}, {"answers": ["", ""], "context": "Measures of semantic similarity can be classified into three broad categories : path\u2013based, feature\u2013based and information content (IC). Path\u2013based similarity measures use the structure of a taxonomy to measure similarity \u2013 concepts positioned close to each other are more similar than those further apart. Feature\u2013based methods rely on set theoretic measures of overlap between features (union and intersection). 
The information content measures quantify the amount of information that a concept provides \u2013 more specific concepts have a higher amount of information content.", "id": 173, "question": "What embedding techniques are explored in the paper?", "title": "Improving Correlation with Human Judgments by Integrating Semantic Similarity with Second--Order Vectors"}, {"answers": ["", ""], "context": "Single-relation factoid questions are the most common form of questions found in search query logs and community question answering websites BIBREF1 , BIBREF2 . A knowledge-base (KB) such as Freebase, DBpedia, or Wikidata can help answer such questions after users reformulate them as queries. For instance, the question Where was Barack Obama born? can be answered by issuing the following KB query: $\n\\lambda (x).place\\_of\\_birth(Barack\\_Obama, x)\n$ ", "id": 174, "question": "Do the authors also try the model on other datasets?", "title": "Character-Level Question Answering with Attention"}, {"answers": ["None", "Word-level Memory Neural Networks (MemNNs) proposed in Bordes et al. (2015)"], "context": "Our work is motivated by three major threads of research in machine learning and natural language processing: semantic-parsing for open-domain question answering, character-level language modeling, and encoder-decoder methods.", "id": 175, "question": "What word level and character level model baselines are used?", "title": "Character-Level Question Answering with Attention"}, {"answers": ["", ""], "context": "The use of RNNs in the field of Statistical Machine Translation (SMT) has revolutionised the approaches to automated translation. As opposed to traditional shallow SMT models, which require a lot of memory to run, these neural translation models require only a small fraction of memory used, about 5% BIBREF0 . Also, neural translation models are optimized such that every module is trained to jointly improve translation quality. With that being said, one of the main downsides of neural translation models is the heavy corpus requirement in order to ensure learning of deeper contexts. This is where the application of these encoder decoder architectures in translation to and/or from morphologically rich languages takes a severe hit.", "id": 176, "question": "By how much do they improve the efficacy of the attention mechanism?", "title": "Improving the Performance of Neural Machine Translation Involving Morphologically Rich Languages"}, {"answers": ["50 human annotators ranked a random sample of 100 translations by Adequacy, Fluency and overall ranking on a 5-point scale.", ""], "context": "The corpus selected for this experiment was a combination of different corpora from various domains. The major part of the corpus was made up by the EnTam v2 corpus BIBREF2 . This corpus contained sentences taken from parallel news articles, English and Tamil bible corpus and movie subtitles. It also comprised of a tourism corpus that was obtained from TDIL (Technology Development for Indian Languages) and a corpus created from Tamil novels and short stories from AU-KBC, Anna university. The complete corpus consisted of 197,792 sentences. Fig. 
FIGREF20 shows the skinny shift and heatmap representations of the relativity between the sentences in terms of their sentence lengths.", "id": 177, "question": "How were the human judgements assembled?", "title": "Improving the Performance of Neural Machine Translation Involving Morphologically Rich Languages"}, {"answers": ["", ""], "context": "Reordering in machine translation (MT) is a crucial process to get the correct translation output word order given an input source sentence, as word order reflects meaning. It remains a major challenge, especially for language pairs with a significant word order difference. Phrase-based MT systems BIBREF0 generally adopt a reordering model that predicts reordering based on the span of a phrase and that of the adjacent phrase BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 .", "id": 178, "question": "Did they only experiment with one language pair?", "title": "To Swap or Not to Swap? Exploiting Dependency Word Pairs for Reordering in Statistical Machine Translation"}, {"answers": ["Akbik et al. (2018), Link et al. (2012)", "They compare to Akbik et al. (2018) and Link et al. (2012)."], "context": "Named entity recognition (NER) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 is the process by which we identify text spans which mention named entities, and to classify them into predefined categories such as person, location, organization etc. NER serves as the basis for a variety of natural language processing (NLP) applications such as relation extraction BIBREF4 , machine translation BIBREF5 , question answering BIBREF6 and knowledge base construction BIBREF7 . Although early NER systems have been successful in producing adequate recognition accuracy, they often require significant human effort in carefully designing rules or features.", "id": 179, "question": "Which other approaches do they compare their model with?", "title": "Fine-Grained Named Entity Recognition using ELMo and Wikidata"}, {"answers": ["F-1 score on the OntoNotes is 88%, and it is 53% on Wiki (gold).", ""], "context": "We evaluate our model on two publicly available datasets. The statistics for both are shown in Table TABREF3 . The details of these datasets are as follows:", "id": 180, "question": "What results do they achieve using their proposed approach?", "title": "Fine-Grained Named Entity Recognition using ELMo and Wikidata"}, {"answers": ["Entities from a deep learning model are linked to the related entities from a knowledge base by a lookup.", ""], "context": "NER involves identifying both entity boundaries and entity types. With \u201cexact-match evaluation\u201d, a named entity is considered correctly recognized only if both the boundaries and type match the ground truth BIBREF8 , BIBREF17 , BIBREF18 . Precision, Recall, and F-1 scores are computed on the number of true positives (TP), false positives (FP), and false negatives (FN). Their formal definitions are as follows:", "id": 181, "question": "How do they combine a deep learning model with a knowledge base?", "title": "Fine-Grained Named Entity Recognition using ELMo and Wikidata"}, {"answers": ["", "For speech synthesis, they build a speech clustergen statistical speech synthesizer BIBREF9. For speech recognition, they use Kaldi BIBREF11. For Machine Translation, they use a Transformer architecture from BIBREF15."], "context": "Recent years have seen unprecedented progress for Natural Language Processing (NLP) on almost every NLP subtask. 
Even though low-resource settings have also been explored, this progress has overwhelmingly been observed in languages with significant data resources that can be leveraged to train deep neural networks. Low-resource languages still lag behind.", "id": 182, "question": "What are the models used for the baseline of the three NLP tasks?", "title": "A Resource for Computational Experiments on Mapudungun"}, {"answers": ["", "Original transcription was labeled with additional labels in [] brackets with nonstandard pronunciation."], "context": "Mapudungun (iso 639-3: arn) is an indigenous language of the Americas spoken natively in Chile and Argentina, with an estimated 100 to 200 thousand speakers in Chile and 27 to 60 thousand speakers in Argentina BIBREF0. It is an isolate language and is classified as threatened by Ethnologue, hence the critical importance of all documentary efforts. Although the morphology of nouns is relatively simple, Mapudungun verb morphology is highly agglutinative and complex. Some analyses provide as many as 36 verb suffix slots BIBREF1. A typical complex verb form occurring in our corpus of spoken Mapudungun consists of five or six morphemes.", "id": 183, "question": "How is non-standard pronunciation identified?", "title": "A Resource for Computational Experiments on Mapudungun"}, {"answers": ["", ""], "context": "As observed by a recent article of Nature News BIBREF0 , \u201cWikipedia is among the most frequently visited websites in the world and one of the most popular places to tap into the world's scientific and medical information\". Despite the huge amount of consultations, open issues still threaten a fully confident fruition of the popular online open encyclopedia.", "id": 184, "question": "Is it valid to presume a bad medical wikipedia article should not contain much domain-specific jargon?", "title": "A matter of words: NLP for quality evaluation of Wikipedia medical articles"}, {"answers": ["clipped PMI; NNEGPMI", ""], "context": "Dense word vectors (or embeddings) are a key component in modern NLP architectures for tasks such as sentiment analysis, parsing, and machine translation. These vectors can be learned by exploiting the distributional hypothesis BIBREF0, paraphrased by BIBREF1 as \u201ca word is characterized by the company that it keeps\u201d, usually by constructing a cooccurrence matrix over a training corpus, re-weighting it using Pointwise Mutual Information ($\\mathit {PMI}$) BIBREF2, and performing a low-rank factorization to obtain dense vectors.", "id": 185, "question": "What novel PMI variants are introduced?", "title": "Why So Down? The Role of Negative (and Positive) Pointwise Mutual Information in Distributional Semantics"}, {"answers": ["", ""], "context": "There is a long history of studying weightings (also known as association measures) of general (not only word-context) cooccurrence matrices; see BIBREF3, BIBREF4 for an overview and BIBREF5 for comparison of different weightings. BIBREF6 show that word vectors derived from $\\mathit {PPMI}$ matrices perform better than alternative weightings for word-context cooccurrence. In the field of collocation extraction, BIBREF7 address the negative infinity issue with $\\mathit {PMI}$ by introducing the normalized $\\mathit {PMI}$ metric. BIBREF8 show theoretically that the popular Skip-gram model BIBREF9 performs implicit factorization of shifted $\\mathit {PMI}$.", "id": 186, "question": "What semantic and syntactic tasks are used as probes?", "title": "Why So Down? 
The Role of Negative (and Positive) Pointwise Mutual Information in Distributional Semantics"}, {"answers": ["It may lead to poor rare word representations and word analogies.", ""], "context": "PMI: A cooccurrence matrix $M$ is constructed by sliding a symmetric window over the subsampled BIBREF9 training corpus and for each center word $w$ and context word $c$ within the window, incrementing $M_{wc}$. $\\mathit {PMI}$ is then equal to:", "id": 187, "question": "What are the disadvantages to clipping negative PMI?", "title": "Why So Down? The Role of Negative (and Positive) Pointwise Mutual Information in Distributional Semantics"}, {"answers": ["", "A finite corpora may entirely omit rare word combinations"], "context": "In order to identify the role that $\\mathit {\\texttt {-}PMI}$ and $\\mathit {\\texttt {+}PMI}$ play in distributional semantics, we train LexVec models that skip SGD steps when target cell values are $>0$ or $\\le 0$, respectively. For example, $-\\mathit {CPMI}_{\\texttt {-}2}$ skips steps when $\\mathit {CPMI}_{\\texttt {-}2}(w,c) > 0$. Similarly, the $\\mathit {\\texttt {+}PPMI}$ model skips SGD steps when $\\mathit {PPMI}(w,c) \\le 0$. We compare these to models that include both negative and positive information to see how the two interact.", "id": 188, "question": "Why are statistics from finite corpora unreliable?", "title": "Why So Down? The Role of Negative (and Positive) Pointwise Mutual Information in Distributional Semantics"}, {"answers": ["", ""], "context": "Typical speech-to-text translation systems pipeline automatic speech recognition (ASR) and machine translation (MT) BIBREF0 . But high-quality ASR requires hundreds of hours of transcribed audio, while high-quality MT requires millions of words of parallel text\u2014resources available for only a tiny fraction of the world's estimated 7,000 languages BIBREF1 . Nevertheless, there are important low-resource settings in which even limited speech translation would be of immense value: documentation of endangered languages, which often have no writing system BIBREF2 , BIBREF3 ; and crisis response, for which text applications have proven useful BIBREF4 , but only help literate populations. In these settings, target translations may be available. For example, ad hoc translations may be collected in support of relief operations. Can we do anything at all with this data?", "id": 189, "question": "what is the domain of the corpus?", "title": "Towards speech-to-text translation without speech recognition"}, {"answers": ["", ""], "context": "For UTD we use the Zero Resource Toolkit (ZRTools; Jansen and Van Durme, 2011). ZRTools uses dynamic time warping (DTW) to discover pairs of acoustically similar audio segments, and then uses graph clustering on overlapping pairs to form a hard clustering of the discovered segments. Replacing each discovered segment with its unique cluster label, or pseudoterm, gives us a partial, noisy transcription, or pseudotext (Fig. FIGREF4 ).", "id": 190, "question": "what challenges are identified?", "title": "Towards speech-to-text translation without speech recognition"}, {"answers": ["", ""], "context": "Although we did not have access to a low-resource dataset, there is a corpus of noisy multi-speaker speech that simulates many of the conditions we expect to find in our motivating applications: the CALLHOME Spanish\u2013English speech translation dataset (LDC2014T23; Post el al., 2013). 
We ran UTD over all 104 telephone calls, which pair 11 hours of audio with Spanish transcripts and their crowdsourced English translations. The transcripts contain 168,195 Spanish word tokens (10,674 types), and the translations contain 159,777 English word tokens (6,723 types). Though our system does not require Spanish transcripts, we use them to evaluate UTD and to simulate a perfect UTD system, called the oracle.", "id": 191, "question": "what is the size of the speech corpus?", "title": "Towards speech-to-text translation without speech recognition"}, {"answers": ["Answer with content missing: (Whole Method and Results sections) Self-paced reading times widely benefit ERP prediction, while eye-tracking data seems to have more limited benefit to just the ELAN, LAN, and PNP ERP components.\nSelect:\n- ELAN, LAN\n- PNP ERP", ""], "context": "The cognitive processes involved in human language comprehension are complex and only partially identified. According to the dual-stream model of speech comprehension BIBREF1 , sound waves are first converted to phoneme-like features and further processed by a ventral stream that maps those features onto words and semantic structures, and a dorsal stream that (among other things) supports audio-short term memory. The mapping of words onto meaning is thought to be subserved by widely distributed regions of the brain that specialize in particular modalities \u2014 for example visual aspects of the word banana reside in the occipital lobe of the brain and are activated when the word banana is heard BIBREF2 \u2014 and the different representation modalities are thought to be integrated into a single coherent latent representation in the anterior temporal lobe BIBREF3 . While this part of meaning representation in human language comprehension is somewhat understood, much less is known about how the meanings of words are integrated together to form the meaning of sentences and discourses. One tool researchers use to study the integration of meaning across words is electroencephelography (EEG), which measures the electrical activity of large numbers of neurons acting in concert. EEG has the temporal resolution necessary to study the processes involved in meaning integration, and certain stereotyped electrical responses to word presentations, known as event-related potentials (ERPs), have been identified with some of the processes thought to contribute to comprehension.", "id": 192, "question": "Which two pairs of ERPs from the literature benefit from joint training?", "title": "Understanding language-elicited EEG data by predicting it from a fine-tuned language model"}, {"answers": ["Answer with content missing: (Whole Method and Results sections) The primary dataset we use is the ERP data collected and computed by Frank et al. (2015), and we also use behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013) which were collected on the same set of 205 sentences.\nSelect:\n- ERP data collected and computed by Frank et al. (2015)\n- behavioral data (eye-tracking data and self-paced reading times) from Frank et al. (2013)", ""], "context": "While a full discussion of each ERP component and the features of language thought to trigger each are beyond the scope of this document (for reviews see e.g. BIBREF0 , BIBREF2 , BIBREF4 , BIBREF5 , and BIBREF6 ), we introduce some basic features of ERP components to help in the discussion later. 
ERP components are electrical potential responses measured with respect to a baseline that are triggered by an event (in our case the presentation of a new word to a participant in an experiment). The name of each ERP component reflects whether the potential is positive or negative relative to the baseline. The N400 is so-named because it is Negative relative to a baseline (the baseline is typically recorded just before a word is presented at an electrode that is not affected by the ERP response) and because it peaks in magnitude at about 400ms after a word is presented to a participant in an experiment. The P600 is Positive relative to a baseline and peaks around 600ms after a word is presented to a participant (though its overall duration is much longer and less specific in time than the N400). The post-N400 positivity is so-named because it is part of a biphasic response; it is a positivity that occurs after the negativity associated with the N400. The early post-N400 positivity (EPNP) is also part of a biphasic response, but the positivity has an eariler onset than the standard PNP. Finally, the LAN and ELAN are the left-anterior negativity and early left-anterior negativity respectively. These are named for their timing, spatial distribution on the scalp, and direction of difference from the baseline. It is important to note that ERP components can potentially cancel and mask each other, and that it is difficult to precisely localize the neural activity that causes the changes in electrical potential at the electrodes where those changes are measured.", "id": 193, "question": "What datasets are used?", "title": "Understanding language-elicited EEG data by predicting it from a fine-tuned language model"}, {"answers": ["Universal Dependencies v1.2 treebanks for the following 16 languages: Bulgarian, Croatian, Czech, Danish, English, French, German,\nIndonesian, Italian, Norwegian, Persian, Polish, Portuguese, Slovenian, Spanish, and Swedish", ""], "context": "Part-of-speech tagging is now a classic task in natural language processing, for which many systems have been developed or adapted for a large variety of languages. Its aim is to associate each \u201cword\u201d with a morphosyntactic tag, whose granularity can range from a simple morphosyntactic category, or part-of-speech (hereafter PoS), to finer categories enriched with morphological features (gender, number, case, tense, mood, etc.).", "id": 194, "question": "which datasets did they experiment with?", "title": "External Lexical Information for Multilingual Part-of-Speech Tagging"}, {"answers": ["", ""], "context": "MElt BIBREF12 is a tagging system based on maximum entropy Markov models (MEMM) BIBREF5 , a class of discriminative models that are suitable for sequence labelling BIBREF5 . The basic set of features used by MElt is given in BIBREF12 . It is a superset of the feature sets used by BIBREF5 and BIBREF24 and includes both local standard features (for example the current word itself and its prefixes and suffixes of length 1 to 4) and contextual standard features (for example the tag just assigned to the preceding word). 
In particular, with respect to Ratnaparkhi's feature set, MElt's basic feature set lifts the restriction that local standard features used to analyse the internal composition of the current word should only apply to rare words.", "id": 195, "question": "which languages are explored?", "title": "External Lexical Information for Multilingual Part-of-Speech Tagging"}, {"answers": ["", ""], "context": "The polarization of actors' expressed preferences is a fundamental concern for studies of legislatures, court systems, and international politics BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Because preferences are unobservable, scholars must look for signals in the empirical world. Recent progress has been made in parliamentary and court settings through the employment of textual data BIBREF4 and votes and texts in tandem BIBREF5 , BIBREF6 . Many of these advances rely on spatial, scaling, and item-response type models that are intuitive for settings where a small number of parties or ideological divisions influence outcomes. This is less intuitive for the study of state preferences, because international relations is marked by multiple dimensions that span ideological, economic, and security concerns, among others BIBREF0 .", "id": 196, "question": "Do they use number of votes as an indicator of preference?", "title": "Disunited Nations? A Multiplex Network Approach to Detecting Preference Affinity Blocs using Texts and Votes"}, {"answers": ["", ""], "context": "Polarization in IR is defined as \u201cthe degree to which the foreign policies of nations within a single cluster are similar to each other, and the degree to which the foreign policies of nations in different clusters are dissimilar\" BIBREF9 . Therefore, operationalizing a concept of preference polarization broadly involves two steps: an approach to estimate preferences from available data on states' observable behavior; and a method of detecting distinct communities of nations, such that nations belonging to the same community share similar preferences, and nations belonging to different communities have dissimilar preferences.", "id": 197, "question": "What does a node in the network approach repesent?", "title": "Disunited Nations? A Multiplex Network Approach to Detecting Preference Affinity Blocs using Texts and Votes"}, {"answers": ["", ""], "context": "The most widely used source for deriving preferences in IR is UN roll call data BIBREF10 . Voting behavior represent a valuable source of revealed preference information, comparable across states and over time. However, UN roll call votes tend to be a weak signal of underlying preferences in cases where states vote for ceremonial purposes, are constrained by agenda-setting power dynamics, or vote as cohorts to maximize their impact within the UN, such as with regional blocs BIBREF11 .", "id": 198, "question": "Which dataset do they use?", "title": "Disunited Nations? A Multiplex Network Approach to Detecting Preference Affinity Blocs using Texts and Votes"}, {"answers": ["Amitabh Bachchan, Ariana Grande, Barack Obama, Bill Gates, Donald Trump,\nEllen DeGeneres, J K Rowling, Jimmy Fallon, Justin Bieber, Kevin Durant, Kim Kardashian, Lady Gaga, LeBron James,Narendra Modi, Oprah Winfrey", "Celebrities from varioius domains - Acting, Music, Politics, Business, TV, Author, Sports, Modeling. "], "context": "Social media platforms, particularly microblogging services such as Twitter, have become increasingly popular BIBREF0 as a means to express thoughts and opinions. 
Twitter users emit tweets about a wide variety of topics, which vary in the extent to which they reflect a user's personality, brand and interests. This observation motivates the question we consider here, of how to quantify the degree to which tweets are characteristic of their author?", "id": 199, "question": "What kind of celebrities do they obtain tweets from?", "title": "The Trumpiest Trump? Identifying a Subject's Most Characteristic Tweets"}, {"answers": ["", "Create the negated LAMA dataset and query the pretrained language models with both original LAMA and negated LAMA statements and compare their predictions."], "context": "Pretrained language models like Transformer-XL BIBREF1, ELMo BIBREF2 and BERT BIBREF3 have emerged as universal tools that capture a diverse range of linguistic and factual knowledge.", "id": 200, "question": "How did they extend LAMA evaluation framework to focus on negation?", "title": "Negated LAMA: Birds cannot fly"}, {"answers": ["LSA, TextRank, LexRank and ILP-based summary.", "LSA, TextRank, LexRank"], "context": "Performance appraisal (PA) is an important HR process, particularly for modern organizations that crucially depend on the skills and expertise of their workforce. The PA process enables an organization to periodically measure and evaluate every employee's performance. It also provides a mechanism to link the goals established by the organization to its each employee's day-to-day activities and performance. Design and analysis of PA processes is a lively area of research within the HR community BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 .", "id": 201, "question": "What summarization algorithms did the authors experiment with?", "title": "Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals"}, {"answers": ["", ""], "context": "We first review some work related to sentence classification. Semantically classifying sentences (based on the sentence's purpose) is a much harder task, and is gaining increasing attention from linguists and NLP researchers. McKnight and Srinivasan BIBREF7 and Yamamoto and Takagi BIBREF8 used SVM to classify sentences in biomedical abstracts into classes such as INTRODUCTION, BACKGROUND, PURPOSE, METHOD, RESULT, CONCLUSION. Cohen et al. BIBREF9 applied SVM and other techniques to learn classifiers for sentences in emails into classes, which are speech acts defined by a verb-noun pair, with verbs such as request, propose, amend, commit, deliver and nouns such as meeting, document, committee; see also BIBREF10 . Khoo et al. BIBREF11 uses various classifiers to classify sentences in emails into classes such as APOLOGY, INSTRUCTION, QUESTION, REQUEST, SALUTATION, STATEMENT, SUGGESTION, THANKING etc. Qadir and Riloff BIBREF12 proposes several filters and classifiers to classify sentences on message boards (community QA systems) into 4 speech acts: COMMISSIVE (speaker commits to a future action), DIRECTIVE (speaker expects listener to take some action), EXPRESSIVE (speaker expresses his or her psychological state to the listener), REPRESENTATIVE (represents the speaker's belief of something). Hachey and Grover BIBREF13 used SVM and maximum entropy classifiers to classify sentences in legal documents into classes such as FACT, PROCEEDINGS, BACKGROUND, FRAMING, DISPOSAL; see also BIBREF14 . Deshpande et al. 
BIBREF15 proposes unsupervised linguistic patterns to classify sentences into classes SUGGESTION, COMPLAINT.", "id": 202, "question": "What evaluation metrics were used for the summarization task?", "title": "Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals"}, {"answers": ["", ""], "context": "In this paper, we used the supervisor assessment and peer feedback text produced during the performance appraisal of 4528 employees in a large multi-national IT company. The corpus of supervisor assessment has 26972 sentences. The summary statistics about the number of words in a sentence is: min:4 max:217 average:15.5 STDEV:9.2 Q1:9 Q2:14 Q3:19.", "id": 203, "question": "What clustering algorithms were used?", "title": "Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals"}, {"answers": ["", ""], "context": "The PA corpus contains several classes of sentences that are of interest. In this paper, we focus on three important classes of sentences viz., sentences that discuss strengths (class STRENGTH), weaknesses of employees (class WEAKNESS) and suggestions for improving her performance (class SUGGESTION). The strengths or weaknesses are mostly about the performance in work carried out, but sometimes they can be about the working style or other personal qualities. The classes WEAKNESS and SUGGESTION are somewhat overlapping; e.g., a suggestion may address a perceived weakness. Following are two example sentences in each class.", "id": 204, "question": "What evaluation metrics are looked at for classification tasks?", "title": "Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals"}, {"answers": ["Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK and Pattern-based", "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK, Pattern-based approach"], "context": "We also explored whether a sentiment analyzer can be used as a baseline for identifying the class labels STRENGTH and WEAKNESS. We used an implementation of sentiment analyzer from TextBlob to get a polarity score for each sentence. Table TABREF13 shows the distribution of positive, negative and neutral sentiments across the 3 class labels STRENGTH, WEAKNESS and SUGGESTION. It can be observed that distribution of positive and negative sentiments is almost similar in STRENGTH as well as SUGGESTION sentences, hence we can conclude that the information about sentiments is not much useful for our classification problem.", "id": 205, "question": "What methods were used for sentence classification?", "title": "Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals"}, {"answers": ["", ""], "context": "After identifying sentences in each class, we can now answer question (1) in Section SECREF1 . From 12742 sentences predicted to have label STRENGTH, we extract nouns that indicate the actual strength, and cluster them using a simple clustering algorithm which uses the cosine similarity between word embeddings of these nouns. We repeat this for the 9160 sentences with predicted label WEAKNESS or SUGGESTION as a single class. Tables TABREF15 and TABREF16 show a few representative clusters in strengths and in weaknesses, respectively. We also explored clustering 12742 STRENGTH sentences directly using CLUTO BIBREF19 and Carrot2 Lingo BIBREF20 clustering algorithms. Carrot2 Lingo discovered 167 clusters and also assigned labels to these clusters. 
We then generated 167 clusters using CLUTO as well. CLUTO does not generate cluster labels automatically, hence we used 5 most frequent words within the cluster as its labels. Table TABREF19 shows the largest 5 clusters by both the algorithms. It was observed that the clusters created by CLUTO were more meaningful and informative as compared to those by Carrot2 Lingo. Also, it was observed that there is some correspondence between noun clusters and sentence clusters. E.g. the nouns cluster motivation expertise knowledge talent skill (Table TABREF15 ) corresponds to the CLUTO sentence cluster skill customer management knowledge team (Table TABREF19 ). But overall, users found the nouns clusters to be more meaningful than the sentence clusters.", "id": 206, "question": "What is the average length of the sentences?", "title": "Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals"}, {"answers": ["", ""], "context": "In many organizations, PA is done from a predefined set of perspectives, which we call attributes. Each attribute covers one specific aspect of the work done by the employees. This has the advantage that we can easily compare the performance of any two employees (or groups of employees) along any given attribute. We can correlate various performance attributes and find dependencies among them. We can also cluster employees in the workforce using their supervisor ratings for each attribute to discover interesting insights into the workforce. The HR managers in the organization considered in this paper have defined 15 attributes (Table TABREF20 ). Each attribute is essentially a work item or work category described at an abstract level. For example, FUNCTIONAL_EXCELLENCE covers any tasks, goals or activities related to the software engineering life-cycle (e.g., requirements analysis, design, coding, testing etc.) as well as technologies such as databases, web services and GUI.", "id": 207, "question": "What is the size of the real-life dataset?", "title": "Mining Supervisor Evaluation and Peer Feedback in Performance Appraisals"}, {"answers": ["", ""], "context": "Neural machine translation (NMT) emerged in the last few years as a very successful paradigm BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . While NMT is generally more fluent than previous statistical systems, adequacy is still a major concern BIBREF4 : common mistakes include dropping source words and repeating words in the generated translation.", "id": 208, "question": "What are the language pairs explored in this paper?", "title": "Sparse and Constrained Attention for Neural Machine Translation"}, {"answers": ["", ""], "context": "Deep learning, a sub-field of machine learning research, has driven the rapid progress in artificial intelligence research, leading to astonishing breakthroughs on long-standing problems in a plethora of fields such as computer vision and natural language processing. Tools powered by deep learning are changing the way movies are made, diseases are diagnosed, and play a growing role in understanding and communicating with humans.", "id": 209, "question": "Do they experiment with the toolkits?", "title": "GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing"}, {"answers": ["", ""], "context": "There is a recent spark of interest in the task of Question Answering (QA) over unstructured textual data, also referred to as Machine Reading Comprehension (MRC). 
This is mostly due to wide-spread success of advances in various facets of deep learning related research, such as novel architectures BIBREF0, BIBREF1 that allow for efficient optimisation of neural networks consisting of multiple layers, hardware designed for deep learning purposes and software frameworks BIBREF2, BIBREF3 that allow efficient development and testing of novel approaches. These factors enable researchers to produce models that are pre-trained on large scale corpora and provide contextualised word representations BIBREF4 that are shown to be a vital component towards solutions for a variety of natural language understanding tasks, including MRC BIBREF5. Another important factor that led to the recent success in MRC-related tasks is the widespread availability of various large datasets, e.g., SQuAD BIBREF6, that provide sufficient examples for optimising statistical models. The combination of these factors yields notable results, even surpassing human performance BIBREF7.", "id": 210, "question": "Have they made any attempt to correct MRC gold standards according to their findings? ", "title": "A Framework for Evaluation of Machine Reading Comprehension Gold Standards"}, {"answers": ["", ""], "context": "We define the task of machine reading comprehension, the target application of the proposed methodology as follows: Given a paragraph $P$ that consists of tokens (words) $p_1, \\ldots , p_{n_P}$ and a question $Q$ that consists of tokens $q_1 \\ldots q_{n_Q}$, the goal is to retrieve an answer $A$ with tokens $a_1 \\ldots a_{n_A}$. $A$ is commonly constrained to be one of the following cases BIBREF15, illustrated in Figure FIGREF9:", "id": 211, "question": "What features are absent from MRC gold standards that can result in potential lexical ambiguity?", "title": "A Framework for Evaluation of Machine Reading Comprehension Gold Standards"}, {"answers": ["", "MSMARCO, HOTPOTQA, RECORD, MULTIRC, NEWSQA, and DROP."], "context": "In this section we describe a methodology to categorise gold standards according to linguistic complexity, required reasoning and background knowledge, and their factual correctness. Specifically, we use those dimensions as high-level categories of a qualitative annotation schema for annotating question, expected answer and the corresponding context. We further enrich the qualitative annotations by a metric based on lexical cues in order to approximate a lower bound for the complexity of the reading comprehension task. By sampling entries from each gold standard and annotating them, we obtain measurable results and thus are able to make observations about the challenges present in that gold standard data.", "id": 212, "question": "What modern MRC gold standards are analyzed?", "title": "A Framework for Evaluation of Machine Reading Comprehension Gold Standards"}, {"answers": ["", ""], "context": "We are interested in different types of the expected answer. We differentiate between Span, where an answer is a continuous span taken from the passage, Paraphrasing, where the answer is a paraphrase of a text span, Unanswerable, where there is no answer present in the context, and Generated, if it does not fall into any of the other categories. It is not sufficient for an answer to restate the question or combine multiple Span or Paraphrasing answers to be annotated as Generated. It is worth mentioning that we focus our investigations on answerable questions. 
For a complementary qualitative analysis that categorises unanswerable questions, the reader is referred to Yatskar2019.", "id": 213, "question": "How does proposed qualitative annotation schema looks like?", "title": "A Framework for Evaluation of Machine Reading Comprehension Gold Standards"}, {"answers": ["", ""], "context": "With the advent of social media platforms, increasing user base address their grievances over these platforms, in the form of complaints. According to BIBREF0, complaint is considered to be a basic speech act used to express negative mismatch between the expectation and reality. Transportation and its related logistics industries are the backbones of every economy. Many transport organizations rely on complaints gathered via these platforms to improve their services, hence understanding these are important for: (1) linguists to identify human expressions of criticism and (2) organizations to improve their query response time and address concerns effectively.", "id": 214, "question": "How many tweets were collected?", "title": "An Iterative Approach for Identifying Complaint Based Tweets in Social Media Platforms"}, {"answers": ["", "English language"], "context": "We aimed to mimic the presence of sparse/noisy content distribution, mandating the need to curate a novel dataset via specific lexicons. We scraped 500 random posts from recognized transport forum. A pool of 50 uni/bi-grams was created based on tf-idf representations, extracted from the posts, which was further pruned by annotators. Querying posts on Twitter with extracted lexicons led to a collection of $19,300$ tweets. In order to have lexical diversity, we added 2500 randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, the lexical characteristics act as information cues.", "id": 215, "question": "What language is explored in this paper?", "title": "An Iterative Approach for Identifying Complaint Based Tweets in Social Media Platforms"}, {"answers": ["", "", ""], "context": "Speech-to-Text translation (ST) is essential for a wide range of scenarios: for example in emergency calls, where agents have to respond emergent requests in a foreign language BIBREF0; or in online courses, where audiences and speakers use different languages BIBREF1. To tackle this problem, existing approaches can be categorized into cascaded method BIBREF2, BIBREF3, where a machine translation (MT) model translates outputs of an automatic speech recognition (ASR) system into target language, and end-to-end method BIBREF4, BIBREF5, where a single model learns acoustic frames to target word sequence mappings in one step towards the final objective of interest. Although the cascaded model remains the dominant approach due to its better performance, the end-to-end method becomes more and more popular because it has lower latency by avoiding inferences with two models and rectifies the error propagation in theory.", "id": 216, "question": "What are the baselines?", "title": "Bridging the Gap between Pre-Training and Fine-Tuning for End-to-End Speech Translation"}, {"answers": [""], "context": "End-to-end speech translation aims to translate a piece of audio into a target-language translation in one step. The raw speech signals are usually converted to sequences of acoustic features, e.g. Mel filterbank features. 
Here, we define the speech feature sequence as $\\mathbf {x} = (x_1, \\cdots , x_{T_x})$. The transcription and translation sequences are denoted as $\\mathbf {y^{s}} = (y_1^{s}, \\cdots , y_{T_s}^{s})$, and $\\mathbf {y^{t}} = (y_1^{t}, \\cdots , y_{T_t}^{t})$ respectively. Each symbol in $\\mathbf {y^{s}}$ or $\\mathbf {y^{t}}$ is an integer index of the symbol in a vocabulary $V_{src}$ or $V_{trg}$ respectively (e.g. $y^s_i=k, k\\in [0, |V_{src}|-1]$). In this work, we suppose that an ASR dataset, an MT dataset, and an ST dataset are available, denoted as $\\mathcal {A} = \\lbrace (\\mathbf {x_i}, \\mathbf {y^{s}_i})\\rbrace _{i=0}^I$, $\\mathcal {M} =\\lbrace (\\mathbf {y^{s}_j}, \\mathbf {y^{t}_j})\\rbrace _{j=0}^J$ and $ \\mathcal {S} =\\lbrace (\\mathbf {x_l}, \\mathbf {y^{t}_l})\\rbrace _{l=0}^L$ respectively. Given a new piece of audio $\\mathbf {x}$, our goal is to learn an end-to-end model to generate a translation sentence $\\mathbf {y^{t}}$ without generating an intermediate result $\\mathbf {y^{s}}$.", "id": 217, "question": "What is the attention module pretrained on?", "title": "Bridging the Gap between Pre-Training and Fine-Tuning for End-to-End Speech Translation"}, {"answers": ["two previous turns", ""], "context": "Language models play an important role in many natural language processing systems, such as in automatic speech recognition BIBREF0 , BIBREF1 and machine translation systems BIBREF2 , BIBREF3 . Recurrent neural network (RNN) based models BIBREF4 , BIBREF5 have recently shown success in language modeling, outperforming conventional n-gram based models. Long short-term memory BIBREF6 , BIBREF7 is a widely used RNN variant for language modeling due to its superior performance in capturing longer term dependencies.", "id": 218, "question": "How long of dialog history is captured?", "title": "Dialog Context Language Modeling with Recurrent Neural Networks"}, {"answers": ["", ""], "context": "Question answering (QA) has drawn a lot of attention in the past few years. QA tasks on images BIBREF0 have been widely studied, but most focused on understanding text documents BIBREF1 . A representative dataset in text QA is SQuAD BIBREF1 , in which several end-to-end neural models have accomplished promising performance BIBREF2 . Although there is a significant progress in machine comprehension (MC) on text documents, MC on spoken content is a much less investigated field. In spoken question answering (SQA), after transcribing spoken content into text by automatic speech recognition (ASR), typical approaches use information retrieval (IR) techniques BIBREF3 to find the proper answer from the ASR hypotheses. One attempt towards QA of spoken content is TOEFL listening comprehension by machine BIBREF4 . TOEFL is an English examination that tests the knowledge and skills of academic English for English learners whose native languages are not English. Another SQA corpus is Spoken-SQuAD BIBREF5 , which is automatically generated from SQuAD dataset through Google Text-to-Speech (TTS) system. 
Recently, ODSQA, an SQA corpus recorded by real speakers, was released BIBREF6 .", "id": 219, "question": "What evaluation metrics were used?", "title": "Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation"}, {"answers": ["Best results authors obtain is EM 51.10 and F1 63.11", "EM Score of 51.10"], "context": "In SQA, each sample is a triple, INLINEFORM0 , where INLINEFORM1 is a question in either spoken or text form, INLINEFORM2 is a multi-sentence spoken-form document, and INLINEFORM3 is the answer in text form. The task of this work is extractive SQA; that means INLINEFORM4 is a word span from the reference transcription of INLINEFORM5 . An overview framework of SQA is shown in Figure FIGREF1 . In this paper, we frame the source domain as reference transcriptions and the target domain as ASR hypotheses. Hence, we can collect source domain data more easily, and adapt the model to the target domain.", "id": 220, "question": "What was the score of the proposed model?", "title": "Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation"}, {"answers": ["", ""], "context": "The architecture of the QA model we used is briefly summarized below. Here we choose QANet BIBREF2 as the base model due to the following reasons: 1) it achieves the second best performance on SQuAD, and 2) since there are completely no recurrent networks in QANet, its training speed is 5x faster than BiDAF BIBREF17 when reaching the same performance on SQuAD.", "id": 221, "question": "What was the previous best model?", "title": "Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation"}, {"answers": ["", ""], "context": "The main focus of this paper is to apply domain adaptation for SQA. In this approach, we have two SQA models (QANets), one trained from target domain data (ASR hypotheses) and another trained from source domain data (reference transcriptions). Because the two domains share common information, some layers in these two models can be tied in order to model the shared features. Hence, we can choose whether each layer in the QA model should be shared. Tying the weights between the source layer and the target layer in order to learn a symmetric mapping projects both source and target domain data into a shared common space. Different combinations will be investigated in our experiments.", "id": 222, "question": "Which datasets did they use for evaluation?", "title": "Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation"}, {"answers": ["Dimension size, window size, architecture, algorithm, epochs, hidden dimension size, learning rate, loss function, optimizer algorithm.", "Hyperparameters explored were: dimension size, window size, architecture, algorithm and epochs."], "context": "There have been many implementations of the word2vec model in either of the two architectures it provides: continuous skipgram and CBoW (BIBREF0). Similar distributed models of word or subword embeddings (or vector representations) find usage in sota, deep neural networks like BERT and its successors (BIBREF1, BIBREF2, BIBREF3). 
These deep networks generate contextual representations of words after being trained for extended periods on large corpora, unsupervised, using the attention mechanisms (BIBREF4).", "id": 223, "question": "What hyperparameters are explored?", "title": "Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks"}, {"answers": ["", ""], "context": "Breaking away from the non-distributed (high-dimensional, sparse) representations of words, typical of traditional bag-of-words or one-hot-encoding (BIBREF15), BIBREF0 created word2vec. Word2Vec consists of two shallow neural network architectures: continuous skipgram and CBoW. It uses distributed (low-dimensional, dense) representations of words that group similar words. This new model traded the complexity of deep neural network architectures, by other researchers, for more efficient training over large corpora. Its architectures have two training algorithms: negative sampling and hierarchical softmax (BIBREF16). The released model was trained on the Google News dataset of 100 billion words. Implementations of the model have been undertaken by researchers in the programming languages Python and C++, though the original was done in C (BIBREF17).", "id": 224, "question": "What Named Entity Recognition dataset is used?", "title": "Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks"}, {"answers": ["", ""], "context": "The models were generated in a shared cluster running Ubuntu 16 with 32 CPUs of 32x Intel Xeon 4110 at 2.1GHz. The Gensim (BIBREF17) Python library implementation of word2vec was used with parallelization to utilize all 32 CPUs. The downstream experiments were run on a Tesla GPU on a shared DGX cluster running Ubuntu 18. The PyTorch deep learning framework was used. Gensim was chosen because of its relative stability, popular support and to minimize the time required in writing and testing a new implementation in Python from scratch.", "id": 225, "question": "What sentiment analysis dataset is used?", "title": "Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks"}, {"answers": ["", ""], "context": "Table TABREF5 summarizes key results from the intrinsic evaluations for 300 dimensions. Table TABREF6 reveals the training time (in hours) and average embedding loading time (in seconds) representative of the various models used. Tables TABREF11 and TABREF12 summarize key results for the extrinsic evaluations. Figures FIGREF7, FIGREF9, FIGREF10, FIGREF13 and FIGREF14 present line graphs of the eight combinations for different dimension sizes for Simple Wiki, trend of Simple Wiki and Billion Word corpora over several dimension sizes, analogy score comparison for models across datasets, NER mean F1 scores on the GMB dataset and SA mean F1 scores on the IMDb dataset, respectively. Combination of the skipgram using hierarchical softmax and window size of 8 for 300 dimensions outperformed others in analogy scores for the Wiki Abstract. However, its results are so poor, because of the tiny file size, they're not worth reporting here. Hence, we'll focus on results from the Simple Wiki and Billion Word corpora.", "id": 226, "question": "Do they test both skipgram and c-bow?", "title": "Word2Vec: Optimal Hyper-Parameters and Their Impact on NLP Downstream Tasks"}, {"answers": ["", ""], "context": "Table-to-text generation is an important and challenging task in natural language processing, which aims to produce the summarization of a numerical table BIBREF0, BIBREF1. 
The related methods can be empirically divided into two categories, pipeline model and end-to-end model. The former consists of content selection, document planning and realisation, mainly for early industrial applications, such as weather forecasting and medical monitoring, etc. The latter generates text directly from the table through a standard neural encoder-decoder framework to avoid error propagation and has achieved remarkable progress. In this paper, we particularly focus on exploring how to improve the performance of neural methods on table-to-text generation.", "id": 227, "question": "What is the state-of-the-art model for the task?", "title": "Table-to-Text Generation with Effective Hierarchical Encoder on Three Dimensions (Row, Column and Time)"}, {"answers": ["", ""], "context": "The input to the model are tables $S=\\lbrace s^{1}, s^{2}, s^{3}\\rbrace $. $s^{1}$, $s^{2}$, and $s^{3}$ contain records about players' performance in home team, players' performance in visiting team and team's overall performance respectively. We regard each cell in the table as record. Each record $r$ consists of four types of information including value $r.v$ (e.g. 18), entity $r.e$ (e.g. Al Jefferson), type $r.c$ (e.g. POINTS) and a feature $r.f$ (e.g. visiting) which indicate whether a player or a team compete in home court or not. Each player or team takes one row in the table and each column contains a type of record such as points, assists, etc. Also, tables contain the date when the match happened and we let $k$ denote the date of the record. We also create timelines for records. The details of timeline construction is described in Section SECREF4. For simplicity, we omit table id $l$ and record date $k$ in the following sections and let $r_{i,j}$ denotes a record of $i^{th}$ row and $j^{th}$ column in the table. We assume the records come from the same table and $k$ is the date of the mentioned record. Given those information, the model is expected to generate text $y=(y_{1}, ..., y_{t}, ..., y_{T})$ describing these tables. $T$ denotes the length of the text.", "id": 228, "question": "What is the strong baseline?", "title": "Table-to-Text Generation with Effective Hierarchical Encoder on Three Dimensions (Row, Column and Time)"}, {"answers": ["The time devoted to self-coverage, opponent-coverage, and the number of adopted discussion points.", ""], "context": "Public debates are a common platform for presenting and juxtaposing diverging viewpoints As opposed to monologues where speakers are limited to expressing their own beliefs, debates allow for participants to interactively attack their opponents' points while defending their own. The resulting flow of ideas is a key feature of this conversation genre.", "id": 229, "question": "what aspects of conversation flow do they look at?", "title": "Conversational flow in Oxford-style debates"}, {"answers": ["", ""], "context": "In this study we use transcripts and results of Oxford-style debates from the public debate series \u201cIntelligence Squared Debates\u201d (IQ2 for short). These debates are recorded live, and contain motions covering a diversity of topics ranging from foreign policy issues to the benefits of organic food. Each debate consists of two opposing teams\u2014one for the motion and one against\u2014of two or three experts in the topic of the particular motion, along with a moderator. Each debate follows the Oxford-style format and consists of three rounds. 
In the introduction, each debater is given 7 minutes to lay out their main points. During the discussion, debaters take questions from the moderator and audience, and respond to attacks from the other team. This round lasts around 30 minutes and is highly interactive; teams frequently engage in direct conversation with each other. Finally, in the conclusion, each debater is given 2 minutes to make final remarks.", "id": 230, "question": "what debates dataset was used?", "title": "Conversational flow in Oxford-style debates"}, {"answers": ["Babelfy, DBpedia Spotlight, Entityclassifier.eu, FOX, LingPipe MUC-7, NERD-ML, Stanford NER, TagMe 2"], "context": "Information extraction tasks have become very important not only in the Web, but also for in-house enterprise settings. One of the crucial steps towards understanding natural language is named entity recognition (NER), which aims to extract mentions of entity names in text. NER is necessary for many higher-level tasks such as entity linking, relation extraction, building knowledge graphs, question answering and intent based search. In these scenarios, NER recall is critical, as candidates that are never generated can not be recovered later BIBREF0 .", "id": 231, "question": "what is the state of the art?", "title": "Robust Named Entity Recognition in Idiosyncratic Domains"}, {"answers": ["", "", ""], "context": "We abstract the task of NER as a sequential word labeling problem. Figure FIGREF15 illustrates an example for sequential transformation of a sentence into word labels. We express each sentence in a document as a sequence of words: INLINEFORM0 , e.g. INLINEFORM1 Aspirin. We define a mention as the longest possible span of adjacent tokens that refer to an entity or relevant concept of a real-world object, such as Aspirin (ASA). We further assume that mentions are non-recursive and non-overlapping. To encode boundaries of the mention span, we adapt the idea of ramshaw1995text, which has been adapted as the BIO2 standard in the CoNLL2003 shared task BIBREF15 . We assign labels INLINEFORM2 to each token to mark begin, inside and outside of a mention from left to right. We use the input sequence INLINEFORM3 together with a target sequence INLINEFORM4 of the same length that contains a BIO2 label for each word: INLINEFORM5 , e.g. INLINEFORM6 B. To predict the most likely label INLINEFORM7 of a token regarding its context, we utilize recurrent neural networks.", "id": 232, "question": "what standard dataset were used?", "title": "Robust Named Entity Recognition in Idiosyncratic Domains"}, {"answers": ["", ""], "context": "In this digital era, online discussions and interactions have become a vital part of daily life, of which a huge part is covered by social media platforms like Twitter, Facebook, Instagram, etc. Similar to real life, there exist anti-social elements in cyberspace who take advantage of the anonymous nature of the cyber world and indulge in vulgar and offensive communications. This includes bullying, trolling and harassment BIBREF0, BIBREF1, and has become a growing concern for governments. Youth experiencing such victimization were recorded to have psychological symptoms of anxiety, depression, loneliness BIBREF1. Thus it is important to identify and remove such behaviours as early as possible. 
One solution to this is the automatic detection using machine learning algorithms.", "id": 233, "question": "Do they perform error analysis?", "title": "Offensive Language Detection: A Comparative Analysis"}, {"answers": ["", ""], "context": "Data pre-processing is a very crucial step which needs to be done before applying any machine learning tasks, because the real time data could be very noisy and unstructured. For the two models used in this work, pre-processing of tweets is done separately:", "id": 234, "question": "How do their results compare to state-of-the-art?", "title": "Offensive Language Detection: A Comparative Analysis"}, {"answers": ["Random Kitchen Sink method uses a kernel function to map data vectors to a space where linear separation is possible.", ""], "context": "Word embeddings are ubiquitous for any NLP problem, as algorithms cannot process the plain text or strings in its raw form. Word emeddings are vectors that captures the semantic and contextual information of words. The word embedding used for this work are:", "id": 235, "question": "What is the Random Kitchen Sink approach?", "title": "Offensive Language Detection: A Comparative Analysis"}, {"answers": ["", ""], "context": "We participated in the WMT 2016 shared news translation task by building neural translation systems for four language pairs: English INLINEFORM0 Czech, English INLINEFORM1 German, English INLINEFORM2 Romanian and English INLINEFORM3 Russian. Our systems are based on an attentional encoder-decoder BIBREF0 , using BPE subword segmentation for open-vocabulary translation with a fixed vocabulary BIBREF1 . We experimented with using automatic back-translations of the monolingual News corpus as additional training data BIBREF2 , pervasive dropout BIBREF3 , and target-bidirectional models.", "id": 236, "question": "what are the baseline systems?", "title": "Edinburgh Neural Machine Translation Systems for WMT 16"}, {"answers": ["", ""], "context": "Equations are an important part of scientific articles, but many existing machine learning methods do not easily handle them. They are challenging to work with because each is unique or nearly unique; most equations occur only once. An automatic understanding of equations, however, would significantly benefit methods for analyzing scientific literature. Useful representations of equations can help draw connections between articles, improve retrieval of scientific texts, and help create tools for exploring and navigating scientific literature.", "id": 237, "question": "What word embeddings do they test?", "title": "Equation Embeddings"}, {"answers": ["By using Euclidean distance computed between the context vector representations of the equations", ""], "context": "Word embeddings were first introduced in BIBREF2 , BIBREF3 and there have been many variants BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Common for all of them is the idea that words can be represented by latent feature vectors. These feature vectors are optimized to maximize the conditional probability of the dataset. Recently BIBREF1 extended the idea of word embeddings to other types of data. EqEmb expand the idea of word embeddings to a new type of data points \u2013 equations.", "id": 238, "question": "How do they define similar equations?", "title": "Equation Embeddings"}, {"answers": ["", ""], "context": "Automated, or robotic, journalism aims at news generation from structured data sources, either as the final product or as a draft for subsequent post-editing. 
At present, automated journalism typically focuses on domains such as sports, finance and similar statistics-based reporting, where there is a commercial product potential due to the high volume of news, combined with the expectation of a relatively straightforward task.", "id": 239, "question": "What evaluation criteria and metrics were used to evaluate the generated text?", "title": "Template-free Data-to-Text Generation of Finnish Sports News"}, {"answers": ["", ""], "context": "In recent years, there has been a movement to leverage social media data to detect, estimate, and track the change in prevalence of disease. For example, eating disorders in Spanish language Twitter tweets BIBREF0 and influenza surveillance BIBREF1 . More recently, social media has been leveraged to monitor social risks such as prescription drug and smoking behaviors BIBREF2 , BIBREF3 , BIBREF4 as well as a variety of mental health disorders including suicidal ideation BIBREF5 , attention deficit hyperactivity disorder BIBREF6 and major depressive disorder BIBREF7 . In the case of major depressive disorder, recent efforts range from characterizing linguistic phenomena associated with depression BIBREF8 and its subtypes e.g., postpartum depression BIBREF5 , to identifying specific depressive symptoms BIBREF9 , BIBREF10 e.g., depressed mood. However, more research is needed to better understand the predictive power of supervised machine learning classifiers and the influence of feature groups and feature sets for efficiently classifying depression-related tweets to support mental health monitoring at the population-level BIBREF11 .", "id": 240, "question": "Do they evaluate only on English datasets?", "title": "Feature Studies to Inform the Classification of Depressive Symptoms from Twitter Data for Population Health"}, {"answers": ["", "reduced the dataset by eliminating features, apply feature selection to select highest ranked features to train and test the model and rank the performance of incrementally adding features."], "context": "Specifically, we conducted a feature ablation study to assess the informativeness of each feature group and a feature elimination study to determine the optimal feature sets for classifying Twitter tweets. We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., \u201cCitizens fear an economic depression\") or evidence of depression (e.g., \u201cdepressed over disappointment\"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., \u201cfeeling down in the dumps\"), disturbed sleep (e.g., \u201canother restless night\"), or fatigue or loss of energy (e.g., \u201cthe fatigue is unbearable\") BIBREF10 . 
For each class, every annotation (9,473 tweets) is binarized as the positive class e.g., depressed mood=1 or negative class e.g., not depressed mood=0.", "id": 241, "question": "What are the three steps to feature elimination?", "title": "Feature Studies to Inform the Classification of Depressive Symptoms from Twitter Data for Population Health"}, {"answers": ["", "The annotations are based on evidence of depression and further annotated by the depressive symptom if there is evidence of depression"], "context": "Furthermore, this dataset was encoded with 7 feature groups with associated feature values binarized (i.e., present=1 or absent=0) to represent potentially informative features for classifying depression-related classes. We describe the feature groups by type, subtype, and provide one or more examples of words representing the feature subtype from a tweet:", "id": 242, "question": "How is the dataset annotated?", "title": "Feature Studies to Inform the Classification of Depressive Symptoms from Twitter Data for Population Health"}, {"answers": ["", ""], "context": "Feature ablation studies are conducted to assess the informativeness of a feature group by quantifying the change in predictive power when comparing the performance of a classifier trained with the all feature groups versus the performance without a particular feature group. We conducted a feature ablation study by holding out (sans) each feature group and training and testing the support vector model using a linear kernel and 5-fold, stratified cross-validation. We report the average F1-score from our baseline approach (all feature groups) and report the point difference (+ or -) in F1-score performance observed by ablating each feature set.", "id": 243, "question": "What dataset is used for this study?", "title": "Feature Studies to Inform the Classification of Depressive Symptoms from Twitter Data for Population Health"}, {"answers": ["", ""], "context": "Medical search engines are an essential component for many online medical applications, such as online diagnosis systems and medical document databases. A typical online diagnosis system, for instance, relies on a medical search engine. The search engine takes as input a user query that describes some symptoms and then outputs clinical concept entries that provide relevant information to assist in diagnosing the problem. One challenge medical search engines face is the segmentation of individual clinical entities. When a user query consists of multiple clinical entities, a search engine would often fail to recognize them as separate entities. For example, the user query \u201cfever joint pain weight loss headache\u201d contains four separate clinical entities: \u201cfever\u201d, \u201cjoint pain\u201d, \u201cweight loss\u201d, and \u201cheadache\u201d. But when the search engine does not recognize them as separate entities and proceeds to retrieve results for each word in the query, it may find \"pain\" in body locations other than \"joint pain\", or it may miss \"headache\" altogether, for example. Some search engines allow the users to enter a single clinical concept by selecting from an auto-completion pick list. 
But this could also result in retrieving inaccurate or partial results and lead to a poor user experience.", "id": 244, "question": "what were their performance results?", "title": "Extracting clinical concepts from user queries"}, {"answers": ["", ""], "context": "An effective model that has been commonly used for the NER problem is a Bi-directional LSTM with a Conditional Random Field (CRF) on the top layer (BiLSTM-CRF), which is described in the next section. Combining LSTM\u2019s power of representing relations between words and CRF\u2019s capability of accounting for tag sequence constraints, Huang et al. BIBREF2 proposed the BiLSTM-CRF model and used handcrafted word features as the input to the model. Lample et al. BIBREF3 used a combination of character-level and word-level word embeddings as the input to BiLSTM-CRF. Since then, similar models with variation in types of word embeddings have been used extensively for clinical CE tasks and produced state-of-the-art results BIBREF4, BIBREF5, BIBREF6, BIBREF7. Word embeddings have become the cornerstone of the neural models in NLP since the famous Word2vec BIBREF8 model demonstrated its power in word analogy tasks. One well-known example is that after training Word2vec on a large amount of news data, we can get word relations such as $vector(^{\\prime }king^{\\prime }) - vector(^{\\prime }queen^{\\prime }) + vector(^{\\prime }woman^{\\prime }) \\approx vector(^{\\prime }man^{\\prime })$. More sophisticated word embedding techniques have emerged since Word2vec. It has been shown empirically that better quality in word embeddings leads to better performance in many downstream NLP tasks including entity tagging BIBREF9, BIBREF10. Recently, contextualized word embeddings generated by deep learning models, such as ELMo BIBREF11, BERT BIBREF12, and Flair BIBREF13, have been shown to be more effective in various NLP tasks. In our project, we make use of a fine-tuned ELMo model and a fine-tuned Flair model in the medical domain. We experiment with the word embeddings from the two fine-tuned models as the input to the BiLSTM-CRF model separately and compare the results.", "id": 245, "question": "where did they obtain the annotated clinical notes from?", "title": "Extracting clinical concepts from user queries"}, {"answers": ["", "In encoder they use convolutional, NIN and bidirectional LSTM layers and in decoder they use unidirectional LSTM "], "context": "Conventional large-vocabulary continuous speech recognition (LVCSR) systems typically perform multi-level pattern recognition tasks that map the acoustic speech waveform into a hierarchy of speech units such as sub-words (phonemes), words, and strings of words (sentences). Such systems basically consist of several sub-components (feature extractor, acoustic model, pronunciation lexicon, language model) that are trained and tuned separately BIBREF0 . First, the speech signal is processed into a set of observation features based on a carefully hand-crafted feature extractor, such as Mel frequency cepstral coefficients (MFCC) or Mel-scale spectrogram. Then the acoustic model classifies the observation features into sub-unit or phoneme classes. Finally, the search algorithm finds the most probable word sequence based on the evidence of the acoustic model, the lexicon, and the language model. 
But, it is widely known that information loss in the earlier stage can propagate through the later stages.", "id": 246, "question": "Which architecture do they use for the encoder and decoder?", "title": "Attention-based Wav2Text with Feature Transfer Learning"}, {"answers": ["", "Decoder predicts the sequence of phoneme or grapheme at each time based on the previous output and context information with a beam search strategy"], "context": "The encoder-decoder model is a neural network that directly models conditional probability INLINEFORM0 where INLINEFORM1 is the source sequence with length INLINEFORM2 and INLINEFORM3 is the target sequence with length INLINEFORM4 . It consists of encoder, decoder and attention modules. The encoder task processes an input sequence INLINEFORM5 and outputs representative information INLINEFORM6 for the decoder. The attention module is an extension scheme that assists the decoder to find relevant information on the encoder side based on the current decoder hidden states BIBREF12 , BIBREF13 . Usually, the attention module produces context information INLINEFORM7 at time INLINEFORM8 based on the encoder and decoder hidden states: DISPLAYFORM0 ", "id": 247, "question": "How does their decoder generate text?", "title": "Attention-based Wav2Text with Feature Transfer Learning"}, {"answers": ["", ""], "context": "Deep learning is well known for its ability to learn directly from low-level feature representation such as raw speech BIBREF1 , BIBREF3 . However, in most cases such models are already conditioned on a fixed input size and a single target output (i.e., predicting one phoneme class for each input frame). In the attention-based encoder-decoder model, the training process is not as easy as in a standard neural network model BIBREF10 because the attention-based model needs to jointly optimize three different modules simultaneously: (1) an encoder module for producing representative information from a source sequence; (2) an attention module for calculating the correct alignment; and (3) a decoder module for generating correct transcriptions. If one of these modules has difficulty fulfilling its own tasks, then the model will fail to produce good results.", "id": 248, "question": "Which dataset do they use?", "title": "Attention-based Wav2Text with Feature Transfer Learning"}, {"answers": ["", ""], "context": "Over the past few years, generating text from images and videos has gained a lot of attention in the Computer Vision and Natural Language Processing communities and several related tasks have been proposed, such as image labeling, image and video description and visual question answering. In particular, prominent results have been achieved in image description with various deep neural network architectures, e.g. BIBREF1 , BIBREF2 , BIBREF3 , BIBREF0 . However, the need of generating more narrative texts from images which may reflect experiences, rather than just listing objects and their attributes, has given rise to tasks such as visual storytelling BIBREF4 . This task is about generating a story from a sequence of images. Figure 1 shows the difference between descriptions of images in isolation and stories for images in sequence.", "id": 249, "question": "What model is used to encode the images?", "title": "Contextualize, Show and Tell: A Neural Visual Storyteller"}, {"answers": ["", ""], "context": "The work by BIBREF6 presented probably the first system for generating stories from an album of images. 
This early approach involved the use of the NYC and Disney datasets mined from blog posts by the authors.", "id": 250, "question": "How is the sequential nature of the story captured?", "title": "Contextualize, Show and Tell: A Neural Visual Storyteller"}, {"answers": ["", ""], "context": "Our model extends the image description model by BIBREF0 , which consists of an encoder-decoder architecture. The encoder is a Convolutional Neural Network (CNN) and the decoder is a Long Short-Term Memory (LSTM) network, as presented in Figure 2 . The image is passed through the encoder generating the image representation that is used by the decoder to know the content of the image and generate the description word by word. In the following, we describe how we extended this model for the visual storytelling task.", "id": 251, "question": "Is the position in the sequence part of the input?", "title": "Contextualize, Show and Tell: A Neural Visual Storyteller"}, {"answers": ["", ""], "context": "The model's first component is a Recurrent Neural Network (RNN), more precisely an LSTM that summarizes the sequence of images. At every timestep $t$ the network takes as input an image $I_i$ where $i\\in \\lbrace 1,2,3,4,5\\rbrace $ from the sequence. At time $t=5$ , the LSTM has encoded the 5 images and provides the sequence's context through its last hidden state denoted by $h_e^{(t)}$ . The representation of the images was obtained through Inception V3.", "id": 252, "question": "Do the decoder LSTMs all have the same weights?", "title": "Contextualize, Show and Tell: A Neural Visual Storyteller"}, {"answers": ["", ""], "context": "Knowledge graphs BIBREF0 enable structured access to world knowledge and form a key component of several applications like search engines, question answering systems and conversational assistants. Knowledge graphs are typically interpreted as comprising of discrete triples of the form (entityA, relationX, entityB) thus representing a relation (relationX) between entityA and entityB. However, one limitation of only a discrete representation of triples is that it does not easily enable one to infer similarities and potential relations among entities which may be missing in the knowledge graph. Consequently, one popular alternative is to learn dense continuous representations of entities and relations by embedding them in latent continuous vector spaces, while seeking to model the inherent structure of the knowledge graph. Most knowledge graph embedding methods can be classified into two major classes: one class which operates purely on triples like RESCAL BIBREF1 , TransE BIBREF2 , DistMult BIBREF3 , TransD BIBREF4 , ComplEx BIBREF5 , ConvE BIBREF6 and the second class which seeks to incorporate additional information (like multi-hops) BIBREF7 . Learning high-quality knowledge graph embeddings can be quite challenging given that (a) they need to effectively model the contextual usages of entities and relations (b) they would need to be useful for a variety of predictive tasks on knowledge graphs.", "id": 253, "question": "Is fine-tuning required to incorporate these embeddings into existing models?", "title": "DOLORES: Deep Contextualized Knowledge Graph Embeddings"}, {"answers": ["", ""], "context": "Extensive work exists on knowledge graph embeddings dating back to Nickel, Tresp, and Kriegel ( BIBREF1 ) who first proposed Rescal based on a matrix factorization approach. Bordes et al. 
( BIBREF2 ) advanced this line of work by proposing the first translational model, TransE, which seeks to relate the head and tail entity embeddings by modeling the relation as a translational vector. This culminated in a long series of new knowledge graph embeddings all based on the translational principle with various refinements BIBREF9 , BIBREF10 , BIBREF4 , BIBREF3 , BIBREF5 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 . Some recently proposed models like ManiFoldE BIBREF17 attempt to learn knowledge graph embeddings as a manifold while embeddings like HolE BIBREF1 derive inspiration from associative memories. Furthermore, with the success of neural models, models based on convolutional neural networks have been proposed like BIBREF6 , BIBREF18 to learn knowledge graph embeddings. Other models in this class of models include ConvKB BIBREF19 and KBGAN BIBREF20 . There has been some work on incorporating additional information like entity types, relation paths etc. to learn knowledge graph representations. Palumbo et al. ( BIBREF21 ) use node2vec to learn embeddings of entities and items in a knowledge graph. A notable class of methods called \u201cpath-ranking\u201d based models directly model paths between entities as features. Examples include Path Ranking Algorithm (PRA) BIBREF22 , PTransE BIBREF10 and models based on recurrent neural networks BIBREF23 , BIBREF24 . Besides, Das et al. ( BIBREF25 ) propose a reinforcement learning method that addresses the practical task of answering questions where the relation is known, but only one entity is given. Hartford et al. ( BIBREF26 ) model interactions across two or more sets of objects using a parameter-sharing scheme. While most of the above models, except for the recurrent-neural-network-based ones, are shallow, our model Dolores differs from all of these works and especially that of Palumbo et al. ( BIBREF21 ) in that we learn deep contextualized knowledge graph representations of entities and relations using a deep neural sequential model. The work that is closest to our work is that of Das et al. ( BIBREF24 ) who directly use an RNN-based architecture to model paths to predict missing links. We distinguish our work from this in the following key ways: (a) First, unlike Das et al. ( BIBREF24 ), our focus is not on path reasoning but on learning rich knowledge graph embeddings useful for a variety of predictive tasks. Moreover, while Das et al. ( BIBREF24 ) need to use paths generated from PRA that typically correlate with relations, our method has no such restriction and only uses paths generated by generic random walks, greatly enhancing the scalability of our method. In fact, we incorporate Dolores embeddings to improve the performance of the model proposed by Das et al. ( BIBREF24 ). (b) Second, and most importantly, we learn knowledge graph embeddings at multiple layers, each potentially capturing different levels of abstraction. (c) Finally, while we are inspired by the work of Peters et al. ( BIBREF8 ) in learning deep word representations, we build on their ideas by drawing connections between knowledge graphs and language modeling BIBREF8 . 
In particular, we propose methods to use random walks over knowledge graphs in conjunction with the machinery of deep neural language modeling to learn powerful deep contextualized knowledge graph embeddings that improve the state of the art on various knowledge graph tasks.", "id": 254, "question": "How are meaningful chains in the graph selected?", "title": "DOLORES: Deep Contextualized Knowledge Graph Embeddings"}, {"answers": ["", ""], "context": " This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.", "id": 255, "question": "Do the authors also analyze transformer-based architectures?", "title": "Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering"}, {"answers": ["", ""], "context": "Earlier studies on stock market prediction are based on the historical stock prices. Later studies have debunked the approach of predicting stock market movements using historical prices. Stock market prices are largely fluctuating. The efficient market hypothesis (EMH) states that financial market movements depend on news, current events and product releases and all these factors will have a significant impact on a company's stock value BIBREF0 . Because of the lying unpredictability in news and current events, stock market prices follow a random walk pattern and cannot be predicted with more than 50% accuracy BIBREF1 .", "id": 256, "question": "Do they remove seasonality from the time series?", "title": "Sentiment Analysis of Twitter Data for Predicting Stock Market Movements"}, {"answers": ["", ""], "context": "The most well-known publication in this area is by Bollen BIBREF10 . They investigated whether the collective mood states of public (Happy, calm, Anxiety) derived from twitter feeds are correlated to the value of the Dow Jones Industrial Index. They used a Fuzzy neural network for their prediction. Their results show that public mood states in twitter are strongly correlated with Dow Jones Industrial Index. Chen and Lazer BIBREF11 derived investment strategies by observing and classifying the twitter feeds. Bing et al. BIBREF12 studied the tweets and concluded the predictability of stock prices based on the type of industry like Finance, IT etc. Zhang BIBREF13 found out a high negative correlation between mood states like hope, fear and worry in tweets with the Dow Jones Average Index. Recently, Brian et al. BIBREF14 investigated the correlation of sentiments of public with stock increase and decreases using Pearson correlation coefficient for stocks. In this paper, we took a novel approach of predicting rise and fall in stock prices based on the sentiments extracted from twitter to find the correlation. The core contribution of our work is the development of a sentiment analyzer which works better than the one in Brian's work and a novel approach to find the correlation. Sentiment analyzer is used to classify the sentiments in tweets extracted.The human annotated dataset in our work is also exhaustive. We have shown that a strong correlation exists between twitter sentiments and the next day stock prices in the results section. 
We did so by considering the tweets and stock opening and closing prices of Microsoft over a year.", "id": 257, "question": "What is the dimension of the embeddings?", "title": "Sentiment Analysis of Twitter Data for Predicting Stock Market Movements"}, {"answers": ["", "Collected tweets and opening and closing stock prices of Microsoft."], "context": "A total of 250,000 tweets over the period from August 31st, 2015 to August 25th, 2016 on Microsoft are extracted from the Twitter API BIBREF15 . Twitter4J is a Java application which helps us to extract tweets from Twitter. The tweets were collected using the Twitter API and filtered using keywords like $MSFT, #Microsoft, #Windows etc. Not only the opinion of the public about the company's stock but also the opinions about products and services offered by the company would have a significant impact and are worth studying. Based on this principle, the keywords used for filtering are devised with extensive care and tweets are extracted in such a way that they represent the exact emotions of the public about Microsoft over a period of time. The news on Twitter about Microsoft and tweets regarding the product releases were also included. Stock opening and closing prices of Microsoft from August 31st, 2015 to August 25th, 2016 are obtained from Yahoo! Finance BIBREF16 .", "id": 258, "question": "What dataset is used to train the model?", "title": "Sentiment Analysis of Twitter Data for Predicting Stock Market Movements"}, {"answers": ["", ""], "context": "Other than encoder-only pretrained transformer architectures BIBREF2, BIBREF3, BIBREF4, encoder\u2013decoder style pretrained transformers BIBREF0, BIBREF5 have been proven to be effective in text generation tasks as well as comprehension tasks. This paper describes our submission to the commonsense reasoning task leaderboard of the AI2 WinoGrande Challenge BIBREF1, which uses the text-to-text transfer transformer (T5); our approach currently represents the state of the art.", "id": 259, "question": "What is the previous state of the art?", "title": "TTTTTackling WinoGrande Schemas"}, {"answers": ["", ""], "context": "The vast amounts of data collected by healthcare providers in conjunction with modern data analytics techniques present a unique opportunity to improve health service provision and the quality and safety of medical care for patient benefit BIBREF0 . Much of the recent research in this area has been on personalised medicine and its aim to deliver better diagnostics aided by the integration of diverse datasets providing complementary information. Another large source of healthcare data is organisational. In the United Kingdom, the National Health Service (NHS) has a long history of documenting extensively the different aspects of healthcare provision.
The NHS is currently in the process of increasing the availability of several databases, properly anonymised, with the aim of leveraging advanced analytics to identify areas of improvement in NHS services.", "id": 260, "question": "Which text embedding methodologies are used?", "title": "From Free Text to Clusters of Content in Health Records: An Unsupervised Graph Partitioning Approach"}, {"answers": ["Females are given higher sentiment intensity when predicting anger, joy or valence, but males are given higher sentiment intensity when predicting fear.\nAfrican American names are given higher score on the tasks of anger, fear, and sadness intensity prediction, but European American names are given higher scores on joy and valence task.", ""], "context": "[0]leftmargin=* [0]leftmargin=*", "id": 261, "question": "Which race and gender are given higher sentiment intensity predictions?", "title": "Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems"}, {"answers": ["Sentences involving at least one race- or gender-associated word, sentence have to be short and grammatically simple, sentence have to include expressions of sentiment and emotion.", ""], "context": "Recent studies have demonstrated that the systems trained on the human-written texts learn human-like biases BIBREF1 , BIBREF6 . In general, any predictive model built on historical data may inadvertently inherit human biases based on gender, ethnicity, race, or religion BIBREF7 , BIBREF8 . Discrimination-aware data mining focuses on measuring discrimination in data as well as on evaluating performance of discrimination-aware predictive models BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 .", "id": 262, "question": "What criteria are used to select the 8,640 English sentences?", "title": "Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems"}, {"answers": ["", "LF-MMI Attention\nSeq2Seq \nRNN-T \nChar E2E LF-MMI \nPhone E2E LF-MMI \nCTC + Gram-CTC"], "context": "Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system.", "id": 263, "question": "what were the baselines?", "title": "Jasper: An End-to-End Convolutional Neural Acoustic Model"}, {"answers": ["In case of read speech datasets, their best model got the highest nov93 score of 16.1 and the highest nov92 score of 13.3.\nIn case of Conversational Speech, their best model got the highest SWB of 8.3 and the highest CHM of 19.3. ", "On WSJ datasets author's best approach achieves 9.3 and 6.9 WER compared to best results of 7.5 and 4.1 on nov93 and nov92 subsets.\nOn Hub5'00 datasets author's best approach achieves WER of 7.8 and 16.2 compared to best result of 7.3 and 14.2 on Switchboard (SWB) and Callhome (CHM) subsets."], "context": "Jasper is a family of end-to-end ASR models that replace acoustic and pronunciation models with a convolutional neural network. Jasper uses mel-filterbank features calculated from 20ms windows with a 10ms overlap, and outputs a probability distribution over characters per frame. 
Jasper has a block architecture: a Jasper INLINEFORM0 x INLINEFORM1 model has INLINEFORM2 blocks, each with INLINEFORM3 sub-blocks. Each sub-block applies the following operations: a 1D-convolution, batch norm, ReLU, and dropout. All sub-blocks in a block have the same number of output channels.", "id": 264, "question": "what competitive results did they obtain?", "title": "Jasper: An End-to-End Convolutional Neural Acoustic Model"}, {"answers": ["by 2.3-6.8 points in f1 score for intent recognition and 0.8-3.5 for slot filling", "F1 score increased from 0.89 to 0.92"], "context": "Understanding passenger intents from spoken interactions and car's vision (both inside and outside the vehicle) are important building blocks towards developing contextual dialog systems for natural interactions in autonomous vehicles (AV). In this study, we continued exploring AMIE (Automated-vehicle Multimodal In-cabin Experience), the in-cabin agent responsible for handling certain multimodal passenger-vehicle interactions. When the passengers give instructions to AMIE, the agent should parse such commands properly considering available three modalities (language/text, audio, video) and trigger the appropriate functionality of the AV system. We had collected a multimodal in-cabin dataset with multi-turn dialogues between the passengers and AMIE using a Wizard-of-Oz scheme via realistic scavenger hunt game. In our previous explorations BIBREF0, BIBREF1, we experimented with various RNN-based models to detect utterance-level intents (set destination, change route, go faster, go slower, stop, park, pull over, drop off, open door, and others) along with intent keywords and relevant slots (location, position/direction, object, gesture/gaze, time-guidance, person) associated with the action to be performed in our AV scenarios. In this recent work, we propose to discuss the benefits of multimodal understanding of in-cabin utterances by incorporating verbal/language input (text and speech embeddings) together with the non-verbal/acoustic and visual input from inside and outside the vehicle (i.e., passenger gestures and gaze from in-cabin video stream, referred objects outside of the vehicle from the road view camera stream). Our experimental results outperformed text-only baselines and with multimodality, we achieved improved performances for utterance-level intent detection and slot filling.", "id": 265, "question": "By how much is performance improved with multimodality?", "title": "Towards Multimodal Understanding of Passenger-Vehicle Interactions in Autonomous Vehicles: Intent/Slot Recognition Utilizing Audio-Visual Data"}, {"answers": ["", ""], "context": "We explored leveraging multimodality for the NLU module in the SDS pipeline. As our AMIE in-cabin dataset has video and audio recordings, we investigated 3 modalities for the NLU: text, audio, and video. For text (language) modality, our previous work BIBREF1 presents the details of our best-performing Hierarchical & Joint Bi-LSTM models BIBREF3, BIBREF4, BIBREF5, BIBREF6 (H-Joint-2, see SECREF5) and the results for utterance-level intent recognition and word-level slot filling via transcribed and recognized (ASR output) textual data, using word embeddings (GloVe BIBREF7) as features. 
This study explores the following multimodal features:", "id": 266, "question": "Is collected multimodal in cabin dataset public?", "title": "Towards Multimodal Understanding of Passenger-Vehicle Interactions in Autonomous Vehicles: Intent/Slot Recognition Utilizing Audio-Visual Data"}, {"answers": ["", ""], "context": "Informal speech is different from formal speech, especially in Vietnamese due to many conjunctive words in this language. Building an ASR model to handle such kind of speech is particularly difficult due to the lack of training data and also cost for data collection. There are two components of an ASR system that contribute the most to the accuracy of it, an acoustic model and a language model. While collecting data for acoustic model is time-consuming and costly, language model data is much easier to collect.", "id": 267, "question": "What is the performance reported for the best models in the VLSP 2018 and VLSP 2019 challenges?", "title": "VAIS ASR: Building a conversational speech recognition system using language model combination"}, {"answers": ["", ""], "context": "In this section, we describe our ASR system, which consists of 2 main components, an acoustic model which models the correlation between phonemes and speech signal; and a language model which guides the search algorithm throughout inference process.", "id": 268, "question": "Is the model tested against any baseline?", "title": "VAIS ASR: Building a conversational speech recognition system using language model combination"}, {"answers": ["", ""], "context": "We adopt a DNN-based acoustic model BIBREF0 with 11 hidden layers and the alignment used to train the model is derived from a HMM-GMM model trained with SAT criterion. In a conventional Gaussian Mixture Model - Hidden Markov Model (GMM-HMM) acoustic model, the state emission log-likelihood of the observation feature vector $o_t$ for certain tied state $s_j$ of HMMs at time $t$ is computed as", "id": 269, "question": "What is the language model combination technique used in the paper?", "title": "VAIS ASR: Building a conversational speech recognition system using language model combination"}, {"answers": ["", ""], "context": "Our language model training pipeline is described in Figure FIGREF6. First, we collect and clean large amount of text data from various sources including news, manual labeled conversation video. Then, the collected data is categorized into domains. This is an important step as the ASR performance is highly depends on the speech domain. After that, the text is fed into a data cleaning pipeline to clean bad tone marks, normalizing numbers and dates.", "id": 270, "question": "What are the deep learning architectures used in the task?", "title": "VAIS ASR: Building a conversational speech recognition system using language model combination"}, {"answers": ["", "The average score improved by 1.4 points over the previous best result."], "context": "The ability of semantic reasoning is essential for advanced natural language understanding (NLU) systems. Many NLU tasks that take sentence pairs as input, such as natural language inference (NLI) and machine reading comprehension (MRC), heavily rely on the ability of sophisticated semantic reasoning. For instance, the NLI task aims to determine whether the hypothesis sentence (e.g., a woman is sleeping) can be inferred from the premise sentence (e.g., a woman is talking on the phone). 
This requires the model to read and understand sentence pairs to make the specific semantic inference.", "id": 271, "question": "How much is performance improved on NLI?", "title": "Symmetric Regularization based BERT for Pair-wise Semantic Reasoning"}, {"answers": ["", ""], "context": "Many NLU tasks seek to model the relationship between two sentences. Semantic reasoning is performed on the sentence pair for the task-specific inference. Pair-wise semantic reasoning tasks have drawn a lot of attention from the NLP community as they largely require the comprehension ability of the learning systems. Recently, the significant improvement on these benchmarks comes from the pre-training models, e.g., BERT, StructBERT BIBREF3, ERNIE BIBREF4, BIBREF5, RoBERTa BIBREF6 and XLNet BIBREF7. These models learn from unsupervised/self-supervised objectives and perform excellently in the downstream tasks. Among these models, BERT adopts NSP as one of the objectives in the pre-training and shows that the NSP task has a positive effect on the NLI and MRC tasks. Although the primary study of XLNet and RoBERTa suggests that NSP is ineffective when the model is trained with a large sequence length of 512, the effect of NSP on the NLI problems should still be emphasized. The inefficiency of NSP is likely because the expected context length will be halved for Masked LM when taking a sentence pair as the input. The models derived from BERT, e.g., StructBERT and ERNIE 1.0/2.0, aim to incorporating more knowledge by elaborating pre-training objectives. This work aims to enhance the NSP task and verifies whether document-level information is helpful for the pre-training. To probe whether our method achieves a better regularization ability, our approach is also evaluated on the HANS BIBREF0 dataset, which contains hard data samples constructed by three heuristics. Previous advanced models such as BERT fail on the HANS dataset, and the test accuracy can barely exceed 0% in the subset of test examples.", "id": 272, "question": "Do they train their model starting from a checkpoint?", "title": "Symmetric Regularization based BERT for Pair-wise Semantic Reasoning"}, {"answers": ["", ""], "context": "In recent years, many unsupervised pre-training methods have been proposed in the NLP fields to extract knowledge among sentences DBLP:conf/nips/KirosZSZUTF15,DBLP:conf/emnlp/ConneauKSBB17,DBLP:conf/iclr/LogeswaranL18,DBLP:journals/corr/abs-1903-09424. The prediction of surrounding sentences endows the model with the ability to model the sentence-level coherence. Skip-Thought BIBREF8 consists of an encoder and two decoders. When a sentence is given and encoded into a vector by the encoder, the decoders are trained to predict the next sentence and the previous sentence. The goal is to obtain a better sentence representation that is useful for reconstructing the surrounding context. Considering that the estimation of the likelihood of sequences is computationally expensive and time-consuming, the Quick-Thought method BIBREF9 simplifies this in a manner similar to sampled softmax BIBREF10, which classifies the input sentences between surrounding sentences and the other. Note that Quick-Thought does not distinguish between the previous and next sentence as it is functionally rotation invariant. However, BERT is order-dependent, and the discrimination can provide more supervision signal for semantic learning. InferSent BIBREF11 instead pre-trains the model in a manner of supervised learning. 
It uses a large-scale NLI dataset as the pre-training task to learn the sentence representation. In our work, we focus on designing a more effective document-level objective, extended from the NSP task. The proposed method will be described in the following section and validated by providing extensive experimental results in the experiment part.", "id": 273, "question": "What BERT model do they test?", "title": "Symmetric Regularization based BERT for Pair-wise Semantic Reasoning"}, {"answers": ["", ""], "context": "The use of active learning has received a lot of interest for reducing annotation costs for text classification BIBREF0 , BIBREF1 , BIBREF2 .", "id": 274, "question": "What downstream tasks are evaluated?", "title": "Impact of Batch Size on Stopping Active Learning for Text Classification"}, {"answers": ["A process of training a model when selected unlabeled samples are annotated on each iteration.", "Active learning is a process that selectively determines which unlabeled samples for a machine learning model should be annotated."], "context": "We considered different batch sizes in our experiments, based on percentages of the entire set of training data. The results for batch sizes corresponding to 1%, 5%, and 10% of the training data for the 20Newsgroups dataset are summarized in Table~ SECREF4 .", "id": 275, "question": "What is active learning?", "title": "Impact of Batch Size on Stopping Active Learning for Text Classification"}, {"answers": ["", "M2M Transformer"], "context": "Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 has enabled end-to-end training of a translation system without needing to deal with word alignments, translation rules, and complicated decoding algorithms, which are the characteristics of phrase-based statistical machine translation (PBSMT) BIBREF3 . Although NMT can be significantly better than PBSMT in resource-rich scenarios, PBSMT performs better in low-resource scenarios BIBREF4 . Only by exploiting cross-lingual transfer learning techniques BIBREF5 , BIBREF6 , BIBREF7 , can the NMT performance approach PBSMT performance in low-resource scenarios.", "id": 276, "question": "what was the baseline?", "title": "Exploiting Out-of-Domain Parallel Data through Multilingual Transfer Learning for Low-Resource Neural Machine Translation"}, {"answers": ["Segmentation quality is evaluated by calculating the precision, recall, and F-score of the automatic segmentations in comparison to the segmentations made by expert annotators from the ANNODIS subcorpus.", ""], "context": "Rhetorical Structure Theory (RST) BIBREF0 is a technique of Natural Language Processing (NLP), in which a document can be structured hierarchically according to its discourse. The generated hierarchy, a tree, provides information associated with the boundaries of the discourse segments and related to their importance and dependencies. The figure FIGREF1 shows an example of such a rethorical tree. In the rethorical parsing process, the text has been divided into five units. In the figure FIGREF1, the arrow that leaves the unit (2) towards the unit (1) symbolizes that the unit (2) is the satellite of the unit (1), which is the core in a \u201cConcession\u201d relationship. 
In turn, the units (1) and (2) comprise the nucleus of three \u201cDemonstration\u201d relationships.", "id": 277, "question": "How is segmentation quality evaluated?", "title": "Automatic Discourse Segmentation: an evaluation in French"}, {"answers": ["Human evaluators were asked to evaluate on a scale from 1 to 5 the validity of the lexicon annotations made by the experts and crowd contributors.", ""], "context": "Sentiment analysis aims to uncover the emotion conveyed through information. In online social networks, sentiment analysis is mainly performed for political and marketing purposes, product acceptance and feedback systems. This involves the analysis of various social media information types, such as text BIBREF0 , emoticons and hashtags, or multimedia BIBREF1 . However, to perform sentiment analysis, information has to be labelled with a sentiment. This relationship is defined in a lexicon.", "id": 278, "question": "How do they compare lexicons?", "title": "Crowdsourcing for Beyond Polarity Sentiment Analysis A Pure Emotion Lexicon"}, {"answers": ["", ""], "context": "Deep learning systems have shown a lot of promise for extractive Question Answering (QA), with performance comparable to humans when large scale data is available. However, practitioners looking to build QA systems for specific applications may not have the resources to collect tens of thousands of questions on corpora of their choice. At the same time, state-of-the-art machine reading systems do not lend well to low-resource QA settings where the number of labeled question-answer pairs are limited (c.f. Table 2 ). Semi-supervised QA methods like BIBREF0 aim to improve this performance by leveraging unlabeled data which is easier to collect.", "id": 279, "question": "Is it possible to convert a cloze-style questions to a naturally-looking questions?", "title": "Simple and Effective Semi-Supervised Question Answering"}, {"answers": ["By 14 times.", "up to 1.95 times larger"], "context": "Word embeddings are representations of words in numerical form, as vectors of typically several hundred dimensions. The vectors are used as an input to machine learning models; for complex language processing tasks these are typically deep neural networks. The embedding vectors are obtained from specialized learning tasks, based on neural networks, e.g., word2vec BIBREF0, GloVe BIBREF1, FastText BIBREF2, ELMo BIBREF3, and BERT BIBREF4. For training, the embeddings algorithms use large monolingual corpora that encode important information about word meaning as distances between vectors. In order to enable downstream machine learning on text understanding tasks, the embeddings shall preserve semantic relations between words, and this is true even across languages.", "id": 280, "question": "How larger are the training sets of these versions of ELMo compared to the previous ones?", "title": "High Quality ELMo Embeddings for Seven Less-Resourced Languages"}, {"answers": ["5 percent points.", "0.05 F1"], "context": "Typical word embeddings models or representations, such as word2vec BIBREF0, GloVe BIBREF1, or FastText BIBREF2, are fast to train and have been pre-trained for a number of different languages. They do not capture the context, though, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. 
ELMo (Embeddings from Language Models) embedding BIBREF3 is one of the state-of-the-art pretrained transfer learning models, that remedies the problem and introduces a contextual component.", "id": 281, "question": "What is the improvement in performance for Estonian in the NER task?", "title": "High Quality ELMo Embeddings for Seven Less-Resourced Languages"}, {"answers": ["CNN-DNN-BLSTM-HMM", ""], "context": "Recent work on convolutional neural network architectures showed that they are competitive with recurrent architectures even on tasks where modeling long-range dependencies is critical, such as language modeling BIBREF0 , machine translation BIBREF1 , BIBREF2 and speech synthesis BIBREF3 . In end-to-end speech recognition however, recurrent architectures are still prevalent for acoustic and/or language modeling BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 .", "id": 282, "question": "what is the state of the art on WSJ?", "title": "Fully Convolutional Speech Recognition"}, {"answers": ["", ""], "context": "Recently, people have started looking at online forums either as a primary or secondary source of counseling services BIBREF0. BIBREF1 reported that over the first five years of operation (2011-2016), ReachOut.com \u2013 Ireland's online youth mental health service \u2013 62% of young people would visit a website for support when going through a tough time. With the expansion of the Internet, there has been a substantial growth in the number of users looking for psychological support online.", "id": 283, "question": "How did they obtain the OSG dataset?", "title": "Affective Behaviour Analysis of On-line User Interactions: Are On-line Support Groups more Therapeutic than Twitter?"}, {"answers": ["", ""], "context": "On-line support groups have been analyzed for various factors before. For instance, BIBREF11 analysed stress reduction in on-line support group chat-rooms, and the effects of on-line social interactions. Such studies mostly relied on questionnaires and were based on a small number of users. Nevertheless, in BIBREF11, the author showed that social support facilitates coping with distress, improves mood and expedites recovery from it. These findings highlight that, overall, on-line discussion boards appear to be therapeutic and constructive for individuals suffering alcohol-abuse.", "id": 284, "question": "How large is the Twitter dataset?", "title": "Affective Behaviour Analysis of On-line User Interactions: Are On-line Support Groups more Therapeutic than Twitter?"}, {"answers": ["", "609"], "context": "", "id": 285, "question": "what is the size of the augmented dataset?", "title": "The Effect of Heterogeneous Data for Alzheimer's Disease Detection from Speech"}, {"answers": ["", ""], "context": "Today, Internet is one of the widest available media worldwide. It has essentially become a huge hit of data that has the potential to serve many information centric applications in our life. Recommendation system takes an essential part of many internet services and online applications, including applications like social-networking and recommendation of products (films, music, articles,..i.e.). Recommendation techniques have been used by the most known companies such as Amazon, Netflix and eBay to recommend releated items or products by estimating the probable preferences of customers. These techniques are profitable to both service provider and user. 
According to previous works, two popular approaches for building recommendation systems can be categorized as content-based (CB) and collaborative filtering (CF).", "id": 286, "question": "How they utilize LDA and Gibbs sampling to evaluate ISWC and WWW publications?", "title": "Natural Language Processing via LDA Topic Model in Recommendation Systems"}, {"answers": ["", ""], "context": "Antonymy and synonymy represent lexical semantic relations that are central to the organization of the mental lexicon BIBREF0 . While antonymy is defined as the oppositeness between words, synonymy refers to words that are similar in meaning BIBREF1 , BIBREF2 . From a computational point of view, distinguishing between antonymy and synonymy is important for NLP applications such as Machine Translation and Textual Entailment, which go beyond a general notion of semantic relatedness and require identifying specific semantic relations. However, due to interchangeable substitution, antonyms and synonyms often occur in similar contexts, which makes it challenging to automatically distinguish between them.", "id": 287, "question": "What dataset do they use to evaluate their method?", "title": "Distinguishing Antonyms and Synonyms in a Pattern-based Neural Network"}, {"answers": ["Linked entities may be ambiguous or too common", ""], "context": "Text summarization is a task to generate a shorter and more concise version of a text while preserving the meaning of the original text. The task can be divided into two subtasks based on the approach: extractive and abstractive summarization. Extractive summarization is a task to create summaries by pulling out snippets of text from the original text and combining them to form a summary. Abstractive summarization asks to generate summaries from scratch without the restriction to use the available words from the original text. Due to the limitations of extractive summarization on incoherent texts and unnatural methodology BIBREF0 , the research trend has shifted towards abstractive summarization.", "id": 288, "question": "Why are current ELS's not sufficiently effective?", "title": "Entity Commonsense Representation for Neural Abstractive Summarization"}, {"answers": [""], "context": "Named Entity Recognition (NER) is a foremost NLP task to label each atomic element of a sentence into specific categories like \"PERSON\", \"LOCATION\", \"ORGANIZATION\" and others BIBREF0. There has been extensive NER research on the English, German, Dutch and Spanish languages BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low-resource South Asian languages like Hindi BIBREF6, Indonesian BIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu) BIBREF8. However, there has been no study on developing neural NER for the Nepali language.
In this paper, we propose a neural based Nepali NER using latest state-of-the-art architecture based on grapheme-level which doesn't require any hand-crafted features and no data pre-processing.", "id": 289, "question": "What is the best model?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["3606", ""], "context": "There has been a handful of research on Nepali NER task based on approaches like Support Vector Machine and gazetteer listBIBREF11 and Hidden Markov Model and gazetteer listBIBREF9,BIBREF10.", "id": 290, "question": "How many sentences does the dataset contain?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["", ""], "context": "In this section, we describe our approach in building our model. This model is partly inspired from multiple models BIBREF20,BIBREF1, andBIBREF2", "id": 291, "question": "Do the authors train a Naive Bayes classifier on their dataset?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["", "Bam et al. SVM, Ma and Hovy w/glove, Lample et al. w/fastText, Lample et al. w/word2vec"], "context": "We used Bi-directional LSTM to capture the word representation in forward as well as reverse direction of a sentence. Generally, LSTMs take inputs from left (past) of the sentence and computes the hidden state. However, it is proven beneficialBIBREF23 to use bi-directional LSTM, where, hidden states are computed based from right (future) of sentence and both of these hidden states are concatenated to produce the final output as $h_t$=[$\\overrightarrow{h_t}$;$\\overleftarrow{h_t}$], where $\\overrightarrow{h_t}$, $\\overleftarrow{h_t}$ = hidden state computed in forward and backward direction respectively.", "id": 292, "question": "What is the baseline?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["", ""], "context": "We have used Word2Vec BIBREF16, GloVe BIBREF17 and FastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web-texts and news papers. This corpus was mixed with the texts from the dataset before training CBOW and skip-gram version of word2vec using gensim libraryBIBREF24. This trained model consists of vectors for 72782 unique words.", "id": 293, "question": "Which machine learning models do they explore?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["Dataset contains 3606 total sentences and 79087 total entities.", "ILPRL contains 548 sentences, OurNepali contains 3606 sentences"], "context": "BIBREF20 and BIBREF2 successfully presented that the character-level embeddings, extracted using CNN, when combined with word embeddings enhances the NER model performance significantly, as it is able to capture morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where inputs to CNN are graphemes. Character-level CNN is also built in similar fashion, except the inputs are characters. Grapheme or Character -level embeddings are randomly initialized from [0,1] with real values with uniform distribution of dimension 30.", "id": 294, "question": "What is the size of the dataset?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["", ""], "context": "Grapheme is atomic meaningful unit in writing system of any languages. 
Since the Nepali language is highly morphologically inflectional, we compared grapheme-level representation with character-level representation to evaluate its effect. For example, in character-level embedding, each character of the word नेपाल (Nepal), i.e., न + े + प + ा + ल, has its own embedding. However, at the grapheme level, नेपाल is clustered into the graphemes ने + पा + ल. Here, each grapheme has its own embedding. This grapheme-level embedding yields scores on par with character-level embedding in highly inflectional languages like Nepali, because graphemes also capture syntactic information similarly to characters. We created grapheme clusters using the uniseg package, which performs Unicode text segmentation.", "id": 295, "question": "What is the source of their dataset?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["", ""], "context": "We created one-hot encoded vectors of POS tags and then concatenated them with pre-trained word embeddings before passing them to the BiLSTM network. A sample of the data is shown in figure FIGREF13.", "id": 296, "question": "Do they try to use byte-pair encoding representations?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["OurNepali contains 3 different types of entities, ILPRL contains 4 different types of entities", ""], "context": "Since there was no publicly available standard Nepali NER dataset and we did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains sentences collected from daily newspapers of the years 2015-2016. This dataset has three major classes: Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset; for example, all punctuation and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO format BIBREF25.", "id": 297, "question": "How many different types of entities exist in the dataset?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["3606 sentences", "Dataset contains 3606 total sentences and 79087 total entities."], "context": "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows the standard CoNLL-2003 IOB format BIBREF25 with POS tags. This dataset was prepared by ILPRL Lab, KU and KEIV Technologies. A few corrections, such as correcting the NER tags, had to be made to the dataset. The statistics of both datasets are presented in table TABREF23.", "id": 298, "question": "How big is the new Nepali NER dataset?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["On OurNepali test dataset Grapheme-level representation model achieves average 0.16% improvement, on ILPRL test dataset it achieves maximum 1.62% improvement", ""], "context": "In this section, we present the details of training our neural network. The neural network architectures are implemented using the PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiments on BiLSTM, BiLSTM-CNN, BiLSTM-CRF and BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. The training and evaluation were done at the sentence level.
The RNN variants are initialized randomly from $(-\\sqrt{k},\\sqrt{k})$ where $k=\\frac{1}{hidden\\_size}$.", "id": 299, "question": "What is the performance improvement of the grapheme-level representation model over the character-level model?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["", ""], "context": "Currently, for our experiments we trained our model on IO (Inside, Outside) format for both the dataset, hence the dataset does not contain any B-type annotation unlike in BIO (Beginning, Inside, Outside) scheme.", "id": 300, "question": "Which models are used to solve NER for Nepali?", "title": "Named Entity Recognition for Nepali Language"}, {"answers": ["", ""], "context": "In the era of social media and networking platforms, Twitter has been doomed for abuse and harassment toward users specifically women. In fact, online harassment becomes very common in Twitter and there have been a lot of critics that Twitter has become the platform for many racists, misogynists and hate groups which can express themselves openly. Online harassment is usually in the form of verbal or graphical formats and is considered harassment, because it is neither invited nor has the consent of the receipt. Monitoring the contents including sexism and sexual harassment in traditional media is easier than monitoring on the online social media platforms like Twitter. The main reason is because of the large amount of user generated content in these media. So, the research about the automated detection of content containing sexual harassment is an important issue and could be the basis for removing that content or flagging it for human evaluation. The basic goal of this automatic classification is that it will significantly improve the process of detecting these types of hate speech on social media by reducing the time and effort required by human beings.", "id": 301, "question": "What language(s) is/are represented in the dataset?", "title": "Attention-based method for categorizing different types of online harassment language"}, {"answers": ["", ""], "context": "Waseem et al. BIBREF1 were the first who collected hateful tweets and categorized them into being sexist, racist or neither. However, they did not provide specific definitions for each category. Jha and Mamidi BIBREF0 focused on just sexist tweets and proposed two categories of hostile and benevolent sexism. However, these categories were general as they ignored other types of sexism happening in social media. Sharifirad S. and Matwin S. BIBREF2 proposed complimentary categories of sexist language inspired from social science work. They categorized the sexist tweets into the categories of indirect harassment, information threat, sexual harassment and physical harassment. In the next year the same authors proposed BIBREF3 a more comprehensive categorization of online harassment in social media e.g. twitter into the following categories, indirect harassment, information threat, sexual harassment, physical harassment and not sexist.", "id": 302, "question": "What baseline model is used?", "title": "Attention-based method for categorizing different types of online harassment language"}, {"answers": ["the model with multi-attention mechanism and a projected layer", ""], "context": "The dataset from Twitter that we are using in our work, consists of a train set, a validation set and a test set. It was published for the \"First workshop on categorizing different types of online harassment languages in social media\". 
The whole dataset is divided into two categories, which are harassment and non-harassment tweets. Moreover, considering the type of the harassment, the tweets are divided into three sub-categories which are indirect harassment, sexual and physical harassment. We can see in Table TABREF1 the class distribution of our dataset. One important issue here is that the categories of indirect and physical harassment seem to be more rare in the train set than in the validation and test sets. To tackle this issue, as we describe in the next section, we are performing data augmentation techniques. However, the dataset is imbalanced and this has a significant impact in our results.", "id": 303, "question": "Which variation provides the best results on this dataset?", "title": "Attention-based method for categorizing different types of online harassment language"}, {"answers": ["classic RNN model, avgRNN model, attentionRNN model and multiattention RNN model with and without a projected layer", ""], "context": "As described before one crucial issue that we are trying to tackle in this work is that the given dataset is imbalanced. Particularly, there are only a few instances from indirect and physical harassment categories respectively in the train set, while there are much more in the validation and test sets for these categories. To tackle this issue we applying a back-translation method BIBREF13, where we translate indirect and physical harassment tweets of the train set from english to german, french and greek. After that, we translate them back to english in order to achieve data augmentation. These \"noisy\" data that have been translated back, increase the number of indirect and physical harassment tweets and boost significantly the performance of our models.", "id": 304, "question": "What are the different variations of the attention-based approach which are examined?", "title": "Attention-based method for categorizing different types of online harassment language"}, {"answers": ["Twitter dataset provided by the organizers", "The dataset from the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference."], "context": "Before training our models we are processing the given tweets using a tweet pre-processor. The scope here is the cleaning and tokenization of the dataset.", "id": 305, "question": "What dataset is used for this work?", "title": "Attention-based method for categorizing different types of online harassment language"}, {"answers": ["", ""], "context": "We are presenting an attention-based approach for the problem of the harassment detection in tweets. In this section, we describe the basic approach of our work. We are using RNN models because of their ability to deal with sequence information. The RNN model is a chain of GRU cells BIBREF15 that transforms the tokens $w_{1}, w_{2},..., w_{k}$ of each tweet to the hidden states $h_{1}, h_{2},..., h_{k}$, followed by an LR Layer that uses $h_{k}$ to classify the tweet as harassment or non-harassment (similarly for the other categories). 
Given the vocabulary V and a matrix E $\\in $ $R^{d \\times \\vert V \\vert }$ containing d-dimensional word embeddings, an initial $h_{0}$ and a tweet $w = \\langle w_{1}, w_{2},..., w_{k} \\rangle $, the RNN computes $h_{1}, h_{2},..., h_{k}$, with $h_{t} \\in R^{m}$, as follows:", "id": 306, "question": "What types of online harassment are studied?", "title": "Attention-based method for categorizing different types of online harassment language"}, {"answers": [""], "context": "The Embedding Layer is initialized using pre-trained word embeddings of dimension 200 from Twitter data that have been described in a previous sub-section. After the Embedding Layer, we apply a Spatial Dropout Layer, which drops a certain percentage of dimensions from each word vector in the training sample. The role of Dropout is to improve generalization performance by preventing activations from becoming strongly correlated BIBREF18. Spatial Dropout, which has been proposed in BIBREF19, is an alternative way to use dropout with convolutional neural networks as it is able to drop out entire feature maps from the convolutional layer which are then not used during pooling. After that, the word embeddings pass through a one-layer MLP, which has tanh as activation function and 128 hidden units, in order to project them into the vector space of our problem, considering that they have been pre-trained on text with a different subject. In the next step the embeddings are fed into a unidirectional GRU having 1 stacked layer and size 128. We prefer GRU over LSTM, because it is computationally more efficient. Also, the basic advantage of LSTM, which is the ability to keep large text documents in memory, does not hold here, because tweets are not supposed to be too large text documents. The output states of the GRU pass through four self-attentions like the one described above BIBREF9, because we are using one attention per category (see Fig. FIGREF7). Finally, a one-layer MLP having 128 nodes and ReLU as activation function computes the final score for each category. At this final stage we avoided using a softmax function to decide the harassment type given that the tweet is a harassment, because otherwise we would have had to train our models taking into account only the harassment tweets, and this might have been a problem as the dataset is not large enough.", "id": 307, "question": "What was the baseline?", "title": "Attention-based method for categorizing different types of online harassment language"}, {"answers": ["The dataset from the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. ", "Twitter dataset provided by organizers containing harassment and non-harassment tweets"], "context": "In this subsection we give the details of the training process of our models. Moreover, we describe the different models that we compare in our experiments.", "id": 308, "question": "What were the datasets used in this paper?", "title": "Attention-based method for categorizing different types of online harassment language"}, {"answers": ["", ""], "context": "A large portion of the car-buying experience in the United States involves interactions at a car dealership BIBREF0, BIBREF1, BIBREF2.
Traditionally, a car dealer listens and understands the needs of the client and helps them find what car is right based on their needs.", "id": 309, "question": "Is car-speak language collection of abstract features that classifier is later trained on?", "title": "Understanding Car-Speak: Replacing Humans in Dealerships"}, {"answers": ["", ""], "context": "There has been some work done in the field of car-sales and dealer interactions. However, this is the first work that specifically focuses on the", "id": 310, "question": "Is order of \"words\" important in car speak language?", "title": "Understanding Car-Speak: Replacing Humans in Dealerships"}, {"answers": ["", ""], "context": "When a potential buyer begins to identify their next car-purchase they begin with identifying their needs. These needs often come in the form of an abstract situation, for instance, \u201cI need a car that goes really fast\u201d. This could mean that they need a car with a V8 engine type or a car that has 500 horsepower, but the buyer does not know that, all they know is that they need a \u201cfast\u201d car.", "id": 311, "question": "What are labels in car speak language dataset?", "title": "Understanding Car-Speak: Replacing Humans in Dealerships"}, {"answers": ["", ""], "context": "We aim to curate a data set of car-speak in order to train a model properly. However, there are a few challenges that present themselves: What is a good source of car-speak? How can we acquire the data? How can we be sure the data set is relevant?", "id": 312, "question": "How big is dataset of car-speak language?", "title": "Understanding Car-Speak: Replacing Humans in Dealerships"}, {"answers": ["", "Using F1 Micro measure, the KNN classifier perform 0.6762, the RF 0.6687, SVM 0.6712 and MLP 0.6778."], "context": "Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers. In order to accomplish our goal of translating and classifying car-speak we need to filter our data set so that we only have the most relevant terms. We then need to be able to weight each word in each review, so that we can determine the most relevant ideas in each document for the purpose of classification. Finally, we need to train various classification models and evaluate them.", "id": 313, "question": "What is the performance of classifiers?", "title": "Understanding Car-Speak: Replacing Humans in Dealerships"}, {"answers": ["KNN\nRF\nSVM\nMLP", ""], "context": "We would like to be able to represent each car with the most relevant car-speak terms. We can do this by filtering each review using the NLTK library BIBREF8, only retaining the most relevant words. First we token-ize each review and then keep only the nouns and adjectives from each review since they are the most salient parts of speech BIBREF9. This leaves us with $10,867$ words across all reviews. Figure FIGREF6 shows the frequency of the top 20 words that remain.", "id": 314, "question": "What classifiers have been trained?", "title": "Understanding Car-Speak: Replacing Humans in Dealerships"}, {"answers": [""], "context": "So far we have compiled the most relevant terms in from the reviews. We now need to weight these terms for each review, so that we know the car-speak terms are most associated with a car. 
Using TF-IDF (Term Frequency-Inverse Document Frequency) has been used as a reliable metric for finding the relevant terms in a document BIBREF10.", "id": 315, "question": "How does car speak pertains to a car's physical attributes?", "title": "Understanding Car-Speak: Replacing Humans in Dealerships"}, {"answers": ["", ""], "context": " This work is licenced under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ Deep neural networks have been widely used in text classification and have achieved promising results BIBREF0 , BIBREF1 , BIBREF2 . Most focus on content information and use models such as convolutional neural networks (CNN) BIBREF3 or recursive neural networks BIBREF4 . However, for user-generated posts on social media like Facebook or Twitter, there is more information that should not be ignored. On social media platforms, a user can act either as the author of a post or as a reader who expresses his or her comments about the post.", "id": 316, "question": "What topic is covered in the Chinese Facebook data? ", "title": "UTCNN: a Deep Learning Model of Stance Classificationon on Social Media Text"}, {"answers": ["eight layers"], "context": "In this paper we aim to use text as well as other features to see how they complement each other in a deep learning model. In the stance classification domain, previous work has showed that text features are limited, suggesting that adding extra-linguistic constraints could improve performance BIBREF6 , BIBREF7 , BIBREF8 . For example, Hasan and Ng as well as Thomas et al. require that posts written by the same author have the same stance BIBREF9 , BIBREF10 . The addition of this constraint yields accuracy improvements of 1\u20137% for some models and datasets. Hasan and Ng later added user-interaction constraints and ideology constraints BIBREF7 : the former models the relationship among posts in a sequence of replies and the latter models inter-topic relationships, e.g., users who oppose abortion could be conservative and thus are likely to oppose gay rights.", "id": 317, "question": "How many layers does the UTCNN model have?", "title": "UTCNN: a Deep Learning Model of Stance Classificationon on Social Media Text"}, {"answers": ["", ""], "context": "In recent years neural network models have been applied to document sentiment classification BIBREF13 , BIBREF4 , BIBREF14 , BIBREF15 , BIBREF2 . Text features can be used in deep networks to capture text semantics or sentiment. For example, Dong et al. use an adaptive layer in a recursive neural network for target-dependent Twitter sentiment analysis, where targets are topics such as windows 7 or taylor swift BIBREF16 , BIBREF17 ; recursive neural tensor networks (RNTNs) utilize sentence parse trees to capture sentence-level sentiment for movie reviews BIBREF4 ; Le and Mikolov predict sentiment by using paragraph vectors to model each paragraph as a continuous representation BIBREF18 . They show that performance can thus be improved by more delicate text models.", "id": 318, "question": "What topics are included in the debate data?", "title": "UTCNN: a Deep Learning Model of Stance Classificationon on Social Media Text"}, {"answers": ["", ""], "context": "In this section, we first describe CNN-based document composition, which captures user- and topic-dependent document-level semantic representation from word representations. 
Then we show how to add comment information to construct the user-topic-comment neural network (UTCNN).", "id": 319, "question": "What is the size of the Chinese data?", "title": "UTCNN: a Deep Learning Model of Stance Classificationon on Social Media Text"}, {"answers": ["", ""], "context": "As shown in Figure FIGREF4 , we use a general CNN BIBREF3 and two semantic transformations for document composition . We are given a document with an engaged user INLINEFORM0 , a topic INLINEFORM1 , and its composite INLINEFORM2 words, each word INLINEFORM3 of which is associated with a word embedding INLINEFORM4 where INLINEFORM5 is the vector dimension. For each word embedding INLINEFORM6 , we apply two dot operations as shown in Equation EQREF6 : DISPLAYFORM0 ", "id": 320, "question": "Did they collected the two datasets?", "title": "UTCNN: a Deep Learning Model of Stance Classificationon on Social Media Text"}, {"answers": ["", "SVM with unigram, bigram, trigram features, with average word embedding, with average transformed word embeddings, CNN and RCNN, SVM, CNN, RCNN with comment information"], "context": "Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic.", "id": 321, "question": "What are the baselines?", "title": "UTCNN: a Deep Learning Model of Stance Classificationon on Social Media Text"}, {"answers": ["", "", "Semantic Textual Similarity, sentiment prediction, subjectivity prediction, phrase level opinion polarity classification, Stanford Sentiment Treebank, fine grained question-type classification."], "context": "In this publication, we present Sentence-BERT (SBERT), a modification of the BERT network using siamese and triplet networks that is able to derive semantically meaningful sentence embeddings. This enables BERT to be used for certain new tasks, which up-to-now were not applicable for BERT. 
These tasks include large-scale semantic similarity comparison, clustering, and information retrieval via semantic search.", "id": 322, "question": "What transfer learning tasks are evaluated?", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks"}, {"answers": ["", ""], "context": "We first introduce BERT; then, we discuss state-of-the-art sentence embedding methods.", "id": 323, "question": "What metrics are used for the STS tasks?", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks"}, {"answers": [""], "context": "SBERT adds a pooling operation to the output of BERT / RoBERTa to derive a fixed sized sentence embedding. We experiment with three pooling strategies: Using the output of the CLS-token, computing the mean of all output vectors (MEAN-strategy), and computing a max-over-time of the output vectors (MAX-strategy). The default configuration is MEAN.", "id": 324, "question": "How much time takes its training?", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks"}, {"answers": [""], "context": "We train SBERT on the combination of the SNLI BIBREF13 and the Multi-Genre NLI BIBREF14 dataset. The SNLI is a collection of 570,000 sentence pairs annotated with the labels contradiction, entailment, and neutral. MultiNLI contains 430,000 sentence pairs and covers a range of genres of spoken and written text. We fine-tune SBERT with a 3-way softmax-classifier objective function for one epoch. We used a batch-size of 16, Adam optimizer with learning rate $2\mathrm {e}{-5}$, and a linear learning rate warm-up over 10% of the training data. Our default pooling strategy is MEAN.", "id": 325, "question": "How many GPUs are used for the training of SBERT?", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks"}, {"answers": ["", ""], "context": "We evaluate the performance of SBERT for common Semantic Textual Similarity (STS) tasks. State-of-the-art methods often learn a (complex) regression function that maps sentence embeddings to a similarity score. However, these regression functions work pair-wise and due to the combinatorial explosion those are often not scalable if the collection of sentences reaches a certain size. Instead, we always use cosine-similarity to compare the similarity between two sentence embeddings. We ran our experiments also with negative Manhattan and negative Euclidean distances as similarity measures, but the results for all approaches remained roughly the same.", "id": 326, "question": "How are the siamese networks trained?", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks"}, {"answers": ["GloVe, BERT, Universal Sentence Encoder, TF-IDF, InferSent", "Avg. GloVe embeddings, Avg. fast-text embeddings, Avg. BERT embeddings, BERT CLS-vector, InferSent - GloVe and Universal Sentence Encoder."], "context": "We evaluate the performance of SBERT for STS without using any STS specific training data. We use the STS tasks 2012 - 2016 BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, the STS benchmark BIBREF10, and the SICK-Relatedness dataset BIBREF21. These datasets provide labels between 0 and 5 on the semantic relatedness of sentence pairs. We showed in BIBREF22 that Pearson correlation is badly suited for STS. Instead, we compute the Spearman's rank correlation between the cosine-similarity of the sentence embeddings and the gold labels. The setup for the other sentence embedding methods is equivalent; the similarity is computed by cosine-similarity. 
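As a minimal illustration of the evaluation protocol described in this record (a sketch, not the authors' code), the snippet below scores sentence pairs by the cosine similarity of their embeddings and reports Spearman's rank correlation against the gold relatedness labels; the random embeddings stand in for the output of SBERT or any of the compared methods.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine_similarities(emb_a, emb_b):
    """Row-wise cosine similarity between two (n, d) embedding matrices."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return (a * b).sum(axis=1)

def sts_spearman(emb_a, emb_b, gold_scores):
    """Spearman's rank correlation between cosine similarities and gold labels (0-5)."""
    sims = cosine_similarities(emb_a, emb_b)
    return spearmanr(sims, gold_scores).correlation

# Toy usage with random "embeddings" standing in for real sentence encoder outputs.
rng = np.random.default_rng(0)
emb_a, emb_b = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
gold = rng.uniform(0, 5, size=8)
print(sts_spearman(emb_a, emb_b, gold))
```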
The results are depicted in Table TABREF6.", "id": 327, "question": "What other sentence embeddings methods are evaluated?", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks"}, {"answers": [""], "context": "The distribution of textual content is typically very fast and catches user attention for only a short period of time BIBREF0 . For this reason, proper wording of the article title may play a significant role in determining the future popularity of the article. The reflection of this phenomenon is the proliferation of click-baits - short snippets of text whose main purpose is to encourage viewers to click on the link embedded in the snippet. Although detection of click-baits is a separate research topic BIBREF1 , in this paper we address a more general problem of predicting popularity of online content based solely on its title.", "id": 328, "question": "What is the average length of the title text?", "title": "Shallow reading with Deep Learning: Predicting popularity of online content using only its title"}, {"answers": ["", ""], "context": "The ever increasing popularity of the Internet as a virtual space to share content inspired research community to analyze different aspects of online information distribution. Various types of content were analyzed, ranging from textual data, such as Twitter posts BIBREF0 or Digg stories BIBREF2 to images BIBREF7 to videos BIBREF8 , BIBREF3 , BIBREF9 . Although several similarities were observed across content domains, e.g. log-normal distribution of data popularity BIBREF10 , in this work we focus only on textual content and, more precisely, on the popularity of news articles and its relation to the article's title.", "id": 329, "question": "Which pretrained word vectors did they use?", "title": "Shallow reading with Deep Learning: Predicting popularity of online content using only its title"}, {"answers": ["", ""], "context": "In this section we present the bidirectional LSTM model for popularity prediction. We start by formulating the problem and follow up with the description of word embeddings used in our approach. We then present the Long Short-Term Memory network that serves as a backbone for our bidirectional LSTM architecture. We conclude this section with our interpretation of hidden bidirectional states and describe how they can be employed for title introspection.", "id": 330, "question": "What evaluation metrics are used?", "title": "Shallow reading with Deep Learning: Predicting popularity of online content using only its title"}, {"answers": ["", "SVM with linear kernel using bag-of-words features"], "context": "We cast the problem of popularity prediction as a binary classification task. We assume our data points contain a string of characters representing article title and a popularity metric, such as number of comments or views. The input of our classification is the character string, while the output is the binary label corresponding to popular or unpopular class. To enable the comparison of the methods on datasets containing content published on different websites and with different audience sizes, we determine that a video is popular if its popularity metric exceeds the median value of the corresponding metric for other points in the set, otherwise - it is labeled as unpopular. 
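A minimal sketch of the labeling rule just described (illustrative, not the authors' code): an item counts as popular when its popularity metric exceeds the dataset median, and as unpopular otherwise.

```python
import statistics

def label_by_median(popularity_metrics):
    """Return 1 (popular) when the metric exceeds the dataset median, else 0 (unpopular)."""
    median = statistics.median(popularity_metrics)
    return [1 if m > median else 0 for m in popularity_metrics]

# Example: comment counts for five articles.
print(label_by_median([12, 450, 3, 78, 200]))  # -> [0, 1, 0, 0, 1]
```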
The details of the labeling procedure are discussed separately in the Datasets section.", "id": 331, "question": "Which shallow approaches did they experiment with?", "title": "Shallow reading with Deep Learning: Predicting popularity of online content using only its title"}, {"answers": ["", ""], "context": "Since the input of our method is textual data, we follow the approach of BIBREF15 and map the text into a fixed-size vector representation. To this end, we use word embeddings that were successfully applied in other domains. We follow BIBREF5 and use pre-trained GloVe word vectors BIBREF16 to initialize the embedding layer (also known as look-up table). Section SECREF18 discusses the embedding layer in more details.", "id": 332, "question": "Where do they obtain the news videos from?", "title": "Shallow reading with Deep Learning: Predicting popularity of online content using only its title"}, {"answers": ["", ""], "context": "Our method for popularity prediction using article's title is inspired by a bidirectional LSTM architecture. The overview of the model can be seen in Fig. FIGREF8 .", "id": 333, "question": "What is the source of the news articles?", "title": "Shallow reading with Deep Learning: Predicting popularity of online content using only its title"}, {"answers": ["", "Russsian"], "context": "With the steady growth in the commercial websites and social media venues, the access to users' reviews have become easier. As the amount of data that can be mined for opinion increased, commercial companies' interests for sentiment analysis increased as well. Sentiment analysis is an important part of understanding user behavior and opinions on products, places, or services.", "id": 334, "question": "which non-english language had the best performance?", "title": "Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data"}, {"answers": [""], "context": "There is a rich body of work in sentiment analysis including social media platforms such as Twitter BIBREF5 and Facebook BIBREF4 . One common factor in most of the sentiment analysis work is that features that are specific to sentiment analysis are extracted (e.g., sentiment lexicons) and used in different machine learning models. Lexical resources BIBREF0 , BIBREF1 , BIBREF4 for sentiment analysis such as SentiWordNet BIBREF6 , BIBREF7 , linguistic features and expressions BIBREF8 , polarity dictionaries BIBREF2 , BIBREF3 , other features such as topic-oriented features and syntax BIBREF9 , emotion tokens BIBREF10 , word vectors BIBREF11 , and emographics BIBREF12 are some of the information that are found useful for improving sentiment analysis accuracies. Although these features are beneficial, extracting them requires language-dependent data (e.g., a sentiment dictionary for Spanish is trained on Spanish data instead of using all data from different languages).", "id": 335, "question": "which non-english language was the had the worst results?", "title": "Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data"}, {"answers": ["", ""], "context": "In order to eliminate the need to find data and build separate models for each language, we propose a multilingual approach where a single model is built in the language where the largest resources are available. In this paper we focus on English as there are several sentiment analysis datasets in English. 
To make the English sentiment analysis model as generalizable as possible, we first start by training with a large dataset that has product reviews for different categories. Then, using the trained weights from the larger generic dataset, we make the model more specialized for a specific domain. We further train the model with domain-specific English reviews and use this trained model to score reviews that share the same domain from different languages. To be able to employ the trained model, test sets are first translated to English via machine translation and then inference takes place. Figure FIGREF1 shows our multilingual sentiment analysis approach. It is important to note that this approach does not utilize any resource in any of the languages of the test sets (e.g., word embeddings, lexicons, training set).", "id": 336, "question": "what datasets were used in evaluation?", "title": "Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data"}, {"answers": ["", ""], "context": "To evaluate the proposed approach for the multilingual sentiment analysis task, we conducted experiments. This section first presents the corpora used in this study, followed by experimental results.", "id": 337, "question": "what are the baselines?", "title": "Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data"}, {"answers": ["Using Google translation API.", ""], "context": "Two sets of corpora are used in this study, both of which are publicly available. The first set consists of English reviews and the second set contains restaurant reviews from four different languages (Spanish, Turkish, Dutch, Russian). We focus on polarity detection in reviews, therefore all datasets in this study have two class values (positive, negative).", "id": 338, "question": "how did the authors translate the reviews to other languages?", "title": "Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data"}, {"answers": ["", ""], "context": "For experimental results, we report the majority baseline for each language, where the majority baseline corresponds to a model's accuracy if it always predicts the majority class in the dataset. For example, if the dataset has 60% of all reviews positive and 40% negative, the majority baseline would be 60% because a model that always predicts \u201cpositive\u201d will be 60% accurate and will make mistakes 40% of the time.", "id": 339, "question": "what dataset was used for training?", "title": "Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data"}, {"answers": [""], "context": "Decoding intended speech or motor activity from brain signals is one of the major research areas in Brain Computer Interface (BCI) systems BIBREF0 , BIBREF1 . In particular, speech-related BCI technologies attempt to provide effective vocal communication strategies for controlling external devices through speech commands interpreted from brain signals BIBREF2 . Not only do they provide neuro-prosthetic help for people with speaking disabilities and neuro-muscular disorders like locked-in-syndrome, nasopharyngeal cancer, and amyotrophic lateral sclerosis (ALS), but also equip people with a better medium to communicate and express thoughts, thereby improving the quality of rehabilitation and clinical neurology BIBREF3 , BIBREF4 . Such devices also have applications in entertainment, preventive treatments, personal communication, games, etc. 
Furthermore, BCI technologies can be utilized in silent communication, as in noisy environments, or situations where any sort of audio-visual communication is infeasible.", "id": 340, "question": "How do they demonstrate that this type of EEG has discriminative information about the intended articulatory movements responsible for speech?", "title": "Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts"}, {"answers": ["", "presence/absence of consonants, presence/absence of phonemic nasal, presence/absence of bilabial, presence/absence of high-front vowels, and presence/absence of high-back vowels"], "context": "Cognitive learning process underlying articulatory speech production involves incorporation of intermediate feedback loops and utilization of past information stored in the form of memory as well as hierarchical combination of several feature extractors. To this end, we develop our mixed neural network architecture composed of three supervised and a single unsupervised learning step, discussed in the next subsections and shown in Fig. FIGREF1 . We formulate the problem of categorizing EEG data based on speech imagery as a non-linear mapping INLINEFORM0 of a multivariate time-series input sequence INLINEFORM1 to fixed output INLINEFORM2 , i.e, mathematically INLINEFORM3 : INLINEFORM4 , where c and t denote the EEG channels and time instants respectively.", "id": 341, "question": "What are the five different binary classification tasks?", "title": "Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts"}, {"answers": ["", "They use four-layered 2D CNN and two fully connected hidden layers on the channel covariance matrix to compute the spatial aspect."], "context": "We follow similar pre-processing steps on raw EEG data as reported in BIBREF17 (ocular artifact removal using blind source separation, bandpass filtering and subtracting mean value from each channel) except that we do not perform Laplacian filtering step since such high-pass filtering may decrease information content from the signals in the selected bandwidth.", "id": 342, "question": "How was the spatial aspect of the EEG signal computed?", "title": "Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts"}, {"answers": ["", ""], "context": "Multichannel EEG data is high dimensional multivariate time series data whose dimensionality depends on the number of electrodes. It is a major hurdle to optimally encode information from these EEG data into lower dimensional space. In fact, our investigation based on a development set (as we explain later) showed that well-known deep neural networks (e.g., fully connected networks such as convolutional neural networks, recurrent neural networks and autoencoders) fail to individually learn such complex feature representations from single-trial EEG data. Besides, we found that instead of using the raw multi-channel high-dimensional EEG requiring large training times and resource requirements, it is advantageous to first reduce its dimensionality by capturing the information transfer among the electrodes. Instead of the conventional approach of selecting a handful of channels as BIBREF17 , BIBREF18 , we address this by computing the channel cross-covariance, resulting in positive, semi-definite matrices encoding the connectivity of the electrodes. We define channel cross-covariance (CCV) between any two electrodes INLINEFORM0 and INLINEFORM1 as: INLINEFORM2 . 
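The exact definition above is lost to a placeholder in the source; as an assumption of the usual formulation, the NumPy sketch below computes the covariance between mean-centered channel signals, yielding a positive semi-definite channels-by-channels matrix whose diagonal holds the per-channel auto-covariances.

```python
import numpy as np

def channel_cross_covariance(eeg):
    """eeg: (channels, time) array of a single trial.
    Returns the (channels, channels) cross-covariance matrix; the diagonal entries
    are the auto-covariances of the individual channels."""
    centered = eeg - eeg.mean(axis=1, keepdims=True)
    return centered @ centered.T / (eeg.shape[1] - 1)

# Toy example: 8 channels, 500 time samples.
rng = np.random.default_rng(0)
ccv = channel_cross_covariance(rng.normal(size=(8, 500)))
print(ccv.shape)  # (8, 8)
```

The channel-rejection step described next can then compare each row's off-diagonal entries against its diagonal (auto-covariance) value.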
Next, we reject the channels which have significantly lower cross-covariance than auto-covariance values (where auto-covariance implies CCV on same electrode). We found this measure to be essential as the higher cognitive processes underlying speech planning and synthesis involve frequent information exchange between different parts of the brain. Hence, such matrices often contain more discriminative features and hidden information than mere raw signals. This is essentially different than our previous work BIBREF16 where we extract per-channel 1-D covariance information and feed it to the networks. We present our sample 2-D EEG cross-covariance matrices (of two individuals) in Fig. FIGREF2 .", "id": 343, "question": "What data was presented to the subjects to elicit event-related responses?", "title": "Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts"}, {"answers": ["", ""], "context": "In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as CNN.", "id": 344, "question": "How many electrodes were used on the subject in EEG sessions?", "title": "Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts"}, {"answers": ["", ""], "context": "As we found the individually-trained parallel networks (CNN and LSTM) to be useful (see Table TABREF12 ), we suspected the combination of these two networks could provide a more powerful discriminative spatial and temporal representation of the data than each independent network. As such, we concatenate the last fully-connected layer from the CNN with its counterpart in the LSTM to compose a single feature vector based on these two penultimate layers. Ultimately, this forms a joint spatio-temporal encoding of the cross-covariance matrix.", "id": 345, "question": "How many subjects does the EEG data come from?", "title": "Deep Learning the EEG Manifold for Phonological Categorization from Active Thoughts"}, {"answers": ["", ""], "context": "Event detection on microblogging platforms such as Twitter aims to detect events preemptively. A main task in event detection is detecting events of predetermined types BIBREF0, such as concerts or controversial events based on microposts matching specific event descriptions. This task has extensive applications ranging from cyber security BIBREF1, BIBREF2 to political elections BIBREF3 or public health BIBREF4, BIBREF5. Due to the high ambiguity and inconsistency of the terms used in microposts, event detection is generally performed though statistical machine learning models, which require a labeled dataset for model training. Data labeling is, however, a long, laborious, and usually costly process. 
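Looking back at the two EEG records above, the PyTorch sketch below illustrates the idea of concatenating the penultimate CNN and LSTM representations of the channel covariance matrix into a joint spatio-temporal encoding; all layer sizes are illustrative assumptions, not the papers' configuration.

```python
import torch
import torch.nn as nn

class JointSpatioTemporalEncoder(nn.Module):
    def __init__(self, channels=8, hidden=64):
        super().__init__()
        # Spatial branch: a small 2D CNN over the (channels x channels) covariance matrix.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * channels * channels, hidden), nn.ReLU(),
        )
        # Temporal branch: an LSTM over the rows of the same matrix, treated as a sequence.
        self.lstm = nn.LSTM(input_size=channels, hidden_size=hidden,
                            num_layers=2, batch_first=True)

    def forward(self, cov):                       # cov: (batch, channels, channels)
        spatial = self.cnn(cov.unsqueeze(1))      # (batch, hidden)
        _, (h, _) = self.lstm(cov)                # h: (num_layers, batch, hidden)
        temporal = h[-1]                          # (batch, hidden)
        return torch.cat([spatial, temporal], dim=-1)  # joint encoding, (batch, 2*hidden)

encoder = JointSpatioTemporalEncoder()
print(encoder(torch.randn(4, 8, 8)).shape)  # torch.Size([4, 128])
```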
For the case of micropost classification, though positive labels can be collected (e.g., using specific hashtags, or event-related date-time information), there is no straightforward way to generate negative labels useful for model training. To tackle this lack of negative labels and the significant manual efforts in data labeling, BIBREF1 (BIBREF1, BIBREF3) introduced a weak supervision based learning approach, which uses only positively labeled data, accompanied by unlabeled examples by filtering microposts that contain a certain keyword indicative of the event type under consideration (e.g., `hack' for cyber security). Another key technique in this context is expectation regularization BIBREF6, BIBREF7, BIBREF1. Here, the estimated proportion of relevant microposts in an unlabeled dataset containing a keyword is given as a keyword-specific expectation. This expectation is used in the regularization term of the model's objective function to constrain the posterior distribution of the model predictions. By doing so, the model is trained with an expectation on its prediction for microposts that contain the keyword. Such a method, however, suffers from two key problems:", "id": 346, "question": "Do they report results only on English data?", "title": "A Human-AI Loop Approach for Joint Keyword Discovery and Expectation Estimation in Micropost Event Detection"}, {"answers": ["", ""], "context": "Given a set of labeled and unlabeled microposts, our goal is to extract informative keywords and estimate their expectations in order to train a machine learning model. To achieve this goal, our proposed human-AI loop approach comprises two crowdsourcing tasks, i.e., micropost classification followed by keyword discovery, and a unified probabilistic model for expectation inference and model training. Figure FIGREF6 presents an overview of our approach. Next, we describe our approach from a process-centric perspective.", "id": 347, "question": "What type of classifiers are used?", "title": "A Human-AI Loop Approach for Joint Keyword Discovery and Expectation Estimation in Micropost Event Detection"}, {"answers": ["Tweets related to CyberAttack and tweets related to PoliticianDeath", ""], "context": "This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously. We start by formalizing the problem and introducing our model, before describing the model learning method.", "id": 348, "question": "Which real-world datasets are used?", "title": "A Human-AI Loop Approach for Joint Keyword Discovery and Expectation Estimation in Micropost Event Detection"}, {"answers": ["By involving humans for post-hoc evaluation of model's interpretability", ""], "context": "First, we introduce an expectation regularization technique for the weakly supervised learning of the target model $p_{\\theta ^{(t)}}(y|x)$. 
In this setting, the objective function of the target model is composed of two parts, corresponding to the labeled microposts $\\mathcal {L}$ and the unlabeled ones $\\mathcal {U}$.", "id": 349, "question": "How are the interpretability merits of the approach demonstrated?", "title": "A Human-AI Loop Approach for Joint Keyword Discovery and Expectation Estimation in Micropost Event Detection"}, {"answers": ["", "By evaluating the performance of the approach using accuracy and AUC"], "context": "To learn the keyword-specific expectation $e^{(t)}$ and the crowd worker reliability $\\pi ^{(n)}$ ($1\\le n\\le N$), we model the likelihood of the crowd-contributed labels $\\mathbf {A}$ as a function of these parameters. In this context, we view the expectation as the class prior, thus performing expectation inference as the learning of the class prior. By doing so, we connect expectation inference with model training.", "id": 350, "question": "How are the accuracy merits of the approach demonstrated?", "title": "A Human-AI Loop Approach for Joint Keyword Discovery and Expectation Estimation in Micropost Event Detection"}, {"answers": [""], "context": "Integrating model training with expectation inference, the overall objective function of our proposed model is given by:", "id": 351, "question": "How is the keyword specific expectation elicited from the crowd?", "title": "A Human-AI Loop Approach for Joint Keyword Discovery and Expectation Estimation in Micropost Event Detection"}, {"answers": ["", ""], "context": "The rapid growth in speech and small screen interfaces, particularly on mobile devices, has significantly influenced the way users interact with intelligent systems to satisfy their information needs. The growing interest in personal digital assistants, such as Amazon Alexa, Apple Siri, Google Assistant, and Microsoft Cortana, demonstrates the willingness of users to employ conversational interactions BIBREF0. As a result, conversational information seeking (CIS) has been recognized as a major emerging research area in the Third Strategic Workshop on Information Retrieval (SWIRL 2018) BIBREF1.", "id": 352, "question": "Does the paper provide any case studies to illustrate how one can use Macaw for CIS research?", "title": "Macaw: An Extensible Conversational Information Seeking Platform"}, {"answers": ["", ""], "context": "Macaw has a modular design, with the goal of making it easy to configure and add new modules such as a different user interface or different retrieval module. The overall setup also follows a Model-View-Controller (MVC) like architecture. The design decisions have been made to smooth the Macaw's adoptions and extensions. Macaw is implemented in Python, thus machine learning models implemented using PyTorch, Scikit-learn, or TensorFlow can be easily integrated into Macaw. The high-level overview of Macaw is depicted in FIGREF8. The user interacts with the interface and the interface produces a Message object from the current interaction of user. The interaction can be in multi-modal form, such as text, speech, image, and click. Macaw stores all interactions in an \u201cInteraction Database\u201d. For every interaction, Macaw looks for most recent user-system interactions (including the system's responses) to create a list of Messages, called the conversation list. It is then dispatched to multiple information seeking (and related) actions. The actions run in parallel, and each should respond within a pre-defined time interval. 
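A generic sketch of that dispatch pattern (hypothetical names, not Macaw's actual API): the conversation list is sent to several actions in parallel, each bounded by a timeout, and only the actions that respond in time contribute outputs for the selection step described next.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

TIMEOUT_SECONDS = 2.0  # pre-defined per-action time budget (illustrative value)

def retrieval_action(conversation):   # hypothetical action
    return "retrieval result for: " + conversation[-1]

def qa_action(conversation):          # hypothetical action
    return "answer for: " + conversation[-1]

def dispatch(conversation, actions):
    """Run all actions in parallel; keep only those that respond within the time budget."""
    outputs = {}
    with ThreadPoolExecutor(max_workers=len(actions)) as pool:
        futures = {name: pool.submit(fn, conversation) for name, fn in actions.items()}
        for name, fut in futures.items():
            try:
                outputs[name] = fut.result(timeout=TIMEOUT_SECONDS)
            except TimeoutError:
                pass  # the action missed its deadline and is ignored
    return outputs

conv = ["what is conversational information seeking?"]
print(dispatch(conv, {"retrieval": retrieval_action, "qa": qa_action}))
```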
The output selection component selects from (or potentially combines) the outputs generated by different actions and creates a Message object as the system's response. This message is logged into the interaction database and is sent to the interface to be presented to the user. Again, the response message can be multi-modal and include text, speech, link, list of options, etc.", "id": 353, "question": "What functionality does Macaw provide?", "title": "Macaw: An Extensible Conversational Information Seeking Platform"}, {"answers": ["", "a setup where the seeker interacts with a real conversational interface and the wizard, an intermediary, performs actions related to the seeker's message"], "context": "The overview of retrieval and question answering actions in Macaw is shown in FIGREF17. These actions consist of the following components:", "id": 354, "question": "What is a wizard of oz setup?", "title": "Macaw: An Extensible Conversational Information Seeking Platform"}, {"answers": ["", ""], "context": "We have implemented the following interfaces for Macaw:", "id": 355, "question": "What interface does Macaw currently have?", "title": "Macaw: An Extensible Conversational Information Seeking Platform"}, {"answers": [""], "context": "The current implementation of Macaw lacks the following actions. We intend to incrementally improve Macaw by supporting more actions and even more advanced techniques for the developed actions.", "id": 356, "question": "What modalities are supported by Macaw?", "title": "Macaw: An Extensible Conversational Information Seeking Platform"}, {"answers": ["", ""], "context": "Macaw is distributed under the MIT License. We welcome contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. This project has adopted the Microsoft Open Source Code of Conduct.", "id": 357, "question": "What are the different modules in Macaw?", "title": "Macaw: An Extensible Conversational Information Seeking Platform"}, {"answers": ["", ""], "context": "Wikipedia is the largest source of open and collaboratively curated knowledge in the world. Introduced in 2001, it has evolved into a reference work with around 5m pages for the English Wikipedia alone. In addition, entities and event pages are updated quickly via collaborative editing and all edits are encouraged to include source citations, creating a knowledge base which aims at being both timely as well as authoritative. As a result, it has become the preferred source of information consumption about entities and events. Moreso, this knowledge is harvested and utilized in building knowledge bases like YAGO BIBREF0 and DBpedia BIBREF1 , and used in applications like text categorization BIBREF2 , entity disambiguation BIBREF3 , entity ranking BIBREF4 and distant supervision BIBREF5 , BIBREF6 .", "id": 358, "question": "Do they report results only on English data?", "title": "Automated News Suggestions for Populating Wikipedia Entity Pages"}, {"answers": ["For Article-Entity placement, they consider two baselines: the first one using only salience-based features, and the second baseline checks if the entity appears in the title of the article. 
\n\nFor Article-Section Placement, they consider two baselines: the first picks the section with the highest lexical similarity to the article, and the second one picks the most frequent section.", ""], "context": "As we suggest a new problem there is no current work addressing exactly the same task. However, our task has similarities to Wikipedia page generation and knowledge base acceleration. In addition, we take inspiration from Natural Language Processing (NLP) methods for salience detection.", "id": 359, "question": "What baseline model is used?", "title": "Automated News Suggestions for Populating Wikipedia Entity Pages"}, {"answers": ["", ""], "context": "We are interested in named entities mentioned in documents. An entity INLINEFORM0 can be identified by a canonical name, and can be mentioned differently in text via different surface forms. We canonicalize these mentions to entity pages in Wikipedia, a method typically known as entity linking. We denote the set of canonicalized entities extracted and linked from a news article INLINEFORM1 as INLINEFORM2 . For example, in Figure FIGREF7 , entities are canonicalized into Wikipedia entity pages (e.g. Odisha is canonicalized to the corresponding article). For a collection of news articles INLINEFORM3 , we further denote the resulting set of entities by INLINEFORM4 .", "id": 360, "question": "What news article sources are used?", "title": "Automated News Suggestions for Populating Wikipedia Entity Pages"}, {"answers": ["They use a multi-class classifier to determine the section it should be cited"], "context": "We approach the news suggestion problem by decomposing it into two tasks:", "id": 361, "question": "How do they determine the exact section to use the input article?", "title": "Automated News Suggestions for Populating Wikipedia Entity Pages"}, {"answers": ["KL-divergences of language models for the news article and the already added news references", ""], "context": "In this section, we provide an overview of the news suggestion approach to Wikipedia entity pages (see Figure FIGREF7 ). The approach is split into two tasks: (i) article-entity (AEP) and (ii) article-section (ASP) placement. For a Wikipedia snapshot INLINEFORM0 and a news corpus INLINEFORM1 , we first determine which news articles should be suggested to an entity INLINEFORM2 . We will denote our approach for AEP by INLINEFORM3 . Finally, we determine the most appropriate section for the ASP task and we denote our approach with INLINEFORM4 .", "id": 362, "question": "What features are used to represent the novelty of news articles to entity pages?", "title": "Automated News Suggestions for Populating Wikipedia Entity Pages"}, {"answers": ["Salience features positional features, occurrence frequency and the internal POS structure of the entity and the sentence it occurs in.\nThe relative authority of entity features: comparative relevance of the news article to the different entities occurring in it.", ""], "context": "In this step we learn the function INLINEFORM0 to correctly determine whether INLINEFORM1 should be suggested for INLINEFORM2 , basically a binary classification model (0=`non-relevant' and 1=`relevant'). Note that we are mainly interested in finding the relevant pairs in this task. For every news article, the number of disambiguated entities is around 30 (but INLINEFORM3 is suggested for only two of them on average). 
Therefore, the distribution of `non-relevant' and `relevant' pairs is skewed towards the former, and by simply choosing the `non-relevant' label we can achieve a high accuracy for INLINEFORM4 . Finding the relevant pairs is therefore a considerable challenge.", "id": 363, "question": "What features are used to represent the salience and relative authority of entities?", "title": "Automated News Suggestions for Populating Wikipedia Entity Pages"}, {"answers": [""], "context": "The automatic processing of medical texts and documents plays an increasingly important role in the recent development of the digital health area. To enable dedicated Natural Language Processing (NLP) that is highly accurate with respect to medically relevant categories, manually annotated data from this domain is needed. One category of high interest and relevance is medical entities. Only very few annotated corpora in the medical domain exist. Many of them focus on the relation between chemicals and diseases or proteins and diseases, such as the BC5CDR corpus BIBREF0, the Comparative Toxicogenomics Database BIBREF1, the FSU PRotein GEne corpus BIBREF2 or the ADE (adverse drug effect) corpus BIBREF3. The NCBI Disease Corpus BIBREF4 contains condition mention annotations along with annotations of symptoms. Several new corpora of annotated case reports were made available recently. grouin-etal-2019-clinical presented a corpus with medical entity annotations of clinical cases written in French, copdPhenotype presented a corpus focusing on phenotypic information for chronic obstructive pulmonary disease, while 10.1093/database/bay143 presented a corpus focusing on identifying main finding sentences in case reports.", "id": 364, "question": "Do they experiment with other tasks?", "title": "Named Entities in Medical Case Reports: Corpus and Experiments"}, {"answers": ["", ""], "context": "Case reports are standardized in the CARE guidelines BIBREF5. They represent a detailed description of the symptoms, signs, diagnosis, treatment, and follow-up of an individual patient. We focus on documents freely available through PubMed Central (PMC). The presentation of the patient's case can usually be found in a dedicated section or the abstract. We perform a manual annotation of all mentions of case entities, conditions, findings, factors and modifiers. The scope of our manual annotation is limited to the presentation of a patient's signs and symptoms. In addition, we annotate the title of the case report.", "id": 365, "question": "What baselines do they introduce?", "title": "Named Entities in Medical Case Reports: Corpus and Experiments"}, {"answers": ["", ""], "context": "We annotate the following entities:", "id": 366, "question": "How large is the corpus?", "title": "Named Entities in Medical Case Reports: Corpus and Experiments"}, {"answers": ["Experienced medical doctors used a linguistic annotation tool to annotate entities.", ""], "context": "We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. 
The same annotators annotated an overall collection of 53 case reports.", "id": 367, "question": "How was annotation performed?", "title": "Named Entities in Medical Case Reports: Corpus and Experiments"}, {"answers": ["", ""], "context": "The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represents each document's properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers.", "id": 368, "question": "How many documents are in the new corpus?", "title": "Named Entities in Medical Case Reports: Corpus and Experiments"}, {"answers": ["", ""], "context": "The corpus consists of 53 documents, which contain an average of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24.", "id": 369, "question": "What baseline systems are proposed?", "title": "Named Entities in Medical Case Reports: Corpus and Experiments"}, {"answers": ["", ""], "context": "Social media platforms have made the spreading of fake news easier, faster, and able to reach a wider audience. Social media offer another feature which is the anonymity for the authors, and this opens the door for many suspicious individuals or organizations to utilize these platforms. Recently, there has been an increase in the spread of fake news and rumors over the web and social media BIBREF0. Fake news in social media vary in the intention to mislead. Some of these stories are spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Others, such as propaganda, hoaxes, and clickbaits, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious news annotations should be done on a tweet rather than an account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time consuming due to the high volume of available tweets. Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level.", "id": 370, "question": "How did they obtain the dataset?", "title": "FacTweet: Profiling Fake News Twitter Accounts"}, {"answers": ["", "Activation function is hyperparameter. Possible values: relu, selu, tanh."], "context": "Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending order and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account. 
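A minimal sketch of that chunking step (illustrative, not the authors' code): tweets are sorted by posting date and split into consecutive chunks that all inherit the account's label.

```python
def chunk_timeline(tweets, n_chunks, account_label):
    """tweets: list of (posting_date, text) pairs.
    Returns up to n_chunks labeled chunks, each a consecutive slice of the
    date-sorted timeline; for simplicity, any remainder beyond n_chunks is dropped."""
    ordered = sorted(tweets, key=lambda t: t[0])          # ascending by posting date
    size = max(1, len(ordered) // n_chunks)
    chunks = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    return [(chunk, account_label) for chunk in chunks[:n_chunks]]

timeline = [("2019-03-02", "t3"), ("2019-01-15", "t1"),
            ("2019-02-20", "t2"), ("2019-04-01", "t4")]
print(chunk_timeline(timeline, n_chunks=2, account_label="non-factual"))
```

The account-level prediction described later in this record can then be recovered with a simple majority vote over the per-chunk predictions.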
We extract a set of features from each chunk and we feed them into a recurrent neural network to model the sequential flow of the chunks' tweets. We use an attention layer with dropout to attend over the most important tweets in each chunk. Finally, the representation is fed into a softmax layer to produce a probability distribution over the account types and thus predict the factuality of the accounts. Since we have many chunks for each account, the label for an account is obtained by taking the majority class of the account's chunks.", "id": 371, "question": "What activation function do they use in their model?", "title": "FacTweet: Profiling Fake News Twitter Accounts"}, {"answers": ["", ""], "context": "Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum number of tweets allowed by the Twitter API. Table TABREF13 presents statistics on our dataset.", "id": 372, "question": "What baselines do they compare to?", "title": "FacTweet: Profiling Fake News Twitter Accounts"}, {"answers": ["Chunks is group of tweets from single account that is consecutive in time - idea is that this group can show secret intention of malicious accounts.", "sequence of $s$ tweets"], "context": "In this paper, we proposed a model that utilizes chunked timelines of tweets and a recurrent neural model in order to infer the factuality of a Twitter news account. Our experimental results indicate the importance of analyzing the tweet stream in chunks, as well as the benefits of heterogeneous knowledge sources (i.e., lexica as well as text) in order to capture factuality. In future work, we would like to extend this line of research with further in-depth analysis to understand how the used features change across the accounts' streams. Moreover, we would like to take our approach one step further by incorporating explicit temporal information, e.g., using timestamps. Crucially, we are also interested in developing a multilingual version of our approach, for instance by leveraging the now ubiquitous cross-lingual embeddings BIBREF22, BIBREF23.", "id": 373, "question": "How are chunks defined?", "title": "FacTweet: Profiling Fake News Twitter Accounts"}, {"answers": [""], "context": "Summarization of large texts is still an open problem in language processing. People nowadays have less time and patience to go through large pieces of text, which makes automatic summarization important. 
Automatic summarization has significant applications in summarizing large texts like stories, journal papers, news articles and even larger texts like books.", "id": 374, "question": "What is the performance of their method?", "title": "Text Summarization using Abstract Meaning Representation"}, {"answers": ["Quantitative evaluation methods using ROUGE, Recall, Precision and F1.", ""], "context": "AMR was introduced by BIBREF1 with the aim to induce work on statistical Natural Language Understanding and Generation. AMR represents meaning using graphs. AMR graphs are rooted, directed, edge and vertex labeled graphs. Figure FIGREF4 shows the graphical representation of the AMR graph of the sentence \"I looked carefully all around me\" generated by JAMR parser ( BIBREF2 ). The graphical representation was produced using AMRICA BIBREF3 . The nodes in the AMR are labeled with concepts as in Figure FIGREF4 around represents a concept. Edges contains the information regarding the relations between the concepts. In Figure FIGREF4 direction is the relation between the concepts look-01 and around. AMR relies on Propbank for semantic relations (edge labels). Concepts can also be of the form run-01 where the index 01 represents the first sense of the word run. Further details about the AMR can be found in the AMR guidelines BIBREF4 .", "id": 375, "question": "Which evaluation methods are used?", "title": "Text Summarization using Abstract Meaning Representation"}, {"answers": ["", ""], "context": "We used two datasets for the task - AMR Bank BIBREF10 and CNN-Dailymail ( BIBREF11 BIBREF12 ). We use the proxy report section of the AMR Bank, as it is the only one that is relevant for the task because it contains the gold-standard (human generated) AMR graphs for news articles, and the summaries. In the training set the stories and summaries contain 17.5 sentences and 1.5 sentences on an average respectively. The training and test sets contain 298 and 33 summary document pairs respectively.", "id": 376, "question": "What dataset is used in this paper?", "title": "Text Summarization using Abstract Meaning Representation"}, {"answers": ["", ""], "context": "The pipeline consists of three steps, first convert all the given story sentences to there AMR graphs followed by extracting summary graphs from the story sentence graphs and finally generating sentences from these extracted summary graphs. In the following subsections we explain each of the methods in greater detail.", "id": 377, "question": "Which other methods do they compare with?", "title": "Text Summarization using Abstract Meaning Representation"}, {"answers": ["", " Two methods: first is to simply pick initial few sentences, second is to capture the relation between the two most important entities (select the first sentence which contains both these entities)."], "context": "As the first step we convert the story sentences to their Abstract Meaning Representations. We use JAMR-Parser version 2 BIBREF2 as it\u2019s openly available and has a performance close to the state of the art parsers for parsing the CNN-Dailymail corpus. 
For the AMR-bank we have the gold-standard AMR parses but we still parse the input stories with JAMR-Parser to study the effect of using graphs produced by JAMR-Parser instead of the gold-standard AMR graphs.", "id": 378, "question": "How are sentences selected from the summary graph?", "title": "Text Summarization using Abstract Meaning Representation"}, {"answers": ["", "", ""], "context": "Offensive content has become pervasive in social media and a reason of concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which then can be deleted or set aside for human moderation. In the last few years, there have been several studies published on the application of computational methods to deal with this problem. Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 .", "id": 379, "question": "What models are used in the experiment?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": [""], "context": "Different abusive and offense language identification sub-tasks have been explored in the past few years including aggression identification, bullying detection, hate speech, toxic comments, and offensive language.", "id": 380, "question": "What are the differences between this dataset and pre-existing ones?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": ["", "", ""], "context": "In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and type (B) and target (C) of the offensive language. 
Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 .", "id": 381, "question": "In what language are the tweets?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": ["", "14,100 tweets", "Dataset contains total of 14100 annotations."], "context": "Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets.", "id": 382, "question": "What is the size of the new dataset?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": ["non-targeted profanity and swearing, targeted insults such as cyberbullying, offensive content related to ethnicity, gender or sexual orientation, political affiliation, religious belief, and anything belonging to hate speech", "", ""], "context": "Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (INT) insults and threats.", "id": 383, "question": "What kinds of offensive content are explored?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": [""], "context": "Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH).", "id": 384, "question": "What is the best performing model?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": [""], "context": "The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. 
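A small sketch of that normalization step (the regex patterns and placeholder tokens are assumptions, not necessarily the ones used for OLID): URLs and user mentions are replaced with generic placeholders before annotation.

```python
import re

URL_PATTERN = re.compile(r"https?://\S+")
MENTION_PATTERN = re.compile(r"@\w+")

def anonymize(tweet):
    """Substitute URLs and user mentions with placeholders."""
    tweet = URL_PATTERN.sub("URL", tweet)
    return MENTION_PATTERN.sub("@USER", tweet)

print(anonymize("@BreitBartNews this is outrageous https://t.co/abc123"))
# -> "@USER this is outrageous URL"
```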
We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 .", "id": 385, "question": "How many annotators participated?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": [""], "context": "We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM.", "id": 386, "question": "What is the definition of offensive language?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": [""], "context": "The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80.", "id": 387, "question": "What are the three layers of the annotation scheme?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": ["Level A: 14100 Tweets\nLevel B: 4640 Tweets\nLevel C: 4089 Tweets"], "context": "In this experiment, the two systems were trained to discriminate between insults and threats (TIN) and untargeted (UNT) offenses, which generally refer to profanity. The results are shown in Table TABREF19 .", "id": 388, "question": "How long is the dataset for each step of hierarchy?", "title": "Predicting the Type and Target of Offensive Posts in Social Media"}, {"answers": ["", ""], "context": "Automatic text summarization has been an active research area in natural language processing for several decades. 
To compare and evaluate the performance of different summarization systems, the most intuitive approach is assessing the quality of the summaries by human evaluators. However, manual evaluation is expensive and the obtained results are subjective and difficult to reproduce BIBREF0 . To address these problems, automatic evaluation measures for summarization have been proposed. Rouge BIBREF1 is one of the first and most widely used metrics in summarization evaluation. It facilitates evaluation of system generated summaries by comparing them to a set of human written gold-standard summaries. It is inspired by the success of a similar metric Bleu BIBREF2 which is being used in Machine Translation (MT) evaluation. The main success of Rouge is due to its high correlation with human assessment scores on standard benchmarks BIBREF1 . Rouge has been used as one of the main evaluation metrics in later summarization benchmarks such as TAC[1] BIBREF3 .", "id": 389, "question": "Do the authors report results only on English data?", "title": "Revisiting Summarization Evaluation for Scientific Articles"}, {"answers": ["The content relevance between the candidate summary and the human summary is evaluated using information retrieval - using the summaries as search queries and compare the overlaps of the retrieved results. ", ""], "context": "Rouge has been the most widely used family of metrics in summarization evaluation. In the following, we briefly describe the different variants of Rouge:", "id": 390, "question": "In the proposed metric, how is content relevance measured?", "title": "Revisiting Summarization Evaluation for Scientific Articles"}, {"answers": ["", "Using Pearson corelation measure, for example, ROUGE-1-P is 0.257 and ROUGE-3-F 0.878."], "context": "Rouge functions based on the assumption that in order for a summary to be of high quality, it has to share many words or phrases with a human gold summary. However, different terminology may be used to refer to the same concepts and thus relying only on lexical overlaps may underrate content quality scores. To overcome this problem, we propose an approach based on the premise that concepts take meanings from the context they are in, and that related concepts co-occur frequently.", "id": 391, "question": "What different correlations result when using different variants of ROUGE scores?", "title": "Revisiting Summarization Evaluation for Scientific Articles"}, {"answers": ["", ""], "context": "To the best of our knowledge, the only scientific summarization benchmark is from TAC 2014 summarization track. For evaluating the effectiveness of Rouge variants and our metric (Sera), we use this benchmark, which consists of 20 topics each with a biomedical journal article and 4 gold human written summaries.", "id": 392, "question": "What manual Pyramid scores are used?", "title": "Revisiting Summarization Evaluation for Scientific Articles"}, {"answers": [""], "context": "In the TAC 2014 summarization track, Rouge was suggested as the evaluation metric for summarization and no human assessment was provided for the topics. Therefore, to study the effectiveness of the evaluation metrics, we use the semi-manual Pyramid evaluation framework BIBREF7 , BIBREF8 . In the pyramid scoring, the content units in the gold human written summaries are organized in a pyramid. In this pyramid, the content units are organized in tiers and higher tiers of the pyramid indicate higher importance. 
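Concretely, a common way to turn such a pyramid into a score (a sketch of the standard formulation, not necessarily the exact variant used in this work) is to sum the tier weights of the content units matched in a candidate summary and normalize by the best weight achievable with the same number of units.

```python
def pyramid_score(matched_scus, scu_weights):
    """matched_scus: ids of content units found in the candidate summary.
    scu_weights: tier weight of every content unit in the pyramid."""
    observed = sum(scu_weights[scu] for scu in matched_scus)
    best_possible = sum(sorted(scu_weights.values(), reverse=True)[:len(matched_scus)])
    return observed / best_possible if best_possible else 0.0

weights = {"scu1": 4, "scu2": 3, "scu3": 1, "scu4": 1}
print(pyramid_score({"scu2", "scu3"}, weights))  # (3 + 1) / (4 + 3) = 0.571...
```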
The content quality of a given candidate summary is evaluated with respect to this pyramid.", "id": 393, "question": "What is the common belief that this paper refutes? (c.f. 'contrary to the common belief, ROUGE is not much [sic] reliable'", "title": "Revisiting Summarization Evaluation for Scientific Articles"}, {"answers": ["", ""], "context": "Natural Language Processing (NLP) models are shown to capture unwanted biases and stereotypes found in the training data which raise concerns about socioeconomic, ethnic and gender discrimination when these models are deployed for public use BIBREF0 , BIBREF1 .", "id": 394, "question": "which existing strategies are compared?", "title": "Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function"}, {"answers": ["", ""], "context": "Recently, the study of bias in NLP applications has received increasing attention from researchers. Most relevant work in this domain can be broadly divided into two categories: word embedding debiasing and data debiasing by preprocessing.", "id": 395, "question": "what dataset was used?", "title": "Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function"}, {"answers": [""], "context": "For the training data, we use Daily Mail news articles released by BIBREF9 . This dataset is composed of 219,506 articles covering a diverse range of topics including business, sports, travel, etc., and is claimed to be biased and sensational BIBREF5 . For manageability, we randomly subsample 5% of the text. The subsample has around 8.25 million tokens in total.", "id": 396, "question": "what kinds of male and female words are looked at?", "title": "Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function"}, {"answers": ["Using INLINEFORM0 and INLINEFORM1", ""], "context": "We use a pre-trained 300-dimensional word embedding, GloVe, by BIBREF10 . We apply random search to the hyperparameter tuning of the LSTM language model. The best hyperparameters are as follows: 2 hidden layers each with 300 units, a sequence length of 35, a learning rate of 20 with an annealing schedule of decay starting from 0.25 to 0.95, a dropout rate of 0.25 and a gradient clip of 0.25. We train our models for 150 epochs, use a batch size of 48, and set early stopping with a patience of 5.", "id": 397, "question": "how is mitigation of gender bias evaluated?", "title": "Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function"}, {"answers": ["", ""], "context": "Language models are usually trained using cross-entropy loss. Cross-entropy loss at time step INLINEFORM0 is INLINEFORM1 ", "id": 398, "question": "what bias evaluation metrics are used?", "title": "Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function"}, {"answers": [""], "context": "A large majority of the human knowledge is recorded through text documents. That is why ability for a system to automatically infer information from text without any structured data has become a major challenge. Answering questions about a given document is a relevant proxy task that has been proposed as a way to evaluate the reading ability of a given model. In this configuration, a text document such as a news article, a document from Wikipedia or any type of text is presented to a machine with an associated set of questions. The system is then expected to answer these questions and evaluated by its accuracy on this task. 
The machine reading framework is very general and we can imagine a wide range of questions that could cover most of the standard natural language processing tasks. For example, the task of named entity recognition can be formulated as a machine reading task where the document is a single sentence and the question would be 'What are the named entities mentioned in this sentence?'. These natural language interactions are an important objective for reading systems.", "id": 399, "question": "What kind of questions are present in the dataset?", "title": "ReviewQA: a relational aspect-based opinion reading dataset"}, {"answers": ["", ""], "context": "ReviewQA is proposed as a novel dataset that complements the existing collection. Indeed, a wide range of available datasets, which evaluate models on different types of documents, can only be valuable for designing efficient models and learning protocols. In the following part, we describe several of these datasets.", "id": 400, "question": "What baselines are presented?", "title": "ReviewQA: a relational aspect-based opinion reading dataset"}, {"answers": ["", "Detection of an aspect in a review, Prediction of the customer general satisfaction, Prediction of the global trend of an aspect in a given review, Prediction of whether the rating of a given aspect is above or under a given value, Prediction of the exact rating of an aspect in a review, Prediction of the list of all the positive/negative aspects mentioned in the review, Comparison between aspects, Prediction of the strengths and weaknesses in a review"], "context": "Sentiment analysis is one of the historical tasks of Natural Language Processing. It is an important challenge for companies, restaurants, and hotels that aim to analyze customer satisfaction regarding products and quality of services. Given a text document, the objective is to predict its overall polarity. Generally, it can be positive, negative or neutral. This analysis gives a quick overview of a general sentiment over a set of documents, but this framework tends to be restrictive. Indeed, one document tends to express multiple opinions on different aspects. For instance, in the sentence: The fish was very good but the service was terrible, there is not a general dominant sentiment, and a finer analysis is needed. The task of aspect-based sentiment analysis aims to predict the polarity of a sentence regarding a given aspect. In the previous example a positive polarity should be associated with the aspect food, and on the contrary, a negative sentiment is expressed regarding the quality of the service.", "id": 401, "question": "What tasks were evaluated?", "title": "ReviewQA: a relational aspect-based opinion reading dataset"}, {"answers": ["", ""], "context": "We think that evaluating the task of sentiment analysis through the setup of question-answering is a relevant playground for machine reading research. Indeed, natural language questions about the different aspects of the targeted venues are the typical kind of questions we want to be able to ask a system. In this context, we introduce a set of reasoning question types over the relationships between aspects. We propose ReviewQA, a dataset of natural language questions over hotel reviews. These questions are divided into 8 groups, according to the competency required to answer them. 
In this section, we describe each task and the process followed to generate this dataset.", "id": 402, "question": "What language are the reviews in?", "title": "ReviewQA: a relational aspect-based opinion reading dataset"}, {"answers": ["", ""], "context": "We used a set of reviews extracted from the TripAdvisor website and originally proposed in BIBREF10 and BIBREF11 . This corpus is available at http://www.cs.virginia.edu/~hw5x/Data/LARA/TripAdvisor/TripAdvisorJson.tar.bz2. Each review comes with the name of the associated hotel, a title, an overall rating, a comment and a list of rated aspects. From 0 to 7 aspects, among value, room, location, cleanliness, check-in/front desk, service, business service, can possibly be rated in a review. Figure FIGREF8 displays a review extracted from this dataset.", "id": 403, "question": "Where are the hotel reviews from?", "title": "ReviewQA: a relational aspect-based opinion reading dataset"}, {"answers": ["", ""], "context": "Writing errors can occur in many different forms \u2013 from relatively simple punctuation and determiner errors, to mistakes including word tense and form, incorrect collocations and erroneous idioms. Automatically identifying all of these errors is a challenging task, especially as the amount of available annotated data is very limited. Rei2016 showed that while some error detection algorithms perform better than others, it is additional training data that has the biggest impact on improving performance.", "id": 404, "question": "What was the baseline used?", "title": "Artificial Error Generation with Machine Translation and Syntactic Patterns"}, {"answers": ["Combining pattern based and Machine translation approaches gave the best overall F0.5 scores. It was 49.11 for FCE dataset , 21.87 for the first annotation of CoNLL-14, and 30.13 for the second annotation of CoNLL-14. "], "context": "We investigate two alternative methods for AEG. The models receive grammatically correct text as input and modify certain tokens to produce incorrect sequences. The alternative versions of each sentence are aligned using Levenshtein distance, allowing us to identify specific words that need to be marked as errors. While these alignments are not always perfect, we found them to be sufficient for practical purposes, since alternative alignments of similar sentences often result in the same binary labeling. Future work could explore more advanced alignment methods, such as proposed by felice-bryant-briscoe.", "id": 405, "question": "What are their results on both datasets?", "title": "Artificial Error Generation with Machine Translation and Syntactic Patterns"}, {"answers": ["", ""], "context": "We treat AEG as a translation task \u2013 given a correct sentence as input, the system would learn to translate it to contain likely errors, based on a training corpus of parallel data. Existing SMT approaches are already optimised for identifying context patterns that correspond to specific output sequences, which is also required for generating human-like errors. 
The reverse of this idea, translating from incorrect to correct sentences, has been shown to work well for error correction tasks BIBREF2 , BIBREF3 , and round-trip translation has also been shown to be promising for correcting grammatical errors BIBREF4 .", "id": 406, "question": "What textual patterns are extracted?", "title": "Artificial Error Generation with Machine Translation and Syntactic Patterns"}, {"answers": ["", ""], "context": "We also describe a method for AEG using patterns over words and part-of-speech (POS) tags, extracting known incorrect sequences from a corpus of annotated corrections. This approach is based on the best method identified by Felice2014a, using error type distributions; while they covered only 5 error types, we relax this restriction and learn patterns for generating all types of errors.", "id": 407, "question": "Which annotated corpus did they use?", "title": "Artificial Error Generation with Machine Translation and Syntactic Patterns"}, {"answers": ["", ""], "context": "We construct a neural sequence labeling model for error detection, following the previous work BIBREF12 , BIBREF13 . The model receives a sequence of tokens as input and outputs a prediction for each position, indicating whether the token is correct or incorrect in the current context. The tokens are first mapped to a distributed vector space, resulting in a sequence of word embeddings. Next, the embeddings are given as input to a bidirectional LSTM BIBREF14 , in order to create context-dependent representations for every token. The hidden states from forward- and backward-LSTMs are concatenated for each word position, resulting in representations that are conditioned on the whole sequence. This concatenated vector is then passed through an additional feedforward layer, and a softmax over the two possible labels (correct and incorrect) is used to output a probability distribution for each token. The model is optimised by minimising categorical cross-entropy with respect to the correct labels. We use AdaDelta BIBREF15 for calculating an adaptive learning rate during training, which accounts for a higher baseline performance compared to previous results.", "id": 408, "question": "Which languages are explored in this paper?", "title": "Artificial Error Generation with Machine Translation and Syntactic Patterns"}, {"answers": ["", ""], "context": "Text simplification aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities to understand text better. Methods of automatic text simplification can generally be divided into three categories: lexical simplification (LS) BIBREF0 , BIBREF1 , rule-based BIBREF2 , and machine translation (MT) BIBREF3 , BIBREF4 . LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: first, a great number of transformation rules are required for reasonable coverage; second, the rules should be applied based on the specific context; third, the syntax and semantic meaning of the sentence are hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human involvement to manually define these rules, and it is impossible to give all possible simplification rules. 
The MT-based approach has attracted great attention in the last several years; it addresses text simplification as a monolingual machine translation problem, translating from 'ordinary' to 'simplified' sentences.", "id": 409, "question": "what language does this paper focus on?", "title": "Improving Neural Text Simplification Model with Simplified Corpora"}, {"answers": ["", ""], "context": "Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12 . It has attracted much attention recently as it could make texts more accessible to wider audiences, and, used as a pre-processing step, improve the performance of various NLP tasks and systems BIBREF13 , BIBREF14 , BIBREF15 . Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. It is very easy to mix up the automatic TS task and the automatic summarization task BIBREF3 , BIBREF16 , BIBREF6 . TS is different from text summarization as the focus of text summarization is to reduce the length and redundant content.", "id": 410, "question": "what evaluation metrics did they use?", "title": "Improving Neural Text Simplification Model with Simplified Corpora"}, {"answers": ["For the WikiLarge dataset, the improvement over baseline NMT is 2.11 BLEU, 1.7 FKGL and 1.07 SARI.\nFor the WikiSmall dataset, the improvement over baseline NMT is 8.37 BLEU.", ""], "context": "We collected a simplified dataset from Simple English Wikipedia, which is freely available and has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . Simple English Wikipedia is much easier to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. We then split them into sentences with Stanford CoreNLP BIBREF21 , and deleted sentences with fewer than 10 or more than 40 words. After removing repeated sentences, we chose 600K sentences as the simplified data with 11.6M words, and the size of vocabulary is 82K.", "id": 411, "question": "by how much did their model improve?", "title": "Improving Neural Text Simplification Model with Simplified Corpora"}, {"answers": [""], "context": "Our work is built on attention-based NMT BIBREF5 as an encoder-decoder network with recurrent neural networks (RNN), which simultaneously conducts dynamic alignment and generation of the target simplified sentence.", "id": 412, "question": "what state of the art methods did they compare with?", "title": "Improving Neural Text Simplification Model with Simplified Corpora"}, {"answers": ["", "WikiSmall 89 142 sentence pair and WikiLarge 298 761 sentence pairs. "], "context": "We train an auxiliary system using an NMT model from the simplified sentence to the ordinary sentence, which is first trained on the available parallel data. For leveraging simplified sentences to improve the quality of the NMT model for text simplification, we propose to adapt the back-translation approach proposed by Sennrich et al. BIBREF11 to our scenario. More concretely, given one sentence from the simplified data, we use the simplified-ordinary system in translate mode with greedy decoding to translate it into an ordinary sentence, which is denoted as back-translation. 
This way, we obtain a synthetic parallel simplified-ordinary sentences. Both the synthetic sentences and the available parallel data are used as training data for the original NMT system.", "id": 413, "question": "what are the sizes of both datasets?", "title": "Improving Neural Text Simplification Model with Simplified Corpora"}, {"answers": ["Frequent use of direct animal name calling, using simile and metaphors, through indirect speech like sarcasm, wishing evil to others, name alteration, societal stratification, immoral behavior and sexually related uses.", ""], "context": "Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of application such as: quantifying polarization BIBREF0, BIBREF1, trolls and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3; and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged 10,000 Arabic tweet dataset for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language is language and culture specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language. Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM).", "id": 414, "question": "What are the distinctive characteristics of how Arabic speakers use offensive language?", "title": "Arabic Offensive Language on Twitter: Analysis and Experiments"}, {"answers": [""], "context": "Many recent papers have focused on the detection of offensive language, including hate speech BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. Offensive language can be categorized as: Vulgar, which include explicit and rude sexual references, Pornographic, and Hateful, which includes offensive remarks concerning people\u2019s race, religion, country, etc. BIBREF14. Prior works have concentrated on building annotated corpora and training classification models. 
Concerning corpora, hatespeechdata.com attempts to maintain an updated list of hate speech corpora for multiple languages including Arabic and English. Further, SemEval 2019 ran an evaluation task targeted at detecting offensive language, which focused exclusively on English BIBREF15. As for classification models, most studies used supervised classification at either word level BIBREF10, character sequence level BIBREF11, and word embeddings BIBREF9. The studies used different classification techniques including using Na\u00efve Bayes BIBREF10, SVM BIBREF11, and deep learning BIBREF6, BIBREF7, BIBREF12 classification. The accuracy of the aforementioned system ranged between 76% and 90%. Earlier work looked at the use of sentiment words as features as well as contextual features BIBREF13.", "id": 415, "question": "How did they analyze which topics, dialects and gender are most associated with tweets?", "title": "Arabic Offensive Language on Twitter: Analysis and Experiments"}, {"answers": ["One", "One experienced annotator tagged all tweets"], "context": "Our target was to build a large Arabic offensive language dataset that is representative of their appearance on Twitter and is hopefully not biased to specific dialects, topics, or targets. One of the main challenges is that offensive tweets constitute a very small portion of overall tweets. To quantify their proportion, we took 3 random samples of tweets from different days, with each sample composed of 1,000 tweets, and we found that between 1% and 2% of them were in fact offensive (including pornographic advertisement). This percentage is consistent with previously reported percentages BIBREF19. Thus, annotating random tweets is grossly inefficient. One way to overcome this problem is to use a seed list of offensive words to filter tweets. However, doing so is problematic as it would skew the dataset to particular types of offensive language or to specific dialects. Offensiveness is often dialect and country specific.", "id": 416, "question": "How many annotators tagged each tweet?", "title": "Arabic Offensive Language on Twitter: Analysis and Experiments"}, {"answers": ["", ""], "context": "We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. The annotation adhered to the following guidelines:", "id": 417, "question": "How many tweets are in the dataset?", "title": "Arabic Offensive Language on Twitter: Analysis and Experiments"}, {"answers": ["It does not use a seed list to gather tweets so the dataset does not skew to specific topics, dialect, targets.", ""], "context": "Offensive tweets contain explicit or implicit insults or attacks against other people, or inappropriate language, such as:", "id": 418, "question": "In what way is the offensive dataset not biased by topic, dialect or target?", "title": "Arabic Offensive Language on Twitter: Analysis and Experiments"}, {"answers": [""], "context": "The irony is a kind of figurative language, which is widely used on social media BIBREF0 . 
The irony is defined as a clash between the intended meaning of a sentence and its literal meaning BIBREF1 . As an important aspect of language, irony plays an essential role in sentiment analysis BIBREF2 , BIBREF0 and opinion mining BIBREF3 , BIBREF4 .", "id": 419, "question": "What experiments are conducted?", "title": "A Neural Approach to Irony Generation"}, {"answers": ["", ""], "context": "Style Transfer: As irony is a complicated style and hard to model with some specific style attribute words, we mainly focus on studies without editing style attribute words.", "id": 420, "question": "What is the combination of rewards for reinforcement learning?", "title": "A Neural Approach to Irony Generation"}, {"answers": ["", "ironies are often obscure and hard to understand"], "context": "In this section, we describe how we build our dataset with tweets. First, we crawl over 2M tweets from twitter using GetOldTweets-python. We crawl English tweets from 04/09/2012 to /12/18/2018. We first remove all re-tweets and use langdetect to remove all non-English sentences. Then, we remove hashtags attached at the end of the tweets because they are usually not parts of sentences and will confuse our language model. After that, we utilize Ekphrasis to process tweets. We remove URLs and restore remaining hashtags, elongated words, repeated words, and all-capitalized words. To simplify our dataset, We replace all \u201c INLINEFORM0 money INLINEFORM1 \" and \u201c INLINEFORM2 time INLINEFORM3 \" tokens with \u201c INLINEFORM4 number INLINEFORM5 \" token when using Ekphrasis. And we delete sentences whose lengths are less than 10 or greater than 40. In order to restore abbreviations, we download an abbreviation dictionary from webopedia and restore abbreviations to normal words or phrases according to the dictionary. Finally, we remove sentences which have more than two rare words (appearing less than three times) in order to constrain the size of vocabulary. Finally, we get 662,530 sentences after pre-processing.", "id": 421, "question": "What are the difficulties in modelling the ironic pattern?", "title": "A Neural Approach to Irony Generation"}, {"answers": ["They developed a classifier to find ironic sentences in twitter data", "by crawling"], "context": "Given two non-parallel corpora: non-ironic corpus N={ INLINEFORM0 , INLINEFORM1 , ..., INLINEFORM2 } and ironic corpus I={ INLINEFORM3 , INLINEFORM4 , ..., INLINEFORM5 }, the goal of our irony generation model is to generate an ironic sentence from a non-ironic sentence while preserving the content and sentiment polarity of the source input sentence. We implement an encoder-decoder framework where two encoders are utilized to encode ironic sentences and non-ironic sentences respectively and two decoders are utilized to decode ironic sentences and non-ironic sentences from latent representations respectively. In order to enforce a shared latent space, we share two layers on both the encoder side and the decoder side. Our model architecture is illustrated in Figure FIGREF13 . We denote irony encoder as INLINEFORM6 , irony decoder as INLINEFORM7 and non-irony encoder as INLINEFORM8 , non-irony decoder as INLINEFORM9 . 
Their parameters are INLINEFORM10 , INLINEFORM11 , INLINEFORM12 and INLINEFORM13 .", "id": 422, "question": "How did the authors find ironic data on twitter?", "title": "A Neural Approach to Irony Generation"}, {"answers": ["Irony accuracy is judged only by human ; senriment preservation and content preservation are judged both by human and using automatic metrics (ACC and BLEU).", ""], "context": "In order to build up our language model and preserve the content, we apply the auto-encoder model. To prevent the model from simply copying the input sentence, we randomly add some noises in the input sentence. Specifically, for every word in the input sentence, there is 10% chance that we delete it, 10 % chance that we duplicate it, 10% chance that we swap it with the next word, or it remains unchanged. We first encode the input sentence INLINEFORM0 or INLINEFORM1 with respective encoder INLINEFORM2 or INLINEFORM3 to obtain its latent representation INLINEFORM4 or INLINEFORM5 and reconstruct the input sentence with the latent representation and respective decoder. So we can get the reconstruction loss for auto-encoder INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1 ", "id": 423, "question": "Who judged the irony accuracy, sentiment preservation and content preservation?", "title": "A Neural Approach to Irony Generation"}, {"answers": ["tweets are annotated with only Favor or Against for two targets - Galatasaray and Fenerbah\u00e7e", ""], "context": "Stance detection (also called stance identification or stance classification) is one of the considerably recent research topics in natural language processing (NLP). It is usually defined as a classification problem where for a text and target pair, the stance of the author of the text for that target is expected as a classification output from the set: {Favor, Against, Neither} BIBREF0 .", "id": 424, "question": "How were the tweets annotated?", "title": "Stance Detection in Turkish Tweets"}, {"answers": [""], "context": "We have decided to consider tweets about popular sports clubs as our domain for stance detection. Considerable amounts of tweets are being published for sports-related events at every instant. Hence we have determined our targets as Galatasaray (namely Target-1) and Fenerbah\u00e7e (namely, Target-2) which are two of the most popular football clubs in Turkey. As is the case for the sentiment analysis tools, the outputs of the stance detection systems on a stream of tweets about these clubs can facilitate the use of the opinions of the football followers by these clubs.", "id": 425, "question": "Which SVM approach resulted in the best performance?", "title": "Stance Detection in Turkish Tweets"}, {"answers": ["hashtag features contain whether there is any hashtag in the tweet", ""], "context": "It is emphasized in the related literature that unigram-based methods are reliable for the stance detection task BIBREF2 and similarly unigram-based models have been used as baseline models in studies such as BIBREF0 . In order to be used as a baseline and reference system for further studies on stance detection in Turkish tweets, we have trained two SVM classifiers (one for each target) using unigrams as features. Before the extraction of unigrams, we have employed automated preprocessing to filter out the stopwords in our annotated data set of 700 tweets. 
The stopword list used is the list presented in BIBREF12 which, in turn, is the slightly extended version of the stopword list provided in BIBREF13 .", "id": 426, "question": "What are hashtag features?", "title": "Stance Detection in Turkish Tweets"}, {"answers": ["", ""], "context": "Future work based on the current study includes the following:", "id": 427, "question": "How many tweets did they collect?", "title": "Stance Detection in Turkish Tweets"}, {"answers": ["", ""], "context": "Stance detection is a considerably new research area in natural language processing and is considered within the scope of the well-studied topic of sentiment analysis. It is the detection of stance within text towards a target which may be explicitly specified in the text or not. In this study, we present a stance-annotated tweet data set in Turkish where the targets of the annotated stances are two popular sports clubs in Turkey. The corresponding annotations are made publicly-available for research purposes. To the best of our knowledge, this is the first stance detection data set for the Turkish language and also the first sports-related stance-annotated data set. Also presented in this study are SVM classifiers (one for each target) utilizing unigram and bigram features in addition to using the existence of hashtags as another feature. 10-fold cross validation results of these classifiers are presented which can be used as reference results by prospective systems. Both the annotated data set and the classifiers with evaluations are significant since they are the initial contributions to stance detection problem in Turkish tweets.", "id": 428, "question": "Which sports clubs are the targets?", "title": "Stance Detection in Turkish Tweets"}, {"answers": ["", ""], "context": "The NLP community is revisiting the role of linguistic structure in applications with the advent of contextual word representations (cwrs) derived from pretraining language models on large corpora BIBREF2, BIBREF3, BIBREF4, BIBREF5. Recent work has shown that downstream task performance may benefit from explicitly injecting a syntactic inductive bias into model architectures BIBREF6, even when cwrs are also used BIBREF7. However, high quality linguistic structure annotation at a large scale remains expensive\u2014a trade-off needs to be made between the quality of the annotations and the computational expense of obtaining them. Shallow syntactic structures (BIBREF8; also called chunk sequences) offer a viable middle ground, by providing a flat, non-hierarchical approximation to phrase-syntactic trees (see Fig. FIGREF1 for an example). These structures can be obtained efficiently, and with high accuracy, using sequence labelers. 
In this paper we consider shallow syntax to be a proxy for linguistic structure.", "id": 429, "question": "Does this method help in sentiment classification task improvement?", "title": "Shallow Syntax in Deep Water"}, {"answers": ["", "3"], "context": "We briefly review the shallow syntactic structures used in this work, and then present a model architecture to obtain embeddings from shallow Syntactic Context (mSynC).", "id": 430, "question": "For how many probe tasks the shallow-syntax-aware contextual embedding perform better than ELMo\u2019s embedding?", "title": "Shallow Syntax in Deep Water"}, {"answers": ["CCG Supertagging CCGBank , PTB part-of-speech tagging, EWT part-of-speech tagging,\nChunking, Named Entity Recognition, Semantic Tagging, Grammar Error Detection, Preposition Supersense Role, Preposition Supersense Function, Event Factuality Detection", ""], "context": "Base phrase chunking is a cheap sequence-labeling\u2013based alternative to full syntactic parsing, where the sequence consists of non-overlapping labeled segments (Fig. FIGREF1 includes an example.) Full syntactic trees can be converted into such shallow syntactic chunk sequences using a deterministic procedure BIBREF9. BIBREF12 offered a rule-based transformation deriving non-overlapping chunks from phrase-structure trees as found in the Penn Treebank BIBREF13. The procedure percolates some syntactic phrase nodes from a phrase-syntactic tree to the phrase in the leaves of the tree. All overlapping embedded phrases are then removed, and the remainder of the phrase gets the percolated label\u2014this usually corresponds to the head word of the phrase.", "id": 431, "question": "What are the black-box probes used?", "title": "Shallow Syntax in Deep Water"}, {"answers": ["", ""], "context": "Traditional language models are estimated to maximize the likelihood of each word $x_i$ given the words that precede it, $p(x_i \mid x_{<i})$, where INLINEFORM1 stands for non-location named entities, INLINEFORM2 for a location, INLINEFORM3 for event-related keywords, INLINEFORM4 for a date, and each component in the tuple is represented by component-specific representative words.", "id": 438, "question": "How does this model overcome the assumption that all words in a document are generated from a single event?", "title": "Open Event Extraction from Online Text using a Generative Adversarial Network"}, {"answers": ["", ""], "context": "Over the past two decades, the emergence of social media has enabled the proliferation of traceable human behavior. The content posted by users can reflect who their friends are, what topics they are interested in, or which company they are working for. At the same time, users are listing a number of profile fields to define themselves to others. The utilization of such metadata has proven important in facilitating further developments of applications in advertising BIBREF0 , personalization BIBREF1 , and recommender systems BIBREF2 . However, profile information can be limited, depending on the platform, or it is often deliberately omitted BIBREF3 . To uncloak this information, a number of studies have utilized social media users' footprints to approximate their profiles.", "id": 439, "question": "How many users do they look at?", "title": "Predicting the Industry of Users on Social Media"}, {"answers": ["", ""], "context": "Alongside the wide adoption of social media by the public, researchers have been leveraging the newly available data to create and refine models of users' behavior and profiling. 
There exists a myriad research that analyzes language in order to profile social media users. Some studies sought to characterize users' personality BIBREF9 , BIBREF10 , while others sequenced the expressed emotions BIBREF11 , studied mental disorders BIBREF12 , and the progression of health conditions BIBREF13 . At the same time, a number of researchers sought to predict the social media users' age and/or gender BIBREF14 , BIBREF15 , BIBREF16 , while others targeted and analyzed the ethnicity, nationality, and race of the users BIBREF17 , BIBREF18 , BIBREF19 . One of the profile fields that has drawn a great deal of attention is the location of a user. Among others, Hecht et al. Hecht11 predicted Twitter users' locations using machine learning on nationwide and state levels. Later, Han et al. Han14 identified location indicative words to predict the location of Twitter users down to the city level.", "id": 440, "question": "What do they mean by a person's industry?", "title": "Predicting the Industry of Users on Social Media"}, {"answers": [""], "context": "We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed.", "id": 441, "question": "What model did they use for their system?", "title": "Predicting the Industry of Users on Social Media"}, {"answers": ["", ""], "context": "After collecting our dataset, we split it into three sets: a train set, a development set, and a test set. The sizes of these sets are 17,880, 2,500, and 2,500 users, respectively, with users randomly assigned to these sets. In all the experiments that follow, we evaluate our classifiers by training them on the train set, configure the parameters and measure performance on the development set, and finally report the prediction accuracy and results on the test set. Note that all the experiments are performed at user level, i.e., all the data for one user is compiled into one instance in our data sets.", "id": 442, "question": "What social media platform did they look at?", "title": "Predicting the Industry of Users on Social Media"}, {"answers": ["technology, religion, fashion, publishing, sports or recreation, real estate, agriculture/environment, law, security/military, tourism, construction, museums or libraries, banking/investment banking, automotive", "Technology, Religion, Fashion, Publishing, Sports coach, Real Estate, Law, Environment, Tourism, Construction, Museums, Banking, Security, Automotive."], "context": "In this section, we seek the effectiveness of using solely textual features obtained from the users' postings to predict their industry.", "id": 443, "question": "What are the industry classes defined in this paper?", "title": "Predicting the Industry of Users on Social Media"}, {"answers": ["", ""], "context": "There have been many advances in machine learning methods which help machines understand human behavior better than ever. One of the most important aspects of human behavior is emotion. If machines could detect human emotional expressions, it could be used to improve on verity of applications such as marketing BIBREF0 , human-computer interactions BIBREF1 , political science BIBREF2 etc.", "id": 444, "question": "Do they report results only on English data?", "title": "Emotion Detection in Text: Focusing on Latent Representation"}, {"answers": ["They use the embedding layer with a size 35 and embedding dimension of 300. 
They use a dense layer with 70 units and a dropout layer with a rate of 50%."], "context": "A lot of work has been done on detecting emotion in speech or visual data BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . But detecting emotions in textual data is a relatively new area that demands more research. There have been many attempts to detect emotions in text using conventional machine learning techniques and handcrafted features in which given the dataset, the authors try to find the best feature set that represents the most and the best information about the text, then passing the converted text as feature vectors to the classifier for training BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . During the process of creating the feature set, in these methods, some of the most important information in the text such as the sequential nature of the data, and the context will be lost.", "id": 445, "question": "What are the hyperparameters of the bi-GRU?", "title": "Emotion Detection in Text: Focusing on Latent Representation"}, {"answers": ["", ""], "context": "We compare our approach to two other, the first one uses almost the same tweet data as we use for training, and the second one is the CrowdFlower dataset annotated for emotions.", "id": 446, "question": "What baseline is used?", "title": "Emotion Detection in Text: Focusing on Latent Representation"}, {"answers": ["", ""], "context": "There are not many free datasets available for emotion classification. Most datasets are subject-specific (i.e. news headlines, fairy tails, etc.) and not big enough to train deep neural networks. Here we use the tweet dataset created by Wang et al. As mentioned in the previous section, they have collected over 2 million tweets by using hashtags for labeling their data. They created a list of words associated with 7 emotions (six emotions from BIBREF34 love, joy, surprise, anger, sadness fear plus thankfulness (See Table TABREF3 ), and used the list as their guide to label the sampled tweets with acceptable quality.", "id": 447, "question": "What data is used in experiments?", "title": "Emotion Detection in Text: Focusing on Latent Representation"}, {"answers": ["", ""], "context": "In this section, we introduce the deep neural network architecture that we used to classify emotions in the tweets dataset. Emotional expressions are more complex and context-dependent even compared to other forms of expressions based mostly on the complexity and ambiguity of human emotions and emotional expressions and the huge impact of context on the understanding of the expressed emotion. These complexities are what led us to believe lexicon-based features like is normally used in conventional machine learning approaches are unable to capture the intricacy of emotional expressions.", "id": 448, "question": "What meaningful information does the GRU model capture, which traditional ML models do not?", "title": "Emotion Detection in Text: Focusing on Latent Representation"}, {"answers": ["", ""], "context": "Accurate language identification (LID) is the first step in many natural language processing and machine comprehension pipelines. 
If the language of a piece of text is known then the appropriate downstream models like parts of speech taggers and language models can be applied as required.", "id": 449, "question": "What is the approach of previous work?", "title": "Short Text Language Identification for Under Resourced Languages"}, {"answers": ["", ""], "context": "The focus of this section is on recently published datasets and LID research applicable to the South African context. An in depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0.", "id": 450, "question": "Is the lexicon the same for all languages?", "title": "Short Text Language Identification for Under Resourced Languages"}, {"answers": ["", ""], "context": "The proposed LID algorithm builds on the work in BIBREF8 and BIBREF26. We apply a naive Bayesian classifier with character (2, 4 & 6)-grams, word unigram and word bigram features with a hierarchical lexicon based classifier.", "id": 451, "question": "How do they obtain the lexicon?", "title": "Short Text Language Identification for Under Resourced Languages"}, {"answers": ["", ""], "context": "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8.", "id": 452, "question": "What evaluation metric is used?", "title": "Short Text Language Identification for Under Resourced Languages"}, {"answers": ["", ""], "context": "LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. The proposed algorithm was evaluated on three existing datasets and compared to the implementations of three public LID implementations as well as to reported results of four other algorithms. It performed well relative to the other methods beating their results. However, the performance is dependent on the support of the lexicon.", "id": 453, "question": "Which languages are similar to each other?", "title": "Short Text Language Identification for Under Resourced Languages"}, {"answers": ["", "labelled features, which are words whose presence strongly indicates a specific class or topic"], "context": "We posses a wealth of prior knowledge about many natural language processing tasks. For example, in text categorization, we know that words such as NBA, player, and basketball are strong indicators of the sports category BIBREF0 , and words like terrible, boring, and messing indicate a negative polarity while words like perfect, exciting, and moving suggest a positive polarity in sentiment classification.", "id": 454, "question": "What background knowledge do they leverage?", "title": "Robustly Leveraging Prior Knowledge in Text Classification"}, {"answers": ["", ""], "context": "We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. 
For example, words like amazing, exciting can be labeled features for class positive in sentiment classification.", "id": 455, "question": "What are the three regularization terms?", "title": "Robustly Leveraging Prior Knowledge in Text Classification"}, {"answers": ["text classification for themes including sentiment, web-page, science, medical and healthcare"], "context": "Generalized expectation (GE) criteria BIBREF7 provides us a natural way to directly constrain the model in the preferred direction. For example, when we know the proportion of each class of the dataset in a classification task, we can guide the model to predict out a pre-specified class distribution.", "id": 456, "question": "What NLP tasks do they consider?", "title": "Robustly Leveraging Prior Knowledge in Text Classification"}, {"answers": ["ability to accurately classify texts even when the amount of prior knowledge for different classes is unbalanced, and when the class distribution of the dataset is unbalanced", "Low sensitivity to bias in prior knowledge"], "context": "Druck et al. ge-fl proposed GE-FL to learn from labeled features using generalized expectation criteria. When given a set of labeled features $K$ , the reference distribution over classes of these features is denoted by $\\hat{p}(y| x_k), k \\in K$ . GE-FL introduces the divergence between this reference distribution and the model predicted distribution $p_\\theta (y | x_k)$ , as a term of the objective function: ", "id": 457, "question": "How do they define robustness of a model?", "title": "Robustly Leveraging Prior Knowledge in Text Classification"}, {"answers": ["Automatic", ""], "context": "Several learner corpora have been compiled for English, such as the International Corpus of Learner English BIBREF0 . The importance of such resources has been increasingly recognized across a variety of research areas, from Second Language Acquisition to Natural Language Processing. Recently, we have seen substantial growth in this area and new corpora for languages other than English have appeared. For Romance languages, there are a several corpora and resources for French, Spanish BIBREF1 , and Italian BIBREF2 .", "id": 458, "question": "Are the annotations automatic or manually created?", "title": "A Portuguese Native Language Identification Dataset"}, {"answers": [""], "context": "NLI has attracted a lot of attention in recent years. Due to the availability of suitable data, as discussed earlier, this attention has been particularly focused on English. The most notable examples are the two editions of the NLI shared task organized in 2013 BIBREF6 and 2017 BIBREF7 .", "id": 459, "question": "Do the errors of the model reflect linguistic similarity between different L1s?", "title": "A Portuguese Native Language Identification Dataset"}, {"answers": ["", ""], "context": "The data was collected from three different learner corpora of Portuguese: (i) COPLE2; (ii) Leiria corpus, and (iii) PEAPL2 as presented in Table 1 .", "id": 460, "question": "Is the dataset balanced between speakers of different L1s?", "title": "A Portuguese Native Language Identification Dataset"}, {"answers": ["204 tokens", ""], "context": "As demonstrated earlier, these learner corpora use different formats. COPLE2 is mainly codified in XML, although it gives the possibility of getting the student version of the essay in TXT format. PEAPL2 and Leiria corpus are compiled in TXT format. 
In both corpora, the TXT files contain the student version with special annotations from the transcription. For the NLI experiments we were interested in a clean txt version of the students' text, together with versions annotated at different linguistics levels. Therefore, as a first step, we removed all the annotations corresponding to the transcription process in PEAPL2 and Leiria files. As a second step, we proceeded to the linguistic annotation of the texts using different NLP tools.", "id": 461, "question": "How long are the essays on average?", "title": "A Portuguese Native Language Identification Dataset"}, {"answers": ["", ""], "context": "Knowledge graphs have been proved to benefit many artificial intelligence applications, such as relation extraction, question answering and so on. A knowledge graph consists of multi-relational data, having entities as nodes and relations as edges. An instance of fact is represented as a triplet (Head Entity, Relation, Tail Entity), where the Relation indicates a relationship between these two entities. In the past decades, great progress has been made in building large scale knowledge graphs, such as WordNet BIBREF0 , Freebase BIBREF1 . However, most of them have been built either collaboratively or semi-automatically and as a result, they often suffer from incompleteness and sparseness.", "id": 462, "question": "How large are the textual descriptions of entities?", "title": "Knowledge Graph Representation with Jointly Structural and Textual Encoding"}, {"answers": ["NBOW, LSTM, attentive LSTM", ""], "context": "In this section, we briefly introduce the background knowledge about the knowledge graph embedding.", "id": 463, "question": "What neural models are used to encode the text?", "title": "Knowledge Graph Representation with Jointly Structural and Textual Encoding"}, {"answers": ["", ""], "context": "Given an entity in most of the existing knowledge bases, there is always an available corresponding text description with valuable semantic information for this entity, which can provide beneficial supplement for entity representation.", "id": 464, "question": "What baselines are used for comparison?", "title": "Knowledge Graph Representation with Jointly Structural and Textual Encoding"}, {"answers": [""], "context": "A simple and intuitive method is the neural bag-of-words (NBOW) model, in which the representation of text can be generated by summing up its constituent word representations.", "id": 465, "question": "What datasets are used to evaluate this paper?", "title": "Knowledge Graph Representation with Jointly Structural and Textual Encoding"}, {"answers": ["", ""], "context": "In recent years, word embeddings have been successfully used in natural language processing (NLP), the most commonly known models are Word2Vec BIBREF0 and GloveBIBREF1. The reasons for such success are manifold. One key attribute of embedding methods is that word embedding models take into account context information of words, thereby allowing a more compact and manageable representation for wordsBIBREF2, BIBREF3. 
The embeddings are widely applied in many downstream NLP tasks such as neural machine translation, dialogue systems or text summarisation BIBREF4, BIBREF5, BIBREF6, as well as in language modelling for speech recognition BIBREF7.", "id": 466, "question": "Which approach out of two proposed in the paper performed better in experiments?", "title": "Contextual Joint Factor Acoustic Embeddings"}, {"answers": ["", ""], "context": "", "id": 467, "question": "What classification baselines are used for comparison?", "title": "Contextual Joint Factor Acoustic Embeddings"}, {"answers": ["Once split into 8 subsets (A-H), the test set used are blocks D+H and blocks F+H", ""], "context": "Most interest in acoustic embeddings can be observed on acoustic word embeddings, i.e. projections that map word acoustics into a fixed size vector space. Objective functions are chosen to project different word realisations to close proximity in the embedding space. Different approaches were used in the literature - for both supervised and unsupervised learning. For the supervised case, BIBREF9 introduced a convolutional neural network (CNN) based acoustic word embedding system for speech recognition, where words that sound alike are nearby in Euclidean distance. In their work, a CNN is used to predict a word from the corresponding acoustic signal; the output of the bottleneck layer before the final softmax layer is taken to be the embedding for the corresponding word. Further work used different network architectures to obtain acoustic word embeddings: BIBREF10 introduces a recurrent neural network (RNN) based approach instead.", "id": 468, "question": "What TIMIT datasets are used for testing?", "title": "Contextual Joint Factor Acoustic Embeddings"}, {"answers": [""], "context": "Context information plays a fundamental role in speech processing. Phonemes could be influenced by surrounding frames through coarticulation BIBREF19 - an effect caused by speed limitations and transitions in the movement of articulators. Normally, directly neighbouring phonemes have an important impact on the sound realisation. Inversely, the surrounding phonemes also provide strong constraints on the phoneme that can be chosen at any given point, subject to lexical and language constraints. This effect is for example exploited in phoneme recognition, by use of phoneme $n$-gram models BIBREF20. Equivalently, inter-word dependency - derived from linguistic constraints - can be exploited, as is the case in computing word embeddings with the aforementioned word2vec BIBREF0 method. The situation differs for the global latent variables, such as speaker properties or acoustic environment information. Speaker properties remain constant - and environments can also be assumed stationary over longer periods of time. Hence these variables are common among neighbouring frames and windows. 
Modelling context information is helpful for identifying such information BIBREF21.", "id": 469, "question": "How does this approach compares to the state-of-the-art results on these tasks?", "title": "Contextual Joint Factor Acoustic Embeddings"}, {"answers": ["F1 score of 92.19 on homographic pun detection, 80.19 on homographic pun location, 89.76 on heterographic pun detection.", "for the homographic dataset F1 score of 92.19 and 80.19 on detection and location and for the heterographic dataset F1 score of 89.76 on detection"], "context": "There exists a class of language construction known as pun in natural language texts and utterances, where a certain word or other lexical items are used to exploit two or more separate meanings. It has been shown that understanding of puns is an important research question with various real-world applications, such as human-computer interaction BIBREF0 , BIBREF1 and machine translation BIBREF2 . Recently, many researchers show their interests in studying puns, like detecting pun sentences BIBREF3 , locating puns in the text BIBREF4 , interpreting pun sentences BIBREF5 and generating sentences containing puns BIBREF6 , BIBREF7 , BIBREF8 . A pun is a wordplay in which a certain word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect. Puns can be generally categorized into two groups, namely heterographic puns (where the pun and its latent target are phonologically similar) and homographic puns (where the two meanings of the pun reflect its two distinct senses) BIBREF9 . Consider the following two examples:", "id": 470, "question": "What state-of-the-art results are achieved?", "title": "Joint Detection and Location of English Puns"}, {"answers": ["They compare with the following models: by Pedersen (2017), by Pramanick and Das (2017), by Mikhalkova and Karyakin (2017), by Vadehra (2017), Indurthi and Oota (2017), by Vechtomova (2017), by (Cai et al., 2018), and CRF."], "context": "We first design a simple tagging scheme consisting of two tags { INLINEFORM0 }:", "id": 471, "question": "What baselines do they compare with?", "title": "Joint Detection and Location of English Puns"}, {"answers": ["", "A homographic and heterographic benchmark datasets by BIBREF9."], "context": "Neural models have shown their effectiveness on sequence labeling tasks BIBREF13 , BIBREF14 , BIBREF15 . In this work, we adopt the bidirectional Long Short Term Memory (BiLSTM) BIBREF16 networks on top of the Conditional Random Fields BIBREF17 (CRF) architecture to make labeling decisions, which is one of the classical models for sequence labeling. Our model architecture is illustrated in Figure FIGREF8 with a running example. Given a context/sentence INLINEFORM0 where INLINEFORM1 is the length of the context, we generate the corresponding tag sequence INLINEFORM2 based on our designed tagging schemes and the original annotations for pun detection and location provided by the corpora. Our model is then trained on pairs of INLINEFORM3 .", "id": 472, "question": "What datasets are used in evaluation?", "title": "Joint Detection and Location of English Puns"}, {"answers": ["A new tagging scheme that tags the words before and after the pun as well as the pun words.", ""], "context": "We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. 
The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for either dataset. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulate the predictions for all ten folds and calculate the scores in the end.", "id": 473, "question": "What is the tagging scheme employed?", "title": "Joint Detection and Location of English Puns"}, {"answers": ["Using the OpenIE toolbox and applying heuristic rules to select the most relevant relation.", ""], "context": "Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge bases BIBREF1 and images BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligent tutoring systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer.", "id": 474, "question": "How do they extract the \"structured answer-relevant relation\"?", "title": "Improving Question Generation With to the Point Context"}, {"answers": ["Metrics show better results on all metrics compared to baseline except Bleu1 on Zhou split (worse by 0.11 compared to baseline). Bleu1 score on DuSplit is 45.66 compared to best baseline 43.47, other metrics on average by 1"], "context": "In this section, we first introduce the task definition and our protocol to extract structured answer-relevant relations. Then we formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism.", "id": 475, "question": "How big are significant improvements?", "title": "Improving Question Generation With to the Point Context"}, {"answers": ["", ""], "context": "We formalize our task as an answer-aware Question Generation (QG) problem BIBREF8, which assumes answer phrases are given before generating questions. Moreover, answer phrases are shown as text fragments in passages. Formally, given the sentence $S$, the answer $A$, and the answer-relevant relation $M$, the task of QG aims to find the best question $\\overline{Q}$ such that,", "id": 476, "question": "What metrics do they use?", "title": "Improving Question Generation With to the Point Context"}, {"answers": ["", ""], "context": "We utilize the off-the-shelf OpenIE toolbox to derive structured answer-relevant relations from sentences as to the point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations at different granularities from one sentence. We join all arguments in the extracted n-ary relation into a sequence as our to the point context. Figure FIGREF5 shows n-ary relations extracted from OpenIE. As we can see, OpenIE extracts multiple relations for complex sentences. 
Here we select the most informative relation according to three criteria in descending order of importance: (1) having the maximal number of overlapped tokens between the answer and the relation; (2) being assigned the highest confidence score by OpenIE; (3) containing the maximum number of non-stop words. As shown in Figure FIGREF5, our criteria can select answer-relevant relations (waved in Figure FIGREF5), which is especially useful for sentences with extraneous information. In rare cases where OpenIE cannot extract any relation, we treat the sentence itself as the to the point context.", "id": 477, "question": "On what datasets are experiments performed?", "title": "Improving Question Generation With to the Point Context"}, {"answers": [""], "context": "BioASQ is a biomedical document classification, document retrieval, and question answering competition, currently in its seventh year. We provide an overview of our submissions to the semantic question answering task (7b, Phase B) of BioASQ 7 (except for the 'ideal answer' test, in which we did not participate this year). In this task, systems are provided with biomedical questions and are required to submit ideal and exact answers to those questions. We have used a BioBERT BIBREF0 based system, see also Bidirectional Encoder Representations from Transformers (BERT) BIBREF1, and we fine tuned it for the biomedical question answering task. Our system scored near the top for factoid questions for all the batches of the challenge. More specifically, in the third test batch set, our system achieved the highest \u2018MRR\u2019 score for the Factoid Question Answering task. Also, for the List-type question answering task, our system achieved the highest recall score in the fourth test batch set. Along with our detailed approach, we present the results for our submissions and also highlight identified downsides of our current approach and ways to improve them in our future experiments. In the last test batch results, we placed 4th for List-type questions and 3rd for Factoid-type questions.", "id": 478, "question": "What was the baseline model?", "title": "UNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B, Phase-B"}, {"answers": ["BioASQ dataset", "A dataset provided by BioASQ consisting of questions, gold standard documents, snippets, concepts and ideal and exact answers."], "context": "Sharma et al. BIBREF3 describe a system with a two-stage process for factoid and list type question answering. Their system extracts relevant entities and then runs a supervised classifier to rank the entities. Wiese et al. BIBREF4 propose a neural network based model for the Factoid and List-type question answering task. The model is based on Fast QA and predicts the answer span in the passage for a given question. The model is trained on the SQuAD data set and fine tuned on the BioASQ data. Dimitriadis et al. BIBREF5 proposed a two-stage process for the Factoid question answering task. Their system uses general purpose tools such as Metamap and BeCas to identify candidate sentences. These candidate sentences are represented in the form of features, and are then ranked by a binary classifier. The classifier is trained on candidate sentences extracted from relevant questions, snippets and correct answers from the BioASQ challenge. For the factoid question answering task, the highest \u2018MRR\u2019 achieved in the 6th edition of the BioASQ competition is \u20180.4325\u2019. 
Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved an \u2018MRR\u2019 score of \u20180.6103\u2019 in one of the test batches for the Factoid Question Answering task.", "id": 479, "question": "What dataset did they use?", "title": "UNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B, Phase-B"}, {"answers": ["", ""], "context": "BERT, which stands for \"Bidirectional Encoder Representations from Transformers\" BIBREF1, is a contextual word embedding model. Given a sentence as an input, contextual embeddings for the words are returned. The BERT model was designed so it can be fine tuned for 11 different tasks BIBREF1, including question answering tasks. For a question answering task, a question and a paragraph (context) are given as input. A BERT standard is that the question text and paragraph text are separated by a separator [Sep]. BERT question-answering fine tuning involves adding a softmax layer. The softmax layer takes contextual word embeddings from BERT as input and learns to identify the answer span present in the paragraph (context). This process is represented in Figure FIGREF4.", "id": 480, "question": "What was their highest recall score?", "title": "UNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B, Phase-B"}, {"answers": ["0.5115", ""], "context": "A \u2018word embedding\u2019 is a learned representation. It is represented in the form of a vector where words that have the same meaning have a similar vector representation. Consider a word embedding model 'word2vec' BIBREF6 trained on a corpus. Word embeddings generated from the model are context independent; that is, word embeddings are returned regardless of where the words appear in a sentence and regardless of e.g. the sentiment of the sentence. However, contextual word embedding models like BERT also take the context of the word into consideration.", "id": 481, "question": "What was their highest MRR score?", "title": "UNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B, Phase-B"}, {"answers": [""], "context": "Neural machine translation (NMT) has achieved state-of-the-art results on a large number of language pairs with varying structural differences, such as English-French BIBREF0, BIBREF1 and Chinese-English BIBREF2. However, so far not much is known about how and why NMT works, which poses great challenges for debugging NMT models and designing optimal architectures.", "id": 482, "question": "Does their model exhibit performance drops when incorporating word importance?", "title": "Towards Understanding Neural Machine Translation with Word Importance"}, {"answers": ["They measured the under-translated words with low word importance scores as calculated by the Attribution method.", ""], "context": "Our main contributions are:", "id": 483, "question": "How do they measure which words are under-translated by NMT models?", "title": "Towards Understanding Neural Machine Translation with Word Importance"}, {"answers": ["", "They compute the gradient of the output at each time step with respect to the input words to decide the importance."], "context": "Interpretability of Seq2Seq models has recently been explored mainly from two perspectives: interpreting internal representations and understanding input-output behaviors. Most of the existing work focuses on the former thread, which analyzes the linguistic information embedded in the learned representations BIBREF3, BIBREF4, BIBREF10 or the hidden units BIBREF6, BIBREF5. 
Several researchers turn to exposing systematic differences between human and NMT translations BIBREF11, BIBREF12, indicating the linguistic properties worth investigating. However, the learned representations may depend on the model implementation, which potentially limits the applicability of these methods to a broader range of model architectures. Accordingly, we focus on understanding the input-output behaviors, and validate on different architectures to demonstrate the universality of our findings.", "id": 484, "question": "How do their models decide how much importance to give to the output words?", "title": "Towards Understanding Neural Machine Translation with Word Importance"}, {"answers": ["", ""], "context": "The intermediate gradients have proven to be useful in interpreting deep learning models, such as NLP models BIBREF14, BIBREF15 and computer vision models BIBREF16, BIBREF9. Among all gradient-based approaches, the integrated gradients method BIBREF9 is appealing since it does not need any instrumentation of the architecture and can be computed easily by calling gradient operations. In this work, we employ the IG method to interpret NMT models and reveal several interesting findings, which can potentially help debug NMT models and design better architectures for specific language pairs.", "id": 485, "question": "Which model architectures do they test their word importance approach on?", "title": "Towards Understanding Neural Machine Translation with Word Importance"}, {"answers": ["", ""], "context": "Progress in AI has been driven by, among other things, the development of challenging large-scale benchmarks like ImageNet BIBREF0 in computer vision, and SNLI BIBREF1, SQuAD BIBREF2, and others in natural language processing (NLP). Recently, for natural language understanding (NLU) in particular, the focus has shifted to combined benchmarks like SentEval BIBREF3 and GLUE BIBREF4, which track model performance on multiple tasks and provide a unified platform for analysis.", "id": 486, "question": "Do they compare human-level performance to model performance for their dataset?", "title": "Adversarial NLI: A New Benchmark for Natural Language Understanding"}, {"answers": [""], "context": "The primary aim of this work is to create a new large-scale NLI benchmark on which current state-of-the-art models fail. This constitutes a new target for the field to work towards, and can elucidate model capabilities and limitations. As noted, however, static benchmarks do not last very long these days. If continuously deployed, the data collection procedure we introduce here can pose a dynamic challenge that allows for never-ending learning.", "id": 487, "question": "What are the weaknesses found by non-expert annotators of current state-of-the-art NLI models?", "title": "Adversarial NLI: A New Benchmark for Natural Language Understanding"}, {"answers": ["", ""], "context": "To paraphrase the great bard BIBREF21, there is something rotten in the state of the art. We propose Human-And-Model-in-the-Loop Entailment Training (HAMLET), a training procedure to automatically mitigate problems with current dataset collection procedures (see Figure FIGREF1).", "id": 488, "question": "What data sources do they use for creating their dataset?", "title": "Adversarial NLI: A New Benchmark for Natural Language Understanding"}, {"answers": ["", ""], "context": "We employed crowdsourced workers from Mechanical Turk with qualifications. We collected hypotheses via the ParlAI framework. 
Annotators are presented with a context and a target label\u2014either `entailment', `contradiction', or `neutral'\u2014and asked to write a hypothesis that corresponds to the label. We phrase the label classes as \u201cdefinitely correct\u201d, \u201cdefinitely incorrect\u201d, or \u201cneither definitely correct nor definitely incorrect\u201d given the context, to make the task easier to grasp. Submitted hypotheses are given to the model to make a prediction for the context-hypothesis pair. The probability of each label is returned to the worker as feedback. If the model predicts the label incorrectly, the job is complete. If not, the worker continues to write hypotheses for the given (context, target-label) pair until the model predicts the label incorrectly or the number of tries exceeds a threshold (5 tries in the first round, 10 tries thereafter). To encourage workers, payments increased as rounds became harder. For hypotheses that the model predicted the incorrect label for, but were verified by other humans, we paid an additional bonus on top of the standard rate.", "id": 489, "question": "Do they use active learning to create their dataset?", "title": "Adversarial NLI: A New Benchmark for Natural Language Understanding"}, {"answers": ["", ""], "context": "A hashtag is a keyphrase represented as a sequence of alphanumeric characters plus underscore, preceded by the # symbol. Hashtags play a central role in online communication by providing a tool to categorize the millions of posts generated daily on Twitter, Instagram, etc. They are useful in search, tracking content about a certain topic BIBREF0 , BIBREF1 , or discovering emerging trends BIBREF2 .", "id": 490, "question": "Do the hashtag and SemEval datasets contain only English data?", "title": "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"}, {"answers": ["", ""], "context": "Current approaches for hashtag segmentation can be broadly divided into three categories: (a) gazeteer and rule based BIBREF11 , BIBREF12 , BIBREF13 , (b) word boundary detection BIBREF14 , BIBREF15 , and (c) ranking with language model and other features BIBREF16 , BIBREF10 , BIBREF0 , BIBREF17 , BIBREF18 . Hashtag segmentation approaches draw upon work on compound splitting for languages such as German or Finnish BIBREF19 and word segmentation BIBREF20 for languages with no spaces between words such as Chinese BIBREF21 , BIBREF22 . Similar to our work, BIBREF10 BansalBV15 extract an initial set of candidate segmentations using a sliding window, then rerank them using a linear regression model trained on lexical, bigram and other corpus-based features. The current state-of-the-art approach BIBREF14 , BIBREF15 uses maximum entropy and CRF models with a combination of language model and hand-crafted features to predict if each character in the hashtag is the beginning of a new word.", "id": 491, "question": "What current state of the art method was used for comparison?", "title": "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"}, {"answers": [""], "context": "We propose a multi-task pairwise neural ranking approach to better incorporate and distinguish the relative order between the candidate segmentations of a given hashtag. Our model adapts to address single- and multi-token hashtags differently via a multi-task learning strategy without requiring additional annotations. 
In this section, we describe the task setup and three variants of pairwise neural ranking models (Figure FIGREF11 ).", "id": 492, "question": "What set of approaches to hashtag segmentation are proposed?", "title": "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"}, {"answers": ["", ""], "context": "The goal of hashtag segmentation is to divide a given hashtag INLINEFORM0 into a sequence of meaningful words INLINEFORM1 . For a hashtag of INLINEFORM2 characters, there are a total of INLINEFORM3 possible segmentations but only one, or occasionally two, of them ( INLINEFORM4 ) are considered correct (Table TABREF9 ).", "id": 493, "question": "How is the dataset of hashtags sourced?", "title": "Multi-task Pairwise Neural Ranking for Hashtag Segmentation"}, {"answers": [""], "context": "Spoken conversations still remain the most natural and effortless means of human communication. Thus a lot of valuable information is conveyed and exchanged in such an unstructured form. In telehealth settings, nurses might call discharged patients who have returned home to continue to monitor their health status. Human language technology that can efficiently and effectively extract key information from such conversations is clinically useful, as it can help streamline workflow processes and digitally document patient medical information to increase staff productivity. In this work, we design and prototype a dialogue comprehension system in the question-answering manner, which is able to comprehend spoken conversations between nurses and patients to extract clinical information.", "id": 494, "question": "How big is their created dataset?", "title": "Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring"}, {"answers": ["A sample from nurse-initiated telephone conversations for congestive heart failure patients undergoing telepmonitoring, post-discharge from the Health Management Unit at Changi General Hospital", ""], "context": "Machine comprehension of written passages has made tremendous progress recently. Large quantities of supervised training data for reading comprehension (e.g. SQuAD BIBREF0 ), the wide adoption and intense experimentation of neural modeling BIBREF1 , BIBREF2 , and the advancements in vector representations of word embeddings BIBREF3 , BIBREF4 all contribute significantly to the achievements obtained so far. The first factor, the availability of large scale datasets, empowers the latter two factors. To date, there is still very limited well-annotated large-scale data suitable for modeling human-human spoken dialogues. Therefore, it is not straightforward to directly port over the recent endeavors in reading comprehension to dialogue comprehension tasks.", "id": 495, "question": "Which data do they use as a starting point for the dialogue dataset?", "title": "Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring"}, {"answers": ["", ""], "context": "Human-human spoken conversations are a dynamic and interactive flow of information exchange. 
While developing technology to comprehend such spoken conversations presents similar technical challenges as machine comprehension of written passages BIBREF6 , the challenges are further complicated by the interactive nature of human-human spoken conversations:", "id": 496, "question": "What labels do they create on their dataset?", "title": "Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring"}, {"answers": ["1264 instances from simulated data, 1280 instances by adding two out-of-distribution symptoms and 944 instances manually delineated from the symptom checking portions of real-world dialogues", ""], "context": "Figure FIGREF5 (b) illustrates the proposed dialogue comprehension task using a question answering (QA) model. The inputs are a multi-turn symptom checking dialogue INLINEFORM0 and a query INLINEFORM1 specifying a symptom with one of its attributes; the output is the extracted answer INLINEFORM2 from the given dialogue. A training or test sample is defined as INLINEFORM3 . Five attributes, specifying certain details of clinical significance, are defined to characterize the answer types of INLINEFORM4 : (1) the time the patient has been experiencing the symptom, (2) activities that trigger the symptom (to occur or worsen), (3) the extent of seriousness, (4) the frequency of occurrence of the symptom, and (5) the location of the symptom. Each symptom/attribute can take on different linguistic expressions, defined as entities. Note that if the queried symptom or attribute is not mentioned in the dialogue, the groundtruth output is \u201cNo Answer\u201d, as in BIBREF6 .", "id": 497, "question": "How do they select instances to their hold-out test set?", "title": "Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring"}, {"answers": ["", ""], "context": "Deep neural networks (DNNs), in particular convolutional and recurrent neural networks, with huge architectures have been proven successful in a wide range of tasks including audio processing such as speech to text [1 - 4], emotion recognition [5 - 8], speech/non-speech (examples of non-speech include noise, music, etc.) classification [9 - 12], etc.", "id": 498, "question": "Which models/frameworks do they compare to?", "title": "A Novel Approach for Effective Learning in Low Resourced Scenarios"}, {"answers": ["", ""], "context": "The s2sL approach proposed to address the low data resource problem is explained in this Section. In this work, we use an MLP (modified to handle our data representation) as the base classifier. Here, we explain the s2sL approach by considering a two-class classification task.", "id": 499, "question": "Which classification algorithm do they use for s2sL?", "title": "A Novel Approach for Effective Learning in Low Resourced Scenarios"}, {"answers": ["", ""], "context": "Consider a two-class classification task with INLINEFORM0 denoting the set of class labels, and let INLINEFORM1 and INLINEFORM2 be the number of samples corresponding to INLINEFORM3 and INLINEFORM4 , respectively. In general, to train a classifier, the samples in the train set are provided as input-output pairs as follows. DISPLAYFORM0 
Generally, MLPs are trained using the data format given by eq. INLINEFORM0 . But to train the MLP on our s2s based data representation (as in eq. INLINEFORM1 ), the following modifications are made to the MLP architecture (refer to Figure FIGREF4 ).", "id": 501, "question": "Do they use pretrained models?", "title": "A Novel Approach for Effective Learning in Low Resourced Scenarios"}, {"answers": ["", ""], "context": "Machine Reading Comprehension (MRC), as the name suggests, requires a machine to read a passage and answer its relevant questions. Since the answer to each question is supposed to stem from the corresponding passage, a common MRC solution is to develop a neural-network-based MRC model that predicts an answer span (i.e. the answer start position and the answer end position) from the passage of each given passage-question pair. To facilitate the explorations and innovations in this area, many MRC datasets have been established, such as SQuAD BIBREF0 , MS MARCO BIBREF1 , and TriviaQA BIBREF2 . Consequently, many pioneering MRC models have been proposed, such as BiDAF BIBREF3 , R-NET BIBREF4 , and QANet BIBREF5 . According to the leader board of SQuAD, the state-of-the-art MRC models have achieved the same performance as human beings. However, does this imply that they have possessed the same reading comprehension ability as human beings?", "id": 502, "question": "Do they report results only on English datasets?", "title": "Explicit Utilization of General Knowledge in Machine Reading Comprehension"}, {"answers": ["By evaluating their model on adversarial sets containing misleading sentences", ""], "context": "In this section, we elaborate a WordNet-based data enrichment method, which is aimed at extracting inter-word semantic connections from each passage-question pair in our MRC dataset. The extraction is performed in a controllable manner, and the extracted results are provided as general knowledge to our MRC model.", "id": 503, "question": "How do the authors examine whether a model is robust to noise or not?", "title": "Explicit Utilization of General Knowledge in Machine Reading Comprehension"}, {"answers": [""], "context": "WordNet is a lexical database of English, where words are organized into synsets according to their senses. A synset is a set of words expressing the same sense so that a word having multiple senses belongs to multiple synsets, with each synset corresponding to a sense. Synsets are further related to each other through semantic relations. According to the WordNet interface provided by NLTK BIBREF12 , there are totally sixteen types of semantic relations (e.g. hypernyms, hyponyms, holonyms, meronyms, attributes, etc.). Based on synset and semantic relation, we define a new concept: semantic relation chain. A semantic relation chain is a concatenated sequence of semantic relations, which links a synset to another synset. For example, the synset \u201ckeratin.n.01\u201d is related to the synset \u201cfeather.n.01\u201d through the semantic relation \u201csubstance holonym\u201d, the synset \u201cfeather.n.01\u201d is related to the synset \u201cbird.n.01\u201d through the semantic relation \u201cpart holonym\u201d, and the synset \u201cbird.n.01\u201d is related to the synset \u201cparrot.n.01\u201d through the semantic relation \u201chyponym\u201d, thus \u201csubstance holonym INLINEFORM0 part holonym INLINEFORM1 hyponym\u201d is a semantic relation chain, which links the synset \u201ckeratin.n.01\u201d to the synset \u201cparrot.n.01\u201d. 
We name each semantic relation in a semantic relation chain as a hop; therefore, the above semantic relation chain is a 3-hop chain. Note that each single semantic relation is equivalent to a 1-hop chain.", "id": 504, "question": "What type of model is KAR?", "title": "Explicit Utilization of General Knowledge in Machine Reading Comprehension"}, {"answers": ["", ""], "context": "The key problem in the data enrichment method is determining whether a word is semantically connected to another word. If so, we say that there exists an inter-word semantic connection between them. To solve this problem, we define another new concept: the extended synsets of a word. Given a word INLINEFORM0 , whose synsets are represented as a set INLINEFORM1 , we use another set INLINEFORM2 to represent its extended synsets, which includes all the synsets that are in INLINEFORM3 or that can be linked to from INLINEFORM4 through semantic relation chains. Theoretically, if there is no limitation on semantic relation chains, INLINEFORM5 will include all the synsets in WordNet, which is meaningless in most situations. Therefore, we use a hyper-parameter INLINEFORM6 to represent the permitted maximum hop count of semantic relation chains. That is to say, only the chains having no more than INLINEFORM7 hops can be used to construct INLINEFORM8 so that INLINEFORM9 becomes a function of INLINEFORM10 : INLINEFORM11 (if INLINEFORM12 , we will have INLINEFORM13 ). Based on the above statements, we formulate a heuristic rule for determining inter-word semantic connections: a word INLINEFORM14 is semantically connected to another word INLINEFORM15 if and only if INLINEFORM16 .", "id": 505, "question": "Do the authors hypothesize that humans' robustness to noise is due to their general knowledge?", "title": "Explicit Utilization of General Knowledge in Machine Reading Comprehension"}, {"answers": ["", "The classification system uses n-grams, bag-of-words, common words and hashtags as features and SVM, random forest, extra tree and NB classifiers."], "context": "\u201cLaughter is the best Medicine\u201d is a saying which is popular with most people. Humor is a form of communication that bridges the gap between various languages, cultures, ages and demographics. That's why humorous content with funny and witty hashtags is so popular on social media. It is a very powerful tool to connect with the audience. Automatic Humor Recognition is the task of determining whether a text contains some level of humorous content or not. The first conference on Computational Humor was organized in 1996, and since then much research has been done in this field. kao2016computational does pun detection in one-liners and dehumor detects humor in Yelp reviews. Because of the complex and interesting aspects involved in detecting humor in texts, it is one of the most challenging research fields in Natural Language Processing BIBREF3 . Identifying humor in a sentence sometimes requires a great amount of external knowledge to completely understand it. There are many types of humor, namely anecdotes, fantasy, insult, irony, jokes, quote, self deprecation etc. BIBREF4 , BIBREF5 . 
Most of the time, there are different meanings hidden inside a sentence, which are grasped differently by different individuals. This makes the task of humor identification difficult, and the development of a generalized algorithm to classify different types of humor a challenging task.", "id": 506, "question": "What type of system does the baseline classification use?", "title": "Humor Detection in English-Hindi Code-Mixed Social Media Content : Corpus and Baseline System"}, {"answers": [""], "context": "In this section, we explain the techniques used in the creation and annotation of the corpus.", "id": 507, "question": "What experiments were carried out on the corpus?", "title": "Humor Detection in English-Hindi Code-Mixed Social Media Content : Corpus and Baseline System"}, {"answers": ["", ""], "context": "The Python package twitterscraper is used to scrape tweets from Twitter. 10,478 tweets from the past two years from domains like `sports', `politics', `entertainment' were extracted. Among those tweets, we manually removed the tweets which were written entirely in either English or Hindi. There were 4161 tweets written in English and 2774 written in Hindi. Finally, a total of 3543 English-Hindi code-mixed tweets were collected. Table 1 describes the number of tweets and words in each category.", "id": 508, "question": "How many annotators tagged each text?", "title": "Humor Detection in English-Hindi Code-Mixed Social Media Content : Corpus and Baseline System"}, {"answers": ["", ""], "context": "The final code-mixed tweets were forwarded to a group of three annotators who were university students and fluent in both English and Hindi. Approximately 60 hours were spent in tagging tweets for the presence of humor. Tweets which consisted of any anecdotes, fantasy, irony, jokes or insults were annotated as humorous, whereas tweets stating any facts, dialogues or speech which did not contain amusement were put in the non-humorous class. Following are some examples of code-mixed tweets in the corpus:", "id": 509, "question": "Where did the texts in the corpus come from?", "title": "Humor Detection in English-Hindi Code-Mixed Social Media Content : Corpus and Baseline System"}, {"answers": ["", ""], "context": "Pre-training of language models has been shown to provide large improvements for a range of language understanding tasks BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The key idea is to train a large generative model on vast corpora and use the resulting representations on tasks for which only limited amounts of labeled data are available. Pre-training of sequence to sequence models has been previously investigated for text classification BIBREF4 but not for text generation. 
In neural machine translation, there has been work on transferring representations from high-resource language pairs to low-resource settings BIBREF5 .", "id": 510, "question": "What is the previous state-of-the-art in summarization?", "title": "Pre-trained Language Model Representations for Language Generation"}, {"answers": ["", ""], "context": "We consider augmenting a standard sequence to sequence model with pre-trained representations following an ELMo-style regime (\u00a7 SECREF2 ) as well as by fine-tuning the language model (\u00a7 SECREF3 ).", "id": 511, "question": "What dataset do they use?", "title": "Pre-trained Language Model Representations for Language Generation"}, {"answers": [""], "context": "The ELMo approach of BIBREF0 forms contextualized word embeddings based on language model representations without adjusting the actual language model parameters. Specifically, the ELMo module contains a set of parameters INLINEFORM0 to form a linear combination of the INLINEFORM1 layers of the language model: ELMo = INLINEFORM2 where INLINEFORM3 is a learned scalar, INLINEFORM4 is a constant to normalize the INLINEFORM5 to sum to one and INLINEFORM6 is the output of the INLINEFORM7 -th language model layer; the module also considers the input word embeddings of the language model. We also apply layer normalization BIBREF7 to each INLINEFORM8 before computing ELMo vectors.", "id": 512, "question": "What other models do they compare to?", "title": "Pre-trained Language Model Representations for Language Generation"}, {"answers": ["", ""], "context": "Fine-tuning the pre-trained representations adjusts the language model parameters by the learning signal of the end-task BIBREF1 , BIBREF3 . We replace learned input word embeddings in the encoder network with the output of the language model (). Specifically, we use the language model representation of the layer before the softmax and feed it to the encoder. We also add dropout to the language model output. Tuning separate learning rates for the language model and the sequence to sequence model may lead to better performance but we leave this to future work. However, we do tune the number of encoder blocks INLINEFORM0 as we found this important to obtain good accuracy for this setting. We apply the same strategy to the decoder: we input language model representations to the decoder network and fine-tune the language model when training the sequence to sequence model ().", "id": 513, "question": "What language model architectures are used?", "title": "Pre-trained Language Model Representations for Language Generation"}, {"answers": ["Words that a user wants them to appear in the generated output.", "terms common to hosts' descriptions of popular Airbnb properties, like 'subway', 'manhattan', or 'parking'"], "context": "The development of online peer-to-peer markets in the 1990s, galvanized by the launch of sites like eBay, fundamentally shifted the way buyers and sellers could connect [4]. These new markets not only leveraged technology to allow for faster transaction speeds, but in the process also exposed a variety of unprecedented market-designs [4].", "id": 514, "question": "What are the user-defined keywords?", "title": "Using General Adversarial Networks for Marketing: A Case Study of Airbnb"}, {"answers": ["", ""], "context": "Fortunately, we believe that the introduction of unsupervised generative language models presents a way in which to tackle this particular shortcoming of peer-to-peer markets. In 2014, Ian Goodfellow et. 
al proposed the general adversarial network (GAN) [5]. The group showcased how this generative model could learn to artificially replicate data patterns to an unprecedented realistic degree [5]. Since then, these models have shown tremendous potential in their ability to generate photo-realistic images and coherent text samples [5].", "id": 515, "question": "Does the method achieve sota performance on this dataset?", "title": "Using General Adversarial Networks for Marketing: A Case Study of Airbnb"}, {"answers": ["GloVe vectors trained on Wikipedia Corpus with ensembling, and GloVe vectors trained on Airbnb Data without ensembling", ""], "context": "The data for the project was acquired from Airdna, a data processing service that collaborates with Airbnb to produce high-accuracy data summaries for listings in geographic regions of the United States. For the sake of simplicity, we focus our analysis on Airbnb listings from Manhattan, NY, during the time period of January 1, 2016, to January 1, 2017. The data provided to us contained information for roughly 40,000 Manhattan listings that were posted on Airbnb during this defined time period. For each listing, we were given information of the amenities of the listing (number of bathrooms, number of bedrooms \u2026), the listing\u2019s zip code, the host\u2019s description of the listing, the price of the listing, and the occupancy rate of the listing. Airbnb defines a home's occupancy rate, as the percentage of time that a listing is occupied over the time period that the listing is available. This gives us a reasonable metric for defining popular versus less popular listings.", "id": 516, "question": "What are the baselines used in the paper?", "title": "Using General Adversarial Networks for Marketing: A Case Study of Airbnb"}, {"answers": [""], "context": "Prior to building our generative model, we sought to gain a better understanding of how less and more popular listing descriptions differed in their writing style. We defined a home\u2019s popularity via its occupancy rate metric, which we describe in the Data section. Using this popularity heuristic, we first stratified our dataset into groupings of listings at similar price points (i.e. $0-$30, $30-$60, ...). Importantly, rather than using the home\u2019s quoted price, we relied on the price per bedroom as a better metric for the cost of the listing. Having clustered our listings into these groupings, we then selected the top third of listings by occupancy rate, as part of the \u2018high popularity\u2019 group. Listings in the middle and lowest thirds by occupancy rate were labeled \u2018medium popularity\u2019 and \u2018low popularity\u2019 respectively. We then combined all of the listings with high/medium/low popularity together for our final data set.", "id": 517, "question": "What is the size of the Airbnb?", "title": "Using General Adversarial Networks for Marketing: A Case Study of Airbnb"}, {"answers": ["F1 score of 97.5 on MSR and 95.7 on AS", "MSR: 97.7 compared to 97.5 of baseline\nAS: 95.7 compared to 95.6 of baseline"], "context": "Chinese word segmentation (CWS) is a task for Chinese natural language process to delimit word boundary. CWS is a basic and essential task for Chinese which is written without explicit word delimiters and different from alphabetical languages like English. BIBREF0 treats Chinese word segmentation (CWS) as a sequence labeling task with character position tags, which is followed by BIBREF1, BIBREF2, BIBREF3. 
Traditional CWS models depend heavily on the design of features, which affects the performance of the model. To minimize the effort in feature engineering, some CWS models BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11 are developed following neural network architectures for sequence labeling tasks BIBREF12. Neural CWS models have a strong ability for feature representation, employing unigram and bigram character embeddings as input, and achieve good performance.", "id": 518, "question": "How much better is performance compared to previous state-of-the-art models?", "title": "Attention Is All You Need for Chinese Word Segmentation"}, {"answers": ["pays attention to adjacent characters and casts a localness relationship between the characters as a fixed Gaussian weight assuming the weight relies on the distance between characters", ""], "context": "The CWS task is often modelled as a graph model based on an encoder-based scoring model. The model for the CWS task is composed of an encoder to represent the input and a decoder based on the encoder to perform the actual segmentation. Figure FIGREF6 is the architecture of our model. The model feeds the sentence into the encoder. The embedding layer captures the vectors $e=(e_1,...,e_n)$ of the input character sequence $c=(c_1,...,c_n)$. The encoder maps the vector sequence $ {e}=(e_1,..,e_n)$ to two sequences of vectors, which are $ {v^b}=(v_1^b,...,v_n^b)$ and ${v^f}=(v_1^f,...v_n^f)$, as the representation of the sentence. With $v^b$ and $v^f$, the bi-affinal scorer calculates the probability of each segmentation gap and predicts the word boundaries of the input. Similar to the Transformer, the encoder is an attention network with stacked self-attention and point-wise, fully connected layers, while our encoder includes three independent directional encoders.", "id": 519, "question": "How does Gaussian-masked directional multi-head attention work?", "title": "Attention Is All You Need for Chinese Word Segmentation"}, {"answers": ["", ""], "context": "In the Transformer, the encoder is composed of a stack of N identical layers and each layer has one multi-head self-attention layer and one position-wise fully connected feed-forward layer. A residual connection is applied around each of the two sub-layers, followed by layer normalization BIBREF24. This architecture provides the Transformer with a good ability to generate representations of sentences.", "id": 520, "question": "What is meant by closed test setting?", "title": "Attention Is All You Need for Chinese Word Segmentation"}, {"answers": ["Baseline models are:\n- Chen et al., 2015a\n- Chen et al., 2015b\n- Liu et al., 2016\n- Cai and Zhao, 2016\n- Cai et al., 2017\n- Zhou et al., 2017\n- Ma et al., 2018\n- Wang et al., 2019"], "context": "Similar to scaled dot-product attention BIBREF24, Gaussian-masked directional attention can be described as a function that maps queries and key-value pairs to the representation of the input. Here queries, keys and values are all vectors. 
Standard scaled dot-product attention is calculated by dotting the query $Q$ with all keys $K$, dividing each value by $\\sqrt{d_k}$, where $d_k$ is the dimension of the keys, and applying a softmax function to generate the weights in the attention:", "id": 521, "question": "What strong baselines is the model compared to?", "title": "Attention Is All You Need for Chinese Word Segmentation"}, {"answers": ["", ""], "context": "Abusive language refers to any type of insult, vulgarity, or profanity that debases the target; it also can be anything that causes aggravation BIBREF0 , BIBREF1 . Abusive language is often reframed as, but not limited to, offensive language BIBREF2 , cyberbullying BIBREF3 , othering language BIBREF4 , and hate speech BIBREF5 .", "id": 522, "question": "Does the dataset feature only English language data?", "title": "Comparative Studies of Detecting Abusive Language on Twitter"}, {"answers": ["using tweets that one has replied to or quoted as contextual information", ""], "context": "The research community introduced various approaches to abusive language detection. Razavi et al. razavi2010offensive applied Na\u00efve Bayes, and Warner and Hirschberg warner2012detecting used Support Vector Machine (SVM), both with word-level features to classify offensive language. Xiang et al. xiang2012detecting generated topic distributions with Latent Dirichlet Allocation BIBREF12 , also using word-level features in order to classify offensive tweets.", "id": 523, "question": "What additional features and context are proposed?", "title": "Comparative Studies of Detecting Abusive Language on Twitter"}, {"answers": ["", ""], "context": "This section illustrates our implementations of traditional machine learning classifiers and neural network based models in detail. Furthermore, we describe the additional features and variant models investigated.", "id": 524, "question": "What learning models are used on the dataset?", "title": "Comparative Studies of Detecting Abusive Language on Twitter"}, {"answers": [""], "context": "We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. 
Each classifier is implemented with the following specifications:", "id": 525, "question": "What examples of the difficulties presented by the context-dependent nature of online aggression do the authors give?", "title": "Comparative Studies of Detecting Abusive Language on Twitter"}, {"answers": ["", ""], "context": "", "id": 526, "question": "Do they report results only on English data?", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media"}, {"answers": ["The authors showed few tweets where neither and implicit hatred content exist but the model was able to discriminate"], "context": "Here, the existing body of knowledge on online hate speech and offensive language and transfer learning is presented.", "id": 527, "question": "What evidence do the authors present that the model can capture some biases in data annotation and collection?", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media"}, {"answers": ["", ""], "context": "Here, we analyze the BERT transformer model on the hate speech detection task. BERT is a multi-layer bidirectional transformer encoder trained on the English Wikipedia and the Book Corpus containing 2,500M and 800M tokens, respectively, and has two models named BERTbase and BERTlarge. BERTbase contains an encoder with 12 layers (transformer blocks), 12 self-attention heads, and 110 million parameters whereas BERTlarge has 24 layers, 16 attention heads, and 340 million parameters. Extracted embeddings from BERTbase have 768 hidden dimensions BIBREF11. As the BERT model is pre-trained on general corpora while for our hate speech detection task we are dealing with social media content, a crucial step is to analyze the contextual information extracted from BERT's pre-trained layers and then fine-tune it using annotated datasets. By fine-tuning, we update the weights of an already trained model using a labelled dataset that is new to it. As input and output, BERT takes a sequence of tokens of maximum length 512 and produces a representation of the sequence in a 768-dimensional vector. BERT inserts at most two segments into each input sequence, [CLS] and [SEP]. The [CLS] embedding is the first token of the input sequence and contains the special classification embedding; we take this [CLS] token in the final hidden layer as the representation of the whole sequence in the hate speech classification task. The [SEP] token separates segments and we will not use it in our classification task. To perform the hate speech detection task, we use the BERTbase model to classify each tweet as Racism, Sexism, Neither or as Hate, Offensive, Neither in our datasets. In order to do that, we focus on fine-tuning the pre-trained BERTbase parameters. By fine-tuning, we mean training a classifier with different layers of 768 dimensions on top of the pre-trained BERTbase transformer to minimize task-specific parameters.", "id": 528, "question": "Which publicly available datasets are used?", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media"}, {"answers": ["", ""], "context": "Different layers of a neural network can capture different levels of syntactic and semantic information. The lower layers of the BERT model may contain more general information whereas the higher layers contain task-specific information BIBREF11, and we can fine-tune them with different learning rates. 
Here, four different fine-tuning approaches are implemented that exploit the pre-trained BERTbase transformer encoders for our classification task. More information about these transformer encoders' architectures is presented in BIBREF11. In the fine-tuning phase, the model is initialized with the pre-trained parameters and then fine-tuned using the labelled datasets. The different fine-tuning approaches on the hate speech detection task are depicted in Figure FIGREF8, in which $X_{i}$ is the vector representation of token $i$ in a tweet sample, and are explained in more detail as follows:", "id": 529, "question": "What baseline is used?", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media"}, {"answers": ["", ""], "context": "We first introduce the datasets used in our study and then investigate the different fine-tuning strategies for the hate speech detection task. We also include the details of our implementation and error analysis in the respective subsections.", "id": 530, "question": "What new fine-tuning methods are presented?", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media"}, {"answers": ["", "sampling tweets from specific keywords creates systematic and substantial racial biases in datasets"], "context": "We evaluate our method on two widely-studied datasets provided by Waseem and Hovy BIBREF5 and Davidson et al. BIBREF9. Waseem and Hovy BIBREF5 collected $16k$ tweets based on an initial ad-hoc approach that searched common slurs and terms related to religious, sexual, gender, and ethnic minorities. They annotated their dataset manually as racism, sexism, or neither. To extend this dataset, Waseem BIBREF23 also provided another dataset containing $6.9k$ tweets annotated by both expert and crowdsourcing users as racism, sexism, neither, or both. Since the two datasets partially overlap and used the same strategy in the definition of hateful content, we merged them following Waseem et al. BIBREF10 to make our imbalanced data a bit larger. Davidson et al. BIBREF9 used the Twitter API to accumulate 84.4 million tweets from 33,458 Twitter users containing particular terms from a pre-defined lexicon of hate speech words and phrases, called Hatebase.org. To annotate the collected tweets as Hate, Offensive, or Neither, they randomly sampled $25k$ tweets and asked users of the CrowdFlower crowdsourcing platform to label them. The distribution of the different classes in both datasets will be provided in Subsection SECREF15.", "id": 531, "question": "What are the existing biases?", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media"}, {"answers": ["Data annotation biases where tweets containing disrespectful words are annotated as hate or offensive without any presumption about the social context of the tweeters"], "context": "We find mentions of users, numbers, hashtags, URLs and common emoticons and replace them with the tokens ,,,,. We also find elongated words and convert them into a short and standard format; for example, converting yeeeessss to yes. For hashtags that include several tokens without spaces between them, we replace them with their textual counterparts; for example, we convert the hashtag \u201c#notsexist\" to \u201cnot sexist\". All punctuation marks, unknown uni-codes and extra delimiting characters are removed, but we keep all stop words because our model trains on the sequence of words in a text directly. 
We also convert all tweets to lower case.", "id": 532, "question": "What biases does their model capture?", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media"}, {"answers": ["", ""], "context": "For the implementation of our neural network, we used the pytorch-pretrained-bert library containing the pre-trained BERT model, text tokenizer, and pre-trained WordPiece. As the implementation environment, we use the Google Colaboratory tool, which is a free research tool with a Tesla K80 GPU and 12G RAM. Based on our experiments, we trained our classifier with a batch size of 32 for 3 epochs. The dropout probability is set to 0.1 for all layers. The Adam optimizer is used with a learning rate of 2e-5. As input, we tokenized each tweet with the BERT tokenizer. This includes invalid character removal, punctuation splitting, and lowercasing the words. Based on the original BERT BIBREF11, we split words into subword units using WordPiece tokenization. As tweets are short texts, we set the maximum sequence length to 64; any shorter or longer sequence is padded with zero values or truncated to the maximum length.", "id": 533, "question": "What existing approaches do they compare to?", "title": "A BERT-Based Transfer Learning Approach for Hate Speech Detection in Online Social Media"}, {"answers": ["", ""], "context": "Vector-based semantic mapping models are used to represent textual structures (words, phrases, and documents) as high-dimensional meaning vectors. Typically, these models utilize textual corpora and/or Knowledge Bases (KBs) to acquire world knowledge, which is then used to generate a vector representation for the given text in the semantic space. The goal is thus to accurately place semantically similar structures close to each other in that semantic space. On the other hand, dissimilar structures should be far apart.", "id": 534, "question": "What is the benchmark dataset?", "title": "Learning Concept Embeddings for Efficient Bag-of-Concepts Densification"}, {"answers": ["", ""], "context": "Concept/Entity Embeddings: neural embedding models have been proposed to learn distributed representations of concepts/entities. songunsupervised proposed using the popular Word2Vec model BIBREF12 to obtain the embeddings of each concept by averaging the vectors of the concept's individual words. For example, the embeddings of Microsoft Office would be obtained by averaging the embeddings of Microsoft and Office obtained from the Word2Vec model. Clearly, this method disregards the fact that the semantics of multi-word concepts is different from the semantics of their individual words. More robust concept embeddings can be learned from the concept's corresponding article and/or from the structure of the employed KB (e.g., its link graph). Such concept embedding models were proposed by hu2015entity, li2016joint, yamada2016joint, who all utilize the skip-gram model BIBREF11 , but differ in how they define the context of the target concept.", "id": 535, "question": "What are the two neural embedding models?", "title": "Learning Concept Embeddings for Efficient Bag-of-Concepts Densification"}, {"answers": ["the CRX model", ""], "context": "A main objective of learning concept embeddings is to overcome the inherent problem of data sparsity associated with the BOC representation. Here we try to learn continuous concept vectors by building upon the skip-gram embedding model BIBREF11 . 
In the conventional skip-gram model, a set of contexts is generated by sliding a context window of predefined size over the sentences of a given text corpus. The vector representation of a target word is learned with the objective of maximizing the ability to predict the surrounding words of that target word.", "id": 536, "question": "which neural embedding model works better?", "title": "Learning Concept Embeddings for Efficient Bag-of-Concepts Densification"}, {"answers": ["The number of dimensions can be reduced by up to 212 times."], "context": "In this model, we jointly learn the embeddings of both words and concepts. First, all concept mentions are identified in the given corpus. Second, contexts are generated for both words and concepts from both surrounding words and surrounding concepts. After generating all the contexts, we use the skip-gram model to jointly learn word and concept embeddings. Formally, given a training corpus of INLINEFORM0 words INLINEFORM1 , we iterate over the corpus identifying words and concept mentions and thus generating a sequence of INLINEFORM2 tokens INLINEFORM3 where INLINEFORM4 (as multi-word concepts will be counted as one token). Afterwards we train a skip-gram model aiming to maximize: DISPLAYFORM0 ", "id": 537, "question": "What is the degree of dimension reduction of the efficient aggregation method?", "title": "Learning Concept Embeddings for Efficient Bag-of-Concepts Densification"}, {"answers": ["", "English"], "context": "Low dimensional word representations (embeddings) have become a key component in modern NLP systems for language modeling, parsing, sentiment classification, and many others. These embeddings are usually derived by employing the distributional hypothesis: that similar words appear in similar contexts BIBREF0 .", "id": 538, "question": "For which languages do they build word embeddings?", "title": "Incorporating Subword Information into Matrix Factorization Word Embeddings"}, {"answers": [""], "context": "Word embeddings that leverage subword information were first introduced by BIBREF14 , which represented a word as the sum of four-gram vectors obtained by running an SVD of a four-gram to four-gram co-occurrence matrix. Our model differs by learning the subword vectors and the resulting representation jointly as the weighted factorization of a word-context co-occurrence matrix is performed.", "id": 539, "question": "How do they evaluate their resulting word embeddings?", "title": "Incorporating Subword Information into Matrix Factorization Word Embeddings"}, {"answers": ["", ""], "context": "The LexVec BIBREF7 model factorizes the PPMI-weighted word-context co-occurrence matrix using stochastic gradient descent. ", "id": 540, "question": "What types of subwords do they incorporate in their model?", "title": "Incorporating Subword Information into Matrix Factorization Word Embeddings"}, {"answers": ["", ""], "context": "Our experiments aim to measure if the incorporation of subword information into LexVec results in similar improvements as observed in moving from Skip-gram to fastText, and whether unsupervised morphemes offer any advantage over n-grams. For IV words, we perform intrinsic evaluation via word similarity and word analogy tasks, as well as downstream tasks. 
OOV word representation is tested through qualitative nearest-neighbor analysis.", "id": 541, "question": "Which matrix factorization methods do they use?", "title": "Incorporating Subword Information into Matrix Factorization Word Embeddings"}, {"answers": ["", ""], "context": "Distributed word representations, commonly referred to as word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , serve as elementary building blocks in the course of algorithm design for an expanding range of applications in natural language processing (NLP), including named entity recognition BIBREF4 , BIBREF5 , parsing BIBREF6 , sentiment analysis BIBREF7 , BIBREF8 , and word-sense disambiguation BIBREF9 . Although the empirical utility of word embeddings as an unsupervised method for capturing the semantic or syntactic features of a certain word as it is used in a given lexical resource is well-established BIBREF10 , BIBREF11 , BIBREF12 , an understanding of what these features mean remains an open problem BIBREF13 , BIBREF14 and as such word embeddings mostly remain a black box. It is desirable to be able to develop insight into this black box and be able to interpret what it means, while retaining the utility of word embeddings as semantically-rich intermediate representations. Other than the intrinsic value of this insight, this would not only allow us to explain and understand how algorithms work BIBREF15 , but also set a ground that would facilitate the design of new algorithms in a more deliberate way.", "id": 542, "question": "Do they report results only on English data?", "title": "Imparting Interpretability to Word Embeddings while Preserving Semantic Structure"}, {"answers": ["Human evaluation for interpretability using the word intrusion test and automated evaluation for interpretability using a semantic category-based approach based on the method and category dataset (SEMCAT).", ""], "context": "Methodologically, our work is related to prior studies that aim to obtain \u201cimproved\u201d word embeddings using external lexical resources, under some performance metric. Previous work in this area can be divided into two main categories: works that i) modify the word embedding learning algorithm to incorporate lexical information, ii) operate on pre-trained embeddings with a post-processing step.", "id": 543, "question": "What experiments do they use to quantify the extent of interpretability?", "title": "Imparting Interpretability to Word Embeddings while Preserving Semantic Structure"}, {"answers": [""], "context": "For the task of unsupervised word embedding extraction, we operate on a discrete collection of lexical units (words) INLINEFORM0 that is part of an input corpus INLINEFORM1 , with number of tokens INLINEFORM2 , sourced from a vocabulary INLINEFORM3 of size INLINEFORM4 . In the setting of distributional semantics, the objective of a word embedding algorithm is to maximize some aggregate utility over the entire corpus so that some measure of \u201ccloseness\u201d is maximized for pairs of vector representations INLINEFORM14 for words which, on the average, appear in proximity to one another. 
In the GloVe algorithm BIBREF2 , which we base our improvements upon, the following objective function is considered: DISPLAYFORM0 ", "id": 544, "question": "Along which dimension do the semantically related words take larger values?", "title": "Imparting Interpretability to Word Embeddings while Preserving Semantic Structure"}, {"answers": ["The cost function for any one of the words of concept word-groups is modified by the introduction of an additive term to the cost function. . Each embedding vector dimension is first associated with a concept. For a word belonging to any one of the word-groups representing these concepts, the modified cost term favors an increase for the value of this word's embedding vector dimension corresponding to the concept that the particular word belongs to,", "An additive term added to the cost function for any one of the words of concept word-groups"], "context": "Our approach falls into a joint-learning framework where the distributional information extracted from the corpus is allowed to fuse with the external lexicon-based information. Word-groups extracted from Roget's Thesaurus are directly mapped to individual dimensions of word embeddings. Specifically, the vector representations of words that belong to a particular group are encouraged to have deliberately increased values in a particular dimension that corresponds to the word-group under consideration. This can be achieved by modifying the objective function of the embedding algorithm to partially influence vector representation distributions across their dimensions over an input vocabulary. To do this, we propose the following modification to the GloVe objective in ( EQREF6 ): rCl J = i,j=1V f(Xij)[ (wiTwj + bi + bj -Xij)2", "id": 545, "question": "What is the additive modification to the objective function?", "title": "Imparting Interpretability to Word Embeddings while Preserving Semantic Structure"}, {"answers": ["", "", ""], "context": "Twitter is a widely used microblogging platform, where users post and interact with messages, \u201ctweets\u201d. Understanding the semantic representation of tweets can benefit a plethora of applications such as sentiment analysis BIBREF0 , BIBREF1 , hashtag prediction BIBREF2 , paraphrase detection BIBREF3 and microblog ranking BIBREF4 , BIBREF5 . However, tweets are difficult to model as they pose several challenges such as short length, informal words, unusual grammar and misspellings. Recently, researchers are focusing on leveraging unsupervised representation learning methods based on neural networks to solve this problem. Once these representations are learned, we can use off-the-shelf predictors taking the representation as input to solve the downstream task BIBREF6 , BIBREF7 . These methods enjoy several advantages: (1) they are cheaper to train, as they work with unlabelled data, (2) they reduce the dependence on domain level experts, and (3) they are highly effective across multiple applications, in practice.", "id": 546, "question": "Which dataset do they use?", "title": "Improving Distributed Representations of Tweets - Present and Future"}, {"answers": ["", ""], "context": "There are various models spanning across different model architectures and objective functions in the literature to compute tweet representation in an unsupervised fashion. These models work in a semi-supervised way - the representations generated by the model is fed to an off-the-shelf predictor like Support Vector Machines (SVM) to solve a particular downstream task. 
These models span across a wide variety of neural network based architectures including average of word vectors, convolutional-based, recurrent-based and so on. We believe that the performance of these models is highly dependent on the objective function it optimizes \u2013 predicting adjacent word (within-tweet relationships), adjacent tweet (inter-tweet relationships), the tweet itself (autoencoder), modeling from structured resources like paraphrase databases and weak supervision. In this section, we provide the first of its kind survey of the recent tweet-specific unsupervised models in an organized fashion to understand the literature. Specifically, we categorize each model based on the optimized objective function as shown in Figure FIGREF1 . Next, we study each category one by one.", "id": 547, "question": "Do they evaluate their learned representations on downstream tasks?", "title": "Improving Distributed Representations of Tweets - Present and Future"}, {"answers": [""], "context": "Motivation: Every tweet is assumed to have a latent topic vector, which influences the distribution of the words in the tweet. For example, though the appearance of the phrase catch the ball is frequent in the corpus, if we know that the topic of a tweet is about \u201ctechnology\u201d, we can expect words such as bug or exception after the word catch (ignoring the) instead of the word ball since catch the bug/exception is more plausible under the topic \u201ctechnology\u201d. On the other hand, if the topic of the tweet is about \u201csports\u201d, then we can expect ball after catch. These intuitions indicate that the prediction of neighboring words for a given word strongly relies on the tweet also.", "id": 548, "question": "Which representation learning architecture do they adopt?", "title": "Improving Distributed Representations of Tweets - Present and Future"}, {"answers": ["They group the existing works in terms of the objective function they optimize - within-tweet relationships, inter-tweet relationships, autoencoder, and weak supervision."], "context": "Motivation: To capture rich tweet semantics, researchers are attempting to exploit a type of sentence-level Distributional Hypothesis BIBREF10 , BIBREF13 . The idea is to infer the tweet representation from the content of adjacent tweets in a related stream like users' Twitter timeline, topical, retweet and conversational stream. This approach significantly alleviates the context insufficiency problem caused due to the ambiguous and short nature of tweets BIBREF0 , BIBREF14 .", "id": 549, "question": "How do they encourage understanding of literature as part of their objective function?", "title": "Improving Distributed Representations of Tweets - Present and Future"}, {"answers": ["", ""], "context": "Lexical analysis, syntactic analysis, semantic analysis, disclosure analysis and pragmatic analysis are five main steps in natural language processing BIBREF0 , BIBREF1 . While morphology is a basic task in lexical analysis of English, word segmentation is considered a basic task in lexical analysis of Vietnamese and other East Asian languages processing. This task is to determine borders between words in a sentence. 
In other words, it is segmenting a list of tokens into a list of words such that words are meaningful.", "id": 550, "question": "What are the limitations of existing Vietnamese word segmentation systems?", "title": "State-of-the-Art Vietnamese Word Segmentation"}, {"answers": ["Acquire very large Vietnamese corpus and build a classifier with it, design a develop a big data warehouse and analytic framework, build a system to incrementally learn new corpora and interactively process feedback.", ""], "context": "Vietnamese, like many languages in continental East Asia, is an isolating language and one branch of Mon-Khmer language group. The most basic linguistic unit in Vietnamese is morpheme, similar with syllable or token in English and \u201ch\u00ecnh v\u1ecb\u201d (phoneme) or \u201cti\u1ebfng\u201d (syllable) in Vietnamese. According to the structured rule of its, Vietnamese can have about 20,000 different syllables (tokens). However, there are about 8,000 syllables used the Vietnamese dictionaries. There are three methods to identify morphemes in Vietnamese text BIBREF10 .", "id": 551, "question": "Why challenges does word segmentation in Vietnamese pose?", "title": "State-of-the-Art Vietnamese Word Segmentation"}, {"answers": ["Their accuracy in word segmentation is about 94%-97%."], "context": "In Vietnamese, not all of meaningful proper names are in the dictionary. Identifying proper names in input text are also important issue in word segmentation. This issue is sometimes included into unknown word issue to be solved. In addition, named entity recognition has to classify it into several types such as person, location, organization, time, money, number, and so on.", "id": 552, "question": "How successful are the approaches used to solve word segmentation in Vietnamese?", "title": "State-of-the-Art Vietnamese Word Segmentation"}, {"answers": ["", ""], "context": "In general, building corpus is carried out through four stages: (1) choose target of corpus and source of raw data; (2) building a guideline based on linguistics knowledge for annotation; (3) annotating or tagging corpus based on rule set in the guideline; and (4) reviewing corpus to check the consistency issue.", "id": 553, "question": "Which approaches have been applied to solve word segmentation in Vietnamese?", "title": "State-of-the-Art Vietnamese Word Segmentation"}, {"answers": ["mainstream news and disinformation", ""], "context": "In recent years there has been increasing interest on the issue of disinformation spreading on online social media. 
Global concern over false (or \"fake\") news as a threat to modern democracies has been frequently raised\u2013ever since 2016 US Presidential elections\u2013in correspondence of events of political relevance, where the proliferation of manipulated and low-credibility content attempts to drive and influence people opinions BIBREF0BIBREF1BIBREF2BIBREF3.", "id": 554, "question": "Which two news domains are country-independent?", "title": "A multi-layer approach to disinformation detection on Twitter"}, {"answers": ["By assigning a political bias label to each news article and training only on left-biased or right-biased outlets of both disinformation and mainstream domains", ""], "context": "In this work we formulate our classification problem as follows: given two classes of news articles, respectively $D$ (disinformation) and $M$ (mainstream), a set of news articles $A_i$ and associated class labels $C_i \\in \\lbrace D,M\\rbrace $, and a set of tweets $\\Pi _i=\\lbrace T_i^1, T_i^2, ...\\rbrace $ each of which contains an Uniform Resource Locator (URL) pointing explicitly to article $A_i$, predict the class $C_i$ of each article $A_i$. There is huge debate and controversy on a proper taxonomy of malicious and deceptive information BIBREF1BIBREF2BIBREF15BIBREF16BIBREF17BIBREF3BIBREF11. In this work we prefer the term disinformation to the more specific fake news to refer to a variety of misleading and harmful information. Therefore, we follow a source-based approach, a consolidated strategy also adopted by BIBREF6BIBREF16BIBREF2BIBREF1, in order to obtain relevant data for our analysis. We collected:", "id": 555, "question": "How is the political bias of different sources included in the model?", "title": "A multi-layer approach to disinformation detection on Twitter"}, {"answers": ["", ""], "context": "We collected tweets associated to a dozen US mainstream news websites, i.e. most trusted sources described in BIBREF18, with the Streaming API, and we referred to Hoaxy API BIBREF16 for what concerns tweets containing links to 100+ US disinformation outlets. We filtered out articles associated to less than 50 tweets. The resulting dataset contains overall $\\sim $1.7 million tweets for mainstream news, collected in a period of three weeks (February 25th, 2019-March 18th, 2019), which are associated to 6,978 news articles, and $\\sim $1.6 million tweets for disinformation, collected in a period of three months (January 1st, 2019-March 18th, 2019) for sake of balance of the two classes, which hold 5,775 distinct articles. Diffusion censoring effects BIBREF14 were correctly taken into account in both collection procedures. We provide in Figure FIGREF4 the distribution of articles by source and political bias for both news domains.", "id": 556, "question": "What are the two large-scale datasets used?", "title": "A multi-layer approach to disinformation detection on Twitter"}, {"answers": [""], "context": "For what concerns the Italian scenario we first collected tweets with the Streaming API in a 3-week period (April 19th, 2019-May 5th, 2019), filtering those containing URLs pointing to Italian official newspapers websites as described in BIBREF22; these correspond to the list provided by the association for the verification of newspaper circulation in Italy (Accertamenti Diffusione Stampa). 
We instead referred to the dataset provided by BIBREF23 to obtain a set of tweets, collected continuously since January 2019 using the same Twitter endpoint, which contain URLs to 60+ Italian disinformation websites. In order to get balanced classes (April 5th, 2019-May 5th, 2019), we retained data collected in a longer period w.r.t to mainstream news. In both cases we filtered out articles with less than 50 tweets; overall this dataset contains $\\sim $160k mainstream tweets, corresponding to 227 news articles, and $\\sim $100k disinformation tweets, corresponding to 237 news articles. We provide in Figure FIGREF5 the distribution of articles according to distinct sources for both news domains. As in the US dataset, we took into account censoring effects BIBREF14 by excluding tweets published before (left-censoring) or after two weeks (right-censoring) from the beginning of the collection process.", "id": 557, "question": "What are the global network features which quantify different aspects of the sharing process?", "title": "A multi-layer approach to disinformation detection on Twitter"}, {"answers": ["", ""], "context": "Understanding what a question is asking is one of the first steps that humans use to work towards an answer. In the context of question answering, question classification allows automated systems to intelligently target their inference systems to domain-specific solvers capable of addressing specific kinds of questions and problem solving methods with high confidence and answer accuracy BIBREF0 , BIBREF1 .", "id": 558, "question": "Which datasets are used for evaluation?", "title": "Multi-class Hierarchical Question Classification for Multiple Choice Science Exams"}, {"answers": [""], "context": "Question classification typically makes use of a combination of syntactic, semantic, surface, and embedding methods. Syntactic patterns BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 and syntactic dependencies BIBREF3 have been shown to improve performance, while syntactically or semantically important words are often expanding using Wordnet hypernyms or Unified Medical Language System categories (for the medical domain) to help mitigate sparsity BIBREF22 , BIBREF23 , BIBREF24 . Keyword identification helps identify specific terms useful for classification BIBREF25 , BIBREF3 , BIBREF26 . Similarly, named entity recognizers BIBREF6 , BIBREF27 or lists of semantically related words BIBREF6 , BIBREF24 can also be used to establish broad topics or entity categories and mitigate sparsity, as can word embeddings BIBREF28 , BIBREF29 . Here, we empirically demonstrate many of these existing methods do not transfer to the science domain.", "id": 559, "question": "What previous methods is their model compared to?", "title": "Multi-class Hierarchical Question Classification for Multiple Choice Science Exams"}, {"answers": ["", ""], "context": "Questions: We make use of the 7,787 science exam questions of the Aristo Reasoning Challenge (ARC) corpus BIBREF31 , which contains standardized 3rd to 9th grade science questions from 12 US states from the past decade. Each question is a 4-choice multiple choice question. 
Summary statistics comparing the complexity of ARC and TREC questions are shown in Table TABREF5 .", "id": 560, "question": "Did they use a crowdsourcing platform?", "title": "Multi-class Hierarchical Question Classification for Multiple Choice Science Exams"}, {"answers": ["from 3rd to 9th grade science questions collected from 12 US states", "Used from science exam questions of the Aristo Reasoning Challenge (ARC) corpus."], "context": "We identified 5 common models in previous work primarily intended for learned classifiers rather than hand-crafted rules. We adapt these models to a multi-label hierarchical classification task by training a series of one-vs-all binary classifiers BIBREF34 , one for each label in the taxonomy. With the exception of the CNN and BERT models, following previous work BIBREF19 , BIBREF3 , BIBREF8 we make use of an SVM classifier using the LIBSvM framework BIBREF35 with a linear kernel. Models are trained and evaluated from coarse to fine levels of taxonomic specificity. At each level of taxonomic evaluation, a set of non-overlapping confidence scores for each binary classifier are generated and sorted to produce a list of ranked label predictions. We evaluate these ranks using Mean Average Precision BIBREF36 . ARC questions are evaluated using the standard 3,370 questions for training, 869 for development, and 3,548 for testing.", "id": 561, "question": "How was the dataset collected?", "title": "Multi-class Hierarchical Question Classification for Multiple Choice Science Exams"}, {"answers": ["", ""], "context": " This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/", "id": 562, "question": "Which datasets do they use?", "title": "Neural Collective Entity Linking"}, {"answers": [""], "context": "We denote INLINEFORM0 as a set of entity mentions in a document INLINEFORM1 , where INLINEFORM2 is either a word INLINEFORM3 or a mention INLINEFORM4 . INLINEFORM5 is the entity graph for document INLINEFORM6 derived from the given knowledge base, where INLINEFORM7 is a set of entities, INLINEFORM8 denotes the relatedness between INLINEFORM9 and higher values indicate stronger relations. Based on INLINEFORM10 , we extract a subgraph INLINEFORM11 for INLINEFORM12 , where INLINEFORM13 denotes the set of candidate entities for INLINEFORM14 . Note that we don't include the relations among candidates of the same mention in INLINEFORM15 because these candidates are mutually exclusive in disambiguation.", "id": 563, "question": "How effective is their NCEL approach overall?", "title": "Neural Collective Entity Linking"}, {"answers": ["By calculating Macro F1 metric at the document level.", "by evaluating their model on five different benchmarks"], "context": "Similar to previous work BIBREF24 , we use the prior probability INLINEFORM0 of entity INLINEFORM1 conditioned on mention INLINEFORM2 both as a local feature and to generate candidate entities: INLINEFORM3 . We compute INLINEFORM4 based on statistics of mention-entity pairs from: (i) Wikipedia page titles, redirect titles and hyperlinks, (ii) the dictionary derived from a large Web Corpus BIBREF27 , and (iii) the YAGO dictionary with a uniform distribution BIBREF22 . We pick up the maximal prior if a mention-entity pair occurs in different resources. In experiments, to optimize for memory and run time, we keep only top INLINEFORM5 entities based on INLINEFORM6 . 
In the following two sections, we will present the key components of NECL, namely feature extraction and neural network for collective entity linking.", "id": 564, "question": "How do they verify generalization ability?", "title": "Neural Collective Entity Linking"}, {"answers": ["NCEL considers only adjacent mentions.", "More than that in some cases (next to adjacent) "], "context": "The main goal of NCEL is to find a solution for collective entity linking using an end-to-end neural model, rather than to improve the measurements of local textual similarity or global mention/entity relatedness. Therefore, we use joint embeddings of words and entities at sense level BIBREF28 to represent mentions and its contexts for feature extraction. In this section, we give a brief description of our embeddings followed by our features used in the neural model.", "id": 565, "question": "Do they only use adjacent entity mentions or use more than that in some cases (next to adjacent)?", "title": "Neural Collective Entity Linking"}, {"answers": ["", ""], "context": "Deep contextualised representations of linguistic entities (words and/or sentences) are used in many current state-of-the-art NLP systems. The most well-known examples of such models are arguably ELMo BIBREF0 and BERT BIBREF1.", "id": 566, "question": "Do the authors mention any downside of lemmatizing input before training ELMo?", "title": "To lemmatize or not to lemmatize: how word normalisation affects ELMo performance in word sense disambiguation"}, {"answers": ["", ""], "context": "ELMo contextual word representations are learned in an unsupervised way through language modelling BIBREF0. The general architecture consists of a two-layer BiLSTM on top of a convolutional layer which takes character sequences as its input. Since the model uses fully character-based token representations, it avoids the problem of out-of-vocabulary words. Because of this, the authors explicitly recommend not to use any normalisation except tokenization for the input text. However, as we show below, while this is true for English, for other languages feeding ELMo with lemmas instead of raw tokens can improve WSD performance.", "id": 567, "question": "What other examples of morphologically-rich languages do the authors give?", "title": "To lemmatize or not to lemmatize: how word normalisation affects ELMo performance in word sense disambiguation"}, {"answers": ["Advanced neural architectures and contextualized embedding models learn how to handle spelling and morphology variations."], "context": "For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). 
As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.", "id": 568, "question": "Why is lemmatization not necessary in English?", "title": "To lemmatize or not to lemmatize: how word normalisation affects ELMo performance in word sense disambiguation"}, {"answers": ["2174000000, 989000000", "2174 million tokens for English and 989 million tokens for Russian"], "context": "We used two WSD datasets for evaluation:", "id": 569, "question": "How big was the corpora they trained ELMo on?", "title": "To lemmatize or not to lemmatize: how word normalisation affects ELMo performance in word sense disambiguation"}, {"answers": ["", ""], "context": "Rendering natural language descriptions from structured data is required in a wide variety of commercial applications such as generating descriptions of products, hotels, furniture, etc., from a corresponding table of facts about the entity. Such a table typically contains {field, value} pairs where the field is a property of the entity (e.g., color) and the value is a set of possible assignments to this property (e.g., color = red). Another example of this is the recently introduced task of generating one line biography descriptions from a given Wikipedia infobox BIBREF0 . The Wikipedia infobox serves as a table of facts about a person and the first sentence from the corresponding article serves as a one line description of the person. Figure FIGREF2 illustrates an example input infobox which contains fields such as Born, Residence, Nationality, Fields, Institutions and Alma Mater. Each field further contains some words (e.g., particle physics, many-body theory, etc.). The corresponding description is coherent with the information contained in the infobox.", "id": 570, "question": "What metrics are used for evaluation?", "title": "Generating Descriptions from Structured Data Using a Bifocal Attention Mechanism and Gated Orthogonalization"}, {"answers": ["", ""], "context": "Natural Language Generation has always been of interest to the research community and has received a lot of attention in the past. The approaches for NLG range from (i) rule based approaches (e.g., BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ) (ii) modular statistical approaches which divide the process into three phases (planning, selection and surface realization) and use data driven approaches for one or more of these phases BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 (iii) hybrid approaches which rely on a combination of handcrafted rules and corpus statistics BIBREF20 , BIBREF21 , BIBREF22 and (iv) the more recent neural network based models BIBREF1 .", "id": 571, "question": "Do they use pretrained embeddings?", "title": "Generating Descriptions from Structured Data Using a Bifocal Attention Mechanism and Gated Orthogonalization"}, {"answers": ["English WIKIBIO, French WIKIBIO , German WIKIBIO ", ""], "context": "As input we are given an infobox INLINEFORM0 , which is a set of pairs INLINEFORM1 where INLINEFORM2 corresponds to field names and INLINEFORM3 is the sequence of corresponding values and INLINEFORM4 is the total number of fields in INLINEFORM5 . For example, INLINEFORM6 could be one such pair in this set. Given such an input, the task is to generate a description INLINEFORM7 containing INLINEFORM8 words. A simple solution is to treat the infobox as a sequence of fields followed by the values corresponding to the field in the order of their appearance in the infobox. 
For example, the infobox could be flattened to produce the following input sequence (the words in bold are field names which act as delimiters)", "id": 572, "question": "What dataset is used?", "title": "Generating Descriptions from Structured Data Using a Bifocal Attention Mechanism and Gated Orthogonalization"}, {"answers": [""], "context": "Intuitively, when a human writes a description from a table she keeps track of information at two levels. At the macro level, it is important to decide which is the appropriate field to attend to next and at a micro level (i.e., within a field) it is important to know which values to attend to next. To capture this behavior, we use a bifocal attention mechanism as described below.", "id": 573, "question": "What is a bifocal attention mechanism?", "title": "Generating Descriptions from Structured Data Using a Bifocal Attention Mechanism and Gated Orthogonalization"}, {"answers": ["", "The expected number of unique outputs a word recognition system assigns to a set of adversarial perturbations ", ""], "context": "Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures.", "id": 574, "question": "What does the \"sensitivity\" quantity denote?", "title": "Combating Adversarial Misspellings with Robust Word Recognition"}, {"answers": ["Sentiment analysis and paraphrase detection under adversarial attacks"], "context": "Several papers address adversarial attacks on NLP systems. Changes to text, whether word- or character-level, are all perceptible, raising some questions about what should rightly be considered an adversarial example BIBREF8 , BIBREF9 . BIBREF10 address the reading comprehension task, showing that by appending distractor sentences to the end of stories from the SQuAD dataset BIBREF11 , they could cause models to output incorrect answers. Inspired by this work, BIBREF12 demonstrate an attack that breaks entailment systems by replacing a single word with either a synonym or its hypernym. Recently, BIBREF13 investigated the problem of producing natural-seeming adversarial examples, noting that adversarial examples in NLP are often ungrammatical BIBREF14 .", "id": 575, "question": "What end tasks do they evaluate on?", "title": "Combating Adversarial Misspellings with Robust Word Recognition"}, {"answers": ["A semi-character based RNN (ScRNN) treats the first and last characters individually, and is agnostic to the ordering of the internal characters", ""], "context": "To tackle character-level adversarial attacks, we introduce a simple two-stage solution, placing a word recognition model ( $W$ ) before the downstream classifier ( $C$ ). Under this scheme, all inputs are classified by the composed model $C \\circ W$ . 
This modular approach, with $W$ and $C$ trained separately, offers several benefits: (i) we can deploy the same word recognition model for multiple downstream classification tasks/models; and (ii) we can train the word recognition model with larger unlabeled corpora.", "id": 576, "question": "What is a semicharacter architecture?", "title": "Combating Adversarial Misspellings with Robust Word Recognition"}, {"answers": [""], "context": "We now describe semi-character RNNs for word recognition, explain their limitations, and suggest techniques to improve them.", "id": 577, "question": "Do they experiment with offering multiple candidate corrections and voting on the model output, since this seems highly likely to outperform a one-best correction?", "title": "Combating Adversarial Misspellings with Robust Word Recognition"}, {"answers": ["Adversarial misspellings are a real-world problem"], "context": "In computer vision, an important factor determining the success of an adversary is the norm constraint on the perturbations allowed to an image ( $|| \\bf x - \\bf x^{\\prime }||_{\\infty } < \\epsilon $ ). Higher values of $\\epsilon $ lead to a higher chance of mis-classification for at least one $\\bf x^{\\prime }$ . Defense methods such as quantization BIBREF22 and thermometer encoding BIBREF23 try to reduce the space of perturbations available to the adversary by making the model invariant to small changes in the input.", "id": 578, "question": "Why is the adversarial setting appropriate for misspelling recognition?", "title": "Combating Adversarial Misspellings with Robust Word Recognition"}, {"answers": [""], "context": "Suppose we are given a classifier $C: \\mathcal {S} \\rightarrow \\mathcal {Y}$ which maps natural language sentences $s \\in \\mathcal {S}$ to a label from a predefined set $y \\in \\mathcal {Y}$ . An adversary for this classifier is a function $A$ which maps a sentence $s$ to its perturbed versions $\\lbrace s^{\\prime }_1, s^{\\prime }_2, \\ldots , s^{\\prime }_{n}\\rbrace $ such that each $s^{\\prime }_i$ is close to $s$ under some notion of distance between sentences. 
We define the robustness of classifier $C$ to the adversary $A$ as: ", "id": 579, "question": "Why do they experiment with RNNs instead of transformers for this task?", "title": "Combating Adversarial Misspellings with Robust Word Recognition"}, {"answers": ["In pass-through, the recognizer passes on the possibly misspelled word, backoff to neutral word backs off to a word with similar distribution across classes and backoff to background model backs off to a more generic word recognition model trained with larger and less specialized corpus.", "Pass-through passes the possibly misspelled word as is, backoff to neutral word backs off to a word with similar distribution across classes and backoff to background model backs off to a more generic word recognition model trained with larger and less specialized corpus.", "Backoff to \"a\" when an UNK-predicted word is encountered, backoff to a more generic word recognition model when the model predicts UNK"], "context": "In this section, we first discuss our experiments on the word recognition systems.", "id": 580, "question": "How do the backoff strategies work?", "title": "Combating Adversarial Misspellings with Robust Word Recognition"}, {"answers": ["", ""], "context": "Semantic Role Labeling (SRL) has emerged as an important task in Natural Language Processing (NLP) due to its applicability in information extraction, question answering, and other NLP tasks. SRL is the problem of finding predicate-argument structure in a sentence, as illustrated below:", "id": 581, "question": "What baseline model is used?", "title": "A Bayesian Model of Multilingual Unsupervised Semantic Role Induction"}, {"answers": ["", ""], "context": "As established in previous work BIBREF7 , BIBREF8 , we use a standard unsupervised SRL setup, consisting of the following steps:", "id": 582, "question": "Which additional latent variables are used in the model?", "title": "A Bayesian Model of Multilingual Unsupervised Semantic Role Induction"}, {"answers": ["", ""], "context": "We use the Bayesian model of garg2012unsupervised as our base monolingual model. The semantic roles are predicate-specific. To model the role ordering and repetition preferences, the role inventory for each predicate is divided into Primary and Secondary roles as follows:", "id": 583, "question": "Which parallel corpora are used?", "title": "A Bayesian Model of Multilingual Unsupervised Semantic Role Induction"}, {"answers": ["", ""], "context": "The multilingual model uses word alignments between sentences in a parallel corpus to exploit role correspondences across languages. We make copies of the monolingual model for each language and add additional crosslingual latent variables (CLVs) to couple the monolingual models, capturing crosslingual semantic role patterns. Concretely, when training on parallel sentences, whenever the head words of the arguments are aligned, we add a CLV as a parent of the two corresponding role variables. Figure FIGREF16 illustrates this model. 
The generative process, as explained below, remains the same as the monolingual model for the most part, with the exception of aligned roles which are now generated by both the monolingual process as well as the CLV.", "id": 584, "question": "Overall, does having parallel data improve semantic role induction across multiple languages?", "title": "A Bayesian Model of Multilingual Unsupervised Semantic Role Induction"}, {"answers": [""], "context": "The inference problem consists of predicting the role labels and CLVs (the hidden variables) given the predicate, its voice, and syntactic features of all the identified arguments (the visible variables). We use a collapsed Gibbs-sampling based approach to generate samples for the hidden variables (model parameters are integrated out). The sample counts and the priors are then used to calculate the MAP estimate of the model parameters.", "id": 585, "question": "Do they add one latent variable for each language pair in their Bayesian model?", "title": "A Bayesian Model of Multilingual Unsupervised Semantic Role Induction"}, {"answers": [""], "context": "Following the setting of titovcrosslingual, we evaluate only on the arguments that were correctly identified, as the incorrectly identified arguments do not have any gold semantic labels. Evaluation is done using the metric proposed by lang2011unsupervised, which has 3 components: (i) Purity (PU) measures how well an induced cluster corresponds to a single gold role, (ii) Collocation (CO) measures how well a gold role corresponds to a single induced cluster, and (iii) F1 is the harmonic mean of PU and CO. For each predicate, let INLINEFORM0 denote the total number of argument instances, INLINEFORM1 the instances in the induced cluster INLINEFORM2 , and INLINEFORM3 the instances having label INLINEFORM4 in gold annotations. INLINEFORM5 , INLINEFORM6 , and INLINEFORM7 . The score for each predicate is weighted by the number of its argument instances, and a weighted average is computed over all the predicates.", "id": 586, "question": "What does an individual model consist of?", "title": "A Bayesian Model of Multilingual Unsupervised Semantic Role Induction"}, {"answers": ["", ""], "context": "We use the same baseline as used by lang2011unsupervised which has been shown to be difficult to outperform. This baseline assigns a semantic role to a constituent based on its syntactic function, i.e. the dependency relation to its head. If there is a total of INLINEFORM0 clusters, INLINEFORM1 most frequent syntactic functions get a cluster each, and the rest are assigned to the INLINEFORM2 th cluster.", "id": 587, "question": "Do they improve on state-of-the-art semantic role induction?", "title": "A Bayesian Model of Multilingual Unsupervised Semantic Role Induction"}, {"answers": ["", ""], "context": "When people shop for books online in e-book stores such as, e.g., the Amazon Kindle store, they enter search terms with the goal to find e-books that meet their preferences. Such e-books have a variety of metadata such as, e.g., title, author or keywords, which can be used to retrieve e-books that are relevant to the query. 
As a consequence, from the perspective of e-book publishers and editors, annotating e-books with tags that best describe the content and which meet the vocabulary of users (e.g., when searching and reviewing e-books) is an essential task BIBREF0 .", "id": 588, "question": "how many tags do they look at?", "title": "Evaluating Tag Recommendations for E-Book Annotation Using a Semantic Similarity Metric"}, {"answers": ["A hybrid model consisting of best performing popularity-based approach with the best similarity-based approach"], "context": "In this section, we describe our dataset as well as our tag recommendation approaches we propose to annotate e-books.", "id": 589, "question": "which algorithm was the highest performer?", "title": "Evaluating Tag Recommendations for E-Book Annotation Using a Semantic Similarity Metric"}, {"answers": ["", ""], "context": "Our dataset contains two sources of data, one to generate tag recommendations and another one to evaluate tag recommendations. HGV GmbH has collected all data sources and we provide the dataset statistics in Table TABREF3 .", "id": 590, "question": "how is diversity measured?", "title": "Evaluating Tag Recommendations for E-Book Annotation Using a Semantic Similarity Metric"}, {"answers": ["", ""], "context": "We implement three types of tag recommendation approaches, i.e., (i) popularity-based, (ii) similarity-based (i.e., using content information), and (iii) hybrid approaches. Due to the lack of personalized tags (i.e., we do not know which user has assigned a tag), we do not implement other types of algorithms such as collaborative filtering BIBREF8 . In total, we evaluate 19 different algorithms to recommend tags for annotating e-books.", "id": 591, "question": "how large is the vocabulary?", "title": "Evaluating Tag Recommendations for E-Book Annotation Using a Semantic Similarity Metric"}, {"answers": ["", " E-book annotation data: editor tags, Amazon search terms, and Amazon review keywords."], "context": "In this section, we describe our evaluation protocol as well as the measures we use to evaluate and compare our tag recommendation approaches.", "id": 592, "question": "what dataset was used?", "title": "Evaluating Tag Recommendations for E-Book Annotation Using a Semantic Similarity Metric"}, {"answers": [""], "context": "For evaluation, we use the third set of e-book annotations, namely Amazon review keywords. As described in Section SECREF1 , these review keywords are extracted from the Amazon review texts and thus, reflect the users' vocabulary. We evaluate our approaches for the 2,896 e-books, for whom we got review keywords. To follow common practice for tag recommendation evaluation BIBREF14 , we predict the assigned review keywords (= our test set) for respective e-books.", "id": 593, "question": "what algorithms did they use?", "title": "Evaluating Tag Recommendations for E-Book Annotation Using a Semantic Similarity Metric"}, {"answers": [""], "context": "Since humans amass more and more generally available data in the form of unstructured text it would be very useful to teach machines to read and comprehend such data and then use this understanding to answer our questions. A significant amount of research has recently focused on answering one particular kind of questions the answer to which depends on understanding a context document. These are cloze-style questions BIBREF0 which require the reader to fill in a missing word in a sentence. 
An important advantage of such questions is that they can be generated automatically from a suitable text corpus which allows us to produce a practically unlimited amount of them. That opens the task to notoriously data-hungry deep-learning techniques which now seem to outperform all alternative approaches.", "id": 594, "question": "How does their ensemble method work?", "title": "Embracing data abundance: BookTest Dataset for Reading Comprehension"}, {"answers": ["", "Answer with content missing: (Table 2) Accuracy of best AS reader results including ensembles are 78.4 and 83.7 when trained on BookTest compared to 71.0 and 68.9 when trained on CBT for Named endity and Common noun respectively."], "context": "A natural way of testing a reader's comprehension of a text is to ask her a question the answer to which can be deduced from the text. Hence the task we are trying to solve consists of answering a cloze-style question, the answer to which depends on the understanding of a context document provided with the question. The model is also provided with a set of possible answers from which the correct one is to be selected. This can be formalized as follows:", "id": 595, "question": "How large are the improvements of the Attention-Sum Reader model when using the BookTest dataset?", "title": "Embracing data abundance: BookTest Dataset for Reading Comprehension"}, {"answers": ["", ""], "context": "We will now briefly review what datasets for text comprehension have been published up to date and look at models which have been recently applied to solving the task we have just described.", "id": 596, "question": "How do they show there is space for further improvement?", "title": "Embracing data abundance: BookTest Dataset for Reading Comprehension"}, {"answers": ["", ""], "context": "The art of argumentation has been studied since the early work of Aristotle, dating back to the 4th century BC BIBREF0 . It has been exhaustively examined from different perspectives, such as philosophy, psychology, communication studies, cognitive science, formal and informal logic, linguistics, computer science, educational research, and many others. In a recent and critically well-acclaimed study, Mercier.Sperber.2011 even claim that argumentation is what drives humans to perform reasoning. From the pragmatic perspective, argumentation can be seen as a verbal activity oriented towards the realization of a goal BIBREF1 or more in detail as a verbal, social, and rational activity aimed at convincing a reasonable critic of the acceptability of a standpoint by putting forward a constellation of one or more propositions to justify this standpoint BIBREF2 .", "id": 597, "question": "Do they report results only on English data?", "title": "Argumentation Mining in User-Generated Web Discourse"}, {"answers": ["", ""], "context": "We create a new corpus which is, to the best of our knowledge, the largest corpus that has been annotated within the argumentation mining field to date. We choose several target domains from educational controversies, such as homeschooling, single-sex education, or mainstreaming. 
A novel aspect of the corpus is its coverage of different registers of user-generated Web content, such as comments to articles, discussion forum posts, blog posts, as well as professional newswire articles.", "id": 598, "question": "What argument components do the ML methods aim to identify?", "title": "Argumentation Mining in User-Generated Web Discourse"}, {"answers": ["Structural Support Vector Machine", ""], "context": "Let us first present some definitions of the term argumentation itself. [p. 3]Ketcham.1917 defines argumentation as \u201cthe art of persuading others to think or act in a definite way. It includes all writing and speaking which is persuasive in form.\u201d According to MacEwan.1898, \u201cargumentation is the process of proving or disproving a proposition. Its purpose is to induce a new belief, to establish truth or combat error in the mind of another.\u201d [p. 2]Freeley.Steinberg.2008 narrow the scope of argumentation to \u201creason giving in communicative situations by people whose purpose is the justification of acts, beliefs, attitudes, and values.\u201d Although these definitions vary, the purpose of argumentation remains the same \u2013 to persuade others.", "id": 599, "question": "Which machine learning methods are used in experiments?", "title": "Argumentation Mining in User-Generated Web Discourse"}, {"answers": ["", ""], "context": "Despite the missing consensus on the ultimate argumentation theory, various argumentation models have been proposed that capture argumentation on different levels. Argumentation models abstract from the language level to a concept level that stresses the links between the different components of an argument or how arguments relate to each other BIBREF26 . Bentahar.et.al.2010 propose a taxonomy of argumentation models, that is horizontally divided into three categories \u2013 micro-level models, macro-level models, and rhetorical models.", "id": 600, "question": "How is the data in the new corpus come sourced?", "title": "Argumentation Mining in User-Generated Web Discourse"}, {"answers": [""], "context": "The above-mentioned models focus basically only on one dimension of the argument, namely the logos dimension. According to the classical Aristotle's theory BIBREF0 , argument can exist in three dimensions, which are logos, pathos, and ethos. Logos dimension represents a proof by reason, an attempt to persuade by establishing a logical argument. For example, syllogism belongs to this argumentation dimension BIBREF34 , BIBREF25 . Pathos dimension makes use of appealing to emotions of the receiver and impacts its cognition BIBREF35 . Ethos dimension of argument relies on the credibility of the arguer. This distinction will have practical impact later in section SECREF51 which deals with argumentation on the Web.", "id": 601, "question": "What argumentation phenomena encounter in actual data are now accounted for by this work?", "title": "Argumentation Mining in User-Generated Web Discourse"}, {"answers": [""], "context": "We conclude the theoretical section by presenting one (micro-level) argumentation model in detail \u2013 a widely used conceptual model of argumentation introduced by Toulmin.1958, which we will henceforth denote as the Toulmin's original model. This model will play an important role later in the annotation studies (section SECREF51 ) and experimental work (section SECREF108 ). 
The model consists of six parts, referred as argument components, where each component plays a distinct role.", "id": 602, "question": "What challenges do different registers and domains pose to this task?", "title": "Argumentation Mining in User-Generated Web Discourse"}, {"answers": ["", ""], "context": "Nowadays deep learning techniques outperform the other conventional methods in most of the speech-related tasks. Training robust deep neural networks for each task depends on the availability of powerful processing GPUs, as well as standard and large scale datasets. In text-independent speaker verification, large-scale datasets are available, thanks to the NIST SRE evaluations and other data collection projects such as VoxCeleb BIBREF0.", "id": 603, "question": "who transcribed the corpus?", "title": "A Multi Purpose and Large Scale Speech Corpus in Persian and English for Speaker and Speech Recognition: The Deepmine Database"}, {"answers": ["The speech was collected from respondents using an android application.", ""], "context": "DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4.", "id": 604, "question": "how was the speech collected?", "title": "A Multi Purpose and Large Scale Speech Corpus in Persian and English for Speaker and Speech Recognition: The Deepmine Database"}, {"answers": ["", ""], "context": "In order to clean-up the database, the main post-processing step was to filter out problematic utterances. Possible problems include speaker word insertions (e.g. repeating some part of a phrase), deletions, substitutions, and involuntary disfluencies. To detect these, we implemented an alignment stage, similar to the second alignment stage in the LibriSpeech project BIBREF5. In this method, a custom decoding graph was generated for each phrase. The decoding graph allows for word skipping and word insertion in the phrase.", "id": 605, "question": "what accents are present in the corpus?", "title": "A Multi Purpose and Large Scale Speech Corpus in Persian and English for Speaker and Speech Recognition: The Deepmine Database"}, {"answers": [""], "context": "After processing the database and removing problematic respondents and utterances, 1969 respondents remained in the database, with 1149 of them being male and 820 female. 297 of the respondents could not read English and have therefore read only the Persian prompts. About 13200 sessions were recorded by females and similarly, about 9500 sessions by males, i.e. women are over-represented in terms of sessions, even though their number is 17% smaller than that of males. Other useful statistics related to the database are shown in Table TABREF4.", "id": 606, "question": "what evaluation protocols are provided?", "title": "A Multi Purpose and Large Scale Speech Corpus in Persian and English for Speaker and Speech Recognition: The Deepmine Database"}, {"answers": ["", ""], "context": "The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. 
The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots). This part can also serve for Persian ASR training. Each part is described in more details below. Table TABREF11 shows the number of unique phrases in each part of the database. For the English text-dependent part, the following phrases were selected from part1 of the RedDots database, hence the RedDots can be used as an additional training set for this part:", "id": 607, "question": "what age range is in the data?", "title": "A Multi Purpose and Large Scale Speech Corpus in Persian and English for Speaker and Speech Recognition: The Deepmine Database"}, {"answers": [""], "context": "This part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded.", "id": 608, "question": "what is the source of the data?", "title": "A Multi Purpose and Large Scale Speech Corpus in Persian and English for Speaker and Speech Recognition: The Deepmine Database"}, {"answers": ["Demographics Age, DiagnosisHistory, MedicationHistory, ProcedureHistory, Symptoms/Signs, Vitals/Labs, Procedures/Results, Meds/Treatments, Movement, Other.", "Demographics, Diagnosis History, Medication History, Procedure History, Symptoms, Labs, Procedures, Treatments, Hospital movements, and others"], "context": "Summarization of patient information is essential to the practice of medicine. Clinicians must synthesize information from diverse data sources to communicate with colleagues and provide coordinated care. Examples of clinical summarization are abundant in practice; patient handoff summaries facilitate provider shift change, progress notes provide a daily status update for a patient, oral case presentations enable transfer of information from overnight admission to the care team and attending, and discharge summaries provide information about a patient's hospital visit to their primary care physician and other outpatient providers BIBREF0 .", "id": 609, "question": "what topics did they label?", "title": "Extractive Summarization of EHR Discharge Notes"}, {"answers": [""], "context": "In the broader field of summarization, automization was meant to standardize output while also saving time and effort. Pioneering strategies in summarization started by extracting \"significant\" sentences in the whole corpus to build an abstract where \"significant\" sentences were defined by the number of frequently occurring words BIBREF6 . These initial methods did not consider word meaning or syntax at either the sentence or paragraph level, which made them crude at best. More advanced extractive heuristics like topic modeling BIBREF7 , cue word dictionary approaches BIBREF8 , and title methods BIBREF9 for scoring content in a sentence followed soon after. For example, topic modeling extends initial frequency methods by assigning topics scores by frequency of topic signatures, clustering sentences with similar topics, and finally extracting the centroid sentence, which is considered the most representative sentence BIBREF10 . 
Recently, abstractive summarization approaches using sequence-to-sequence methods have been developed to generate new text that synthesizes original text BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 ; however, the field of abstractive summarization is quite young.", "id": 610, "question": "did they compare with other extractive summarization methods?", "title": "Extractive Summarization of EHR Discharge Notes"}, {"answers": ["", ""], "context": "MIMIC-III is a freely available, deidentified database containing electronic health records of patients admitted to an Intensive Care Unit (ICU) at Beth Israel Deaconess Medical Center between 2001 and 2012. The database contains all of the notes associated with each patient's time spent in the ICU as well as 55,177 discharge reports and 4,475 discharge addendums for 41,127 distinct patients. Only the original discharge reports were included in our analyses. Each discharge summary was divided into sections (Date of Birth, Sex, Chief Complaint, Major Surgical or Invasive Procedure, History of Present Illness, etc.) using a regular expression.", "id": 611, "question": "what datasets were used?", "title": "Extractive Summarization of EHR Discharge Notes"}, {"answers": ["", "Level 1, Level 2 and Level 3."], "context": " This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/", "id": 612, "question": "what levels of document preprocessing are looked at?", "title": "How Document Pre-processing affects Keyphrase Extraction Performance"}, {"answers": ["Answer with content missing: (LVL1, LVL2, LVL3) \n- Stanford CoreNLP\n- Optical Character Recognition (OCR) system, ParsCIT \n- further abridge the input text from level 2 preprocessed documents to the following: title, headers, abstract, introduction, related work, background and conclusion."], "context": "The SemEval-2010 benchmark dataset BIBREF0 is composed of 244 scientific articles collected from the ACM Digital Library (conference and workshop papers). The input papers ranged from 6 to 8 pages and were converted from PDF format to plain text using an off-the-shelf tool. The only preprocessing applied is a systematic dehyphenation at line breaks and removal of author-assigned keyphrases. Scientific articles were selected from four different research areas as defined in the ACM classification, and were equally distributed into training (144 articles) and test (100 articles) sets. Gold standard keyphrases are composed of both author-assigned keyphrases collected from the original PDF files and reader-assigned keyphrases provided by student annotators.", "id": 613, "question": "what keyphrase extraction models were reassessed?", "title": "How Document Pre-processing affects Keyphrase Extraction Performance"}, {"answers": ["", ""], "context": "We re-implemented five keyphrase extraction models : the first two are commonly used as baselines, the third is a resource-lean unsupervised graph-based ranking approach, and the last two were among the top performing systems in the SemEval-2010 keyphrase extraction task BIBREF0 . We note that two of the systems are supervised and rely on the training set to build their classification models. Document frequency counts are also computed on the training set. Stemming is applied to allow more robust matching. 
The different keyphrase extraction models are briefly described below:", "id": 614, "question": "how many articles are in the dataset?", "title": "How Document Pre-processing affects Keyphrase Extraction Performance"}, {"answers": ["", ""], "context": "With the widespread adoption of electronic health records (EHRs), medical data are being generated and stored digitally in vast quantities BIBREF0. While much EHR data are structured and amenable to analysis, there appears to be limited homogeneity in data completeness and quality BIBREF1, and it is estimated that the majority of healthcare data are being generated in unstructured, text-based format BIBREF2. The generation and storage of these unstructured data come concurrently with policy initiatives that seek to utilize preventative measures to reduce hospital admission and readmission BIBREF3.", "id": 615, "question": "Is this dataset publicly available for commercial use?", "title": "A Corpus for Detecting High-Context Medical Conditions in Intensive Care Patient Notes Focusing on Frequently Readmitted Patients"}, {"answers": ["", "Thirteen different phenotypes are present in the dataset."], "context": "We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12", "id": 616, "question": "How many different phenotypes are present in the dataset?", "title": "A Corpus for Detecting High-Context Medical Conditions in Intensive Care Patient Notes Focusing on Frequently Readmitted Patients"}, {"answers": ["Adv. Heart Disease, Adv. Lung Disease, Alcohol Abuse, Chronic Neurologic Dystrophies, Dementia, Depression, Developmental Delay, Obesity, Psychiatric disorders and Substance Abuse"], "context": "Clinical researchers teamed with junior medical residents in collaboration with more senior intensive care physicians to carry out text annotation over the period of one year BIBREF13. Operators were grouped to facilitate the annotation of notes in duplicate, allowing for cases of disagreement between operators. The operators within each team were instructed to work independently on note annotation. Clinical texts were annotated in batches which were time-stamped on their day of creation, when both operators in a team completed annotation of a batch, a new batch was created and transferred to them.", "id": 617, "question": "What are 10 other phenotypes that are annotated?", "title": "A Corpus for Detecting High-Context Medical Conditions in Intensive Care Patient Notes Focusing on Frequently Readmitted Patients"}, {"answers": ["", ""], "context": "Sarcasm is defined as \u201ca sharp, bitter, or cutting expression or remark; a bitter gibe or taunt\u201d. As the fields of affective computing and sentiment analysis have gained increasing popularity BIBREF0 , it is a major concern to detect sarcastic, ironic, and metaphoric expressions. Sarcasm, especially, is key for sentiment analysis as it can completely flip the polarity of opinions. Understanding the ground truth, or the facts about a given event, allows for the detection of contradiction between the objective polarity of the event (usually negative) and its sarcastic characteristic by the author (usually positive), as in \u201cI love the pain of breakup\u201d. 
Obtaining such knowledge is, however, very difficult.", "id": 618, "question": "What are the state of the art models?", "title": "A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks"}, {"answers": ["", ""], "context": "NLP research is gradually evolving from lexical to compositional semantics BIBREF10 through the adoption of novel meaning-preserving and context-aware paradigms such as convolutional networks BIBREF11 , recurrent belief networks BIBREF12 , statistical learning theory BIBREF13 , convolutional multiple kernel learning BIBREF14 , and commonsense reasoning BIBREF15 . But while other NLP tasks have been extensively investigated, sarcasm detection is a relatively new research topic which has gained increasing interest only recently, partly thanks to the rise of social media analytics and sentiment analysis. Sentiment analysis BIBREF16 and using multimodal information as a new trend BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF14 is a popular branch of NLP research that aims to understand sentiment of documents automatically using combination of various machine learning approaches BIBREF21 , BIBREF22 , BIBREF20 , BIBREF23 .", "id": 619, "question": "Which benchmark datasets are used?", "title": "A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks"}, {"answers": [" The features extracted from CNN."], "context": "Sarcasm detection is an important subtask of sentiment analysis BIBREF27 . Since sarcastic sentences are subjective, they carry sentiment and emotion-bearing information. Most of the studies in the literature BIBREF28 , BIBREF29 , BIBREF9 , BIBREF30 include sentiment features in sarcasm detection with the use of a state-of-the-art sentiment lexicon. Below, we explain how sentiment information is key to express sarcastic opinions and the approach we undertake to exploit such information for sarcasm detection.", "id": 620, "question": "What are the network's baseline features?", "title": "A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks"}, {"answers": ["four machine translation tasks: German -> English, Japanese -> English, Romanian -> English, English -> German", ""], "context": "The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9.", "id": 621, "question": "What tasks are used for evaluation?", "title": "Adaptively Sparse Transformers"}, {"answers": ["On the datasets DE-EN, JA-EN, RO-EN, and EN-DE, the baseline achieves 29.79, 21.57, 32.70, and 26.02 BLEU score, respectively. The 1.5-entmax achieves 29.83, 22.13, 33.10, and 25.89 BLEU score, which is a difference of +0.04, +0.56, +0.40, and -0.13 BLEU score versus the baseline. 
The \u03b1-entmax achieves 29.90, 21.74, 32.89, and 26.93 BLEU score, which is a difference of +0.11, +0.17, +0.19, +0.91 BLEU score versus the baseline."], "context": "In NMT, the Transformer BIBREF0 is a sequence-to-sequence (seq2seq) model which maps an input sequence to an output sequence through hierarchical multi-head attention mechanisms, yielding a dynamic, context-dependent strategy for propagating information within and across sentences. It contrasts with previous seq2seq models, which usually rely either on costly gated recurrent operations BIBREF15, BIBREF16 or static convolutions BIBREF17.", "id": 622, "question": "HOw does the method perform compared with baselines?", "title": "Adaptively Sparse Transformers"}, {"answers": ["", ""], "context": "The softmax mapping (Equation DISPLAY_FORM8) is elementwise proportional to $\\exp $, therefore it can never assign a weight of exactly zero. Thus, unnecessary items are still taken into consideration to some extent. Since its output sums to one, this invariably means less weight is assigned to the relevant items, potentially harming performance and interpretability BIBREF18. This has motivated a line of research on learning networks with sparse mappings BIBREF19, BIBREF20, BIBREF21, BIBREF22. We focus on a recently-introduced flexible family of transformations, $\\alpha $-entmax BIBREF23, BIBREF14, defined as:", "id": 623, "question": "How does their model improve interpretability compared to softmax transformers?", "title": "Adaptively Sparse Transformers"}, {"answers": ["using word2vec to create features that are used as input to the SVM", ""], "context": "Sentiment analysis has recently been one of the hottest topics in natural language processing (NLP). It is used to identify and categorise opinions expressed by reviewers on a topic or an entity. Sentiment analysis can be leveraged in marketing, social media analysis, and customer service. Although many studies have been conducted for sentiment analysis in widely spoken languages, this topic is still immature for Turkish and many other languages.", "id": 624, "question": "What baseline method is used?", "title": "Generating Word and Document Embeddings for Sentiment Analysis"}, {"answers": ["", "one of the Twitter datasets is about Turkish mobile network operators, there are positive, neutral and negative labels and provide the total amount plus the distribution of labels"], "context": "In the literature, the main consensus is that the use of dense word embeddings outperform the sparse embeddings in many tasks. Latent semantic analysis (LSA) used to be the most popular method in generating word embeddings before the invention of the word2vec and other word vector algorithms which are mostly created by shallow neural network models. Although many studies have been employed on generating word vectors including both semantic and sentimental components, generating and analysing the effects of different types of embeddings on different tasks is an emerging field for Turkish.", "id": 625, "question": "What details are given about the Twitter dataset?", "title": "Generating Word and Document Embeddings for Sentiment Analysis"}, {"answers": ["there are 20,244 reviews divided into positive and negative with an average 39 words per review, each one having a star-rating score", ""], "context": "We generate several word vectors, which capture the sentimental, lexical, and contextual characteristics of words. 
In addition to these mostly original vectors, we also create word2vec embeddings to represent the corpus words by training the embedding model on these datasets. After generating these, we combine them with hand-crafted features to create document vectors and perform classification, as will be explained in Section 3.5.", "id": 626, "question": "What details are given about the movie domain dataset?", "title": "Generating Word and Document Embeddings for Sentiment Analysis"}, {"answers": ["", ""], "context": "Contextual information is informative in the sense that, in general, similar words tend to appear in the same contexts. For example, the word smart is more likely to cooccur with the word hardworking than with the word lazy. This similarity can be defined semantically and sentimentally. In the corpus-based approach, we capture both of these characteristics and generate word embeddings specific to a domain.", "id": 627, "question": "Which hand-crafted features are combined with word2vec?", "title": "Generating Word and Document Embeddings for Sentiment Analysis"}, {"answers": [""], "context": "In Turkish, there do not exist well-established sentiment lexicons as in English. In this approach, we made use of the TDK (T\u00fcrk Dil Kurumu - \u201cTurkish Language Institution\u201d) dictionary to obtain word polarities. Although it is not a sentiment lexicon, combining it with domain-specific polarity scores obtained from the corpus led us to have state-of-the-art results.", "id": 628, "question": "What word-based and dictionary-based feature are used?", "title": "Generating Word and Document Embeddings for Sentiment Analysis"}, {"answers": [""], "context": "Our last component is a simple metric that uses four supervised scores for each word in the corpus. We extract these scores as follows. For a target word in the corpus, we scan through all of its contexts. In addition to the target word's polarity score (the self score), out of all the polarity scores of words occurring in the same contexts as the target word, minimum, maximum, and average scores are taken into consideration. The word polarity scores are computed using (DISPLAY_FORM4). Here, we obtain those scores from the training data.", "id": 629, "question": "How are the supervised scores of the words calculated?", "title": "Generating Word and Document Embeddings for Sentiment Analysis"}, {"answers": [""], "context": "Twitter, a micro-blogging and social networking site has emerged as a platform where people express themselves and react to events in real-time. It is estimated that nearly 500 million tweets are sent per day . Twitter data is particularly interesting because of its peculiar nature where people convey messages in short sentences using hashtags, emoticons, emojis etc. In addition, each tweet has meta data like location and language used by the sender. It's challenging to analyze this data because the tweets might not be grammatically correct and the users tend to use informal and slang words all the time. Hence, this poses an interesting problem for NLP researchers. Any advances in using this abundant and diverse data can help understand and analyze information about a person, an event, a product, an organization or a country as a whole. 
Many notable use cases of the twitter can be found here.", "id": 630, "question": "what dataset was used?", "title": "Seernet at EmoInt-2017: Tweet Emotion Intensity Estimator"}, {"answers": ["", ""], "context": "The preprocessing step modifies the raw tweets before they are passed to feature extraction. Tweets are processed using tweetokenize tool. Twitter specific features are replaced as follows: username handles to USERNAME, phone numbers to PHONENUMBER, numbers to NUMBER, URLs to URL and times to TIME. A continuous sequence of emojis is broken into individual tokens. Finally, all tokens are converted to lowercase.", "id": 631, "question": "how many total combined features were there?", "title": "Seernet at EmoInt-2017: Tweet Emotion Intensity Estimator"}, {"answers": ["Pretrained word embeddings were not used", ""], "context": "Many tasks related to sentiment or emotion analysis depend upon affect, opinion, sentiment, sense and emotion lexicons. These lexicons associate words to corresponding sentiment or emotion metrics. On the other hand, the semantic meaning of words, sentences, and documents are preserved and compactly represented using low dimensional vectors BIBREF1 instead of one hot encoding vectors which are sparse and high dimensional. Finally, there are traditional NLP features like word N-grams, character N-grams, Part-Of-Speech N-grams and word clusters which are known to perform well on various tasks.", "id": 632, "question": "what pretrained word embeddings were used?", "title": "Seernet at EmoInt-2017: Tweet Emotion Intensity Estimator"}, {"answers": ["precision, recall, F1 and accuracy", "Response time, resource consumption (memory, CPU, network bandwidth), precision, recall, F1, accuracy."], "context": "Back to 42 BC, the philosopher Cicero has raised the issue that although there were many Oratory classes, there were none for Conversational skills BIBREF0 . He highlighted how important they were not only for politics, but also for educational purpose. Among other conversational norms, he claimed that people should be able to know when to talk in a conversation, what to talk depending on the subject of the conversation, and that they should not talk about themselves.", "id": 633, "question": "What evaluation metrics did look at?", "title": "A Hybrid Architecture for Multi-Party Conversational Systems"}, {"answers": ["Custom dataset with user questions; set of documents, twitter posts and news articles, all related to finance.", "a self-collected financial intents dataset in Portuguese"], "context": "There are plenty of challenges in conversation contexts, and even bigger ones when people and machines participate in those contexts. Conversation is a specialized form of interaction, which follows social conventions. Social interaction makes it possible to inform, context, create, ratify, refute, and ascribe, among other things, power, class, gender, ethnicity, and culture BIBREF2 . Social structures are the norms that emerge from the contact people have with others BIBREF7 , for example, the communicative norms of a negotiation, taking turns in a group, the cultural identity of a person, or power relationships in a work context.", "id": 634, "question": "What datasets are used?", "title": "A Hybrid Architecture for Multi-Party Conversational Systems"}, {"answers": [""], "context": "In this section we discuss the state of the art on conversational systems in three perspectives: types of interactions, types of architecture, and types of context reasoning. 
Then we present a table that consolidates and compares all of them.", "id": 635, "question": "What is the state of the art described in the paper?", "title": "A Hybrid Architecture for Multi-Party Conversational Systems"}, {"answers": ["", ""], "context": "Natural text generation, as a key task in NLP, has been advanced substantially thanks to the flourish of neural models BIBREF0 , BIBREF1 . Typical frameworks such as sequence-to-sequence (seq2seq) have been applied to various generation tasks, including machine translation BIBREF2 and dialogue generation BIBREF3 . The standard paradigm to train such neural models is maximum likelihood estimation (MLE), which maximizes the log-likelihood of observing each word in the text given the ground-truth proceeding context BIBREF4 .", "id": 636, "question": "What GAN models were used as baselines to compare against?", "title": "ARAML: A Stable Adversarial Training Framework for Text Generation"}, {"answers": ["ARAM has achieved improvement over all baseline methods using reverese perplexity and slef-BLEU metric. The maximum reverse perplexity improvement 936,16 is gained for EMNLP2017 WMT dataset and 48,44 for COCO dataset.", "Compared to the baselines, ARAML does not do better in terms of perplexity on COCO and EMNLP 2017 WMT datasets, but it does by up to 0.27 Self-BLEU points on COCO and 0.35 Self-BLEU on EMNLP 2017 WMT. In terms of Grammaticality and Relevance, it scores better than the baselines on up to 75.5% and 73% of the cases respectively."], "context": "Recently, text generation has been widely studied with neural models trained with maximum likelihood estimation BIBREF4 . However, MLE tends to generate universal text BIBREF18 . Various methods have been proposed to enhance the generation quality by refining the objective function BIBREF18 , BIBREF19 or modifying the generation distribution with external information like topic BIBREF20 , sentence type BIBREF21 , emotion BIBREF22 and knowledge BIBREF23 .", "id": 637, "question": "How much improvement is gained from Adversarial Reward Augmented Maximum Likelihood (ARAML)?", "title": "ARAML: A Stable Adversarial Training Framework for Text Generation"}, {"answers": [""], "context": "Text generation can be formulated as follows: given the real data distribution INLINEFORM0 , the task is to train a generative model INLINEFORM1 where INLINEFORM2 can fit INLINEFORM3 well. In this formulation, INLINEFORM4 and INLINEFORM5 denotes a word in the vocabulary INLINEFORM6 .", "id": 638, "question": "Is the discriminator's reward made available at each step to the generator?", "title": "ARAML: A Stable Adversarial Training Framework for Text Generation"}, {"answers": [""], "context": "Natural languages evolve and words have always been subject to semantic change over time BIBREF1. With the rise of large digitized text resources recent NLP technologies have made it possible to capture such change with vector space models BIBREF2, BIBREF3, BIBREF4, BIBREF5, topic models BIBREF6, BIBREF7, BIBREF8, and sense clustering models BIBREF9. However, many approaches for detecting LSC differ profoundly from each other and therefore drawing comparisons between them can be challenging BIBREF10. Not only do architectures for detecting LSC vary, their performance is also often evaluated without access to evaluation data or too sparse data sets. 
In cases where evaluation data is available, oftentimes LSCD systems are not evaluated on the same data set which hinders the research community to draw comparisons.", "id": 639, "question": "What is the algorithm used to create word embeddings?", "title": "Shared Task: Lexical Semantic Change Detection in German"}, {"answers": ["", ""], "context": "The goal of the shared task was to create an architecture to detect semantic change and to rank words according to their degree of change between two different time periods. Given two corpora Ca and Cb, the target words had to be ranked according to their degree of lexical semantic change between Ca and Cb as annotated by human judges. A competition was set up on Codalab and teams mostly consisting of 2 people were formed to take part in the task. There was one group consisting of 3 team members and two individuals who entered the task on their own. In total there were 12 LSCD systems participating in the shared task.", "id": 640, "question": "What is the corpus used for the task?", "title": "Shared Task: Lexical Semantic Change Detection in German"}, {"answers": ["", ""], "context": "The task, as framed above, requires to detect the semantic change between two corpora. The two corpora used in the shared task correspond to the diachronic corpus pair from BIBREF0: DTA18 and DTA19. They consist of subparts of DTA corpus BIBREF11 which is a freely available lemmatized, POS-tagged and spelling-normalized diachronic corpus of German containing texts from the 16th to the 20th century. DTA18 contains 26 million sentences published between 1750-1799 and DTA19 40 million between 1850-1899. The corpus version used in the task has the following format: \"year [tab] lemma1 lemma2 lemma3 ...\".", "id": 641, "question": "How is evaluation performed?", "title": "Shared Task: Lexical Semantic Change Detection in German"}, {"answers": ["", ""], "context": "How humans process language has become increasingly relevant in natural language processing since physiological data during language understanding is more accessible and recorded with less effort. In this work, we focus on eye-tracking and electroencephalography (EEG) recordings to capture the reading process. On one hand, eye movement data provides millisecond-accurate records about where humans look when they are reading, and is highly correlated with the cognitive load associated with different stages of text processing. On the other hand, EEG records electrical brain activity across the scalp and is a direct measure of physiological processes, including language processing. The combination of both measurement methods enables us to study the language understanding process in a more natural setting, where participants read full sentences at a time, in their own speed. Eye-tracking then permits us to define exact word boundaries in the timeline of a subject reading a sentence, allowing the extraction of brain activity signals for each word.", "id": 642, "question": "What is a normal reading paradigm?", "title": "ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation"}, {"answers": [""], "context": "Some eye-tracking corpora of natural reading (e.g. the Dundee BIBREF2, Provo BIBREF3 and GECO corpus BIBREF4), and a few EEG corpora (for example, the UCL corpus BIBREF5) are available. It has been shown that this type of cognitive processing data is useful for improving and evaluating NLP methods (e.g. barrett2018sequence,hollenstein2019cognival, hale2018finding). 
However, before the Zurich Cognitive Language Processing Corpus (ZuCo 1.0), there was no available data for simultaneous eye-tracking and EEG recordings of natural reading. dimigen2011coregistration studied the linguistic effects of eye movements and EEG co-registration in natural reading and showed that they accurately represent lexical processing. Moreover, the simultaneous recordings are crucial to extract word-level brain activity signals.", "id": 643, "question": "Did they experiment with this new dataset?", "title": "ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation"}, {"answers": ["", ""], "context": "In previous work, we recorded a first dataset of simultaneous eye-tracking and EEG during natural reading BIBREF1. ZuCo 1.0 consists of three reading tasks, two of which contain very similar reading material and experiments as presented in the current work. However, the main difference and reason for recording ZuCo 2.0, consists in the experiment procedure. For ZuCo 1.0 the normal reading and task-specific reading paradigms were recorded in different sessions on different days. Therefore, the recorded data is not appropriate as a means of comparison between natural reading and annotation, since the differences in the brain activity data might result mostly from the different sessions due to the sensitivity of EEG. This, and extending the dataset with more sentences and more subjects, were the main factors for recording the current corpus. We purposefully maintained an overlap of some sentences between both datasets to allow additional analyses (details are described in Section SECREF7).", "id": 644, "question": "What kind of sentences were read?", "title": "ZuCo 2.0: A Dataset of Physiological Recordings During Natural Reading and Annotation"}, {"answers": ["They use a slightly modified copy of the target to create the pseudo-text instead of full BT to make their technique cheaper", "They do not require the availability of a backward translation engine."], "context": "The new generation of Neural Machine Translation (NMT) systems is known to be extremely data hungry BIBREF0 . Yet, most existing NMT training pipelines fail to fully take advantage of the very large volume of monolingual source and/or parallel data that is often available. Making a better use of data is particularly critical in domain adaptation scenarios, where parallel adaptation data is usually assumed to be small in comparison to out-of-domain parallel data, or to in-domain monolingual texts. This situation sharply contrasts with the previous generation of statistical MT engines BIBREF1 , which could seamlessly integrate very large amounts of non-parallel documents, usually with a large positive effect on translation quality.", "id": 645, "question": "why are their techniques cheaper to implement?", "title": "Using Monolingual Data in Neural Machine Translation: a Systematic Study"}, {"answers": ["", ""], "context": "We are mostly interested with the following training scenario: a large out-of-domain parallel corpus, and limited monolingual in-domain data. We focus here on the Europarl domain, for which we have ample data in several languages, and use as in-domain training data the Europarl corpus BIBREF5 for two translation directions: English INLINEFORM0 German and English INLINEFORM1 French. As we study the benefits of monolingual data, most of our experiments only use the target side of this corpus. 
The rationale for choosing this domain is to (i) to perform large scale comparisons of synthetic and natural parallel corpora; (ii) to study the effect of BT in a well-defined domain-adaptation scenario. For both language pairs, we use the Europarl tests from 2007 and 2008 for evaluation purposes, keeping test 2006 for development. When measuring out-of-domain performance, we will use the WMT newstest 2014.", "id": 646, "question": "what data simulation techniques were introduced?", "title": "Using Monolingual Data in Neural Machine Translation: a Systematic Study"}, {"answers": [""], "context": "Our baseline NMT system implements the attentional encoder-decoder approach BIBREF6 , BIBREF7 as implemented in Nematus BIBREF8 on 4 million out-of-domain parallel sentences. For French we use samples from News-Commentary-11 and Wikipedia from WMT 2014 shared translation task, as well as the Multi-UN BIBREF9 and EU-Bookshop BIBREF10 corpora. For German, we use samples from News-Commentary-11, Rapid, Common-Crawl (WMT 2017) and Multi-UN (see table TABREF5 ). Bilingual BPE units BIBREF11 are learned with 50k merge operations, yielding vocabularies of about respectively 32k and 36k for English INLINEFORM0 French and 32k and 44k for English INLINEFORM1 German.", "id": 647, "question": "what is their explanation for the effectiveness of back-translation?", "title": "Using Monolingual Data in Neural Machine Translation: a Systematic Study"}, {"answers": ["", "Europarl tests from 2006, 2007, 2008; WMT newstest 2014."], "context": "A simple way to use monolingual data in MT is to turn it into synthetic parallel data and let the training procedure run as usual BIBREF16 . In this section, we explore various ways to implement this strategy. We first reproduce results of BIBREF2 with BT of various qualities, that we then analyze thoroughly.", "id": 648, "question": "what dataset is used?", "title": "Using Monolingual Data in Neural Machine Translation: a Systematic Study"}, {"answers": ["English-German, English-French.", "English-German, English-French"], "context": "BT requires the availability of an MT system in the reverse translation direction. We consider here three MT systems of increasing quality:", "id": 649, "question": "what language pairs are explored?", "title": "Using Monolingual Data in Neural Machine Translation: a Systematic Study"}, {"answers": [""], "context": "Comparing the natural and artificial sources of our parallel data wrt. several linguistic and distributional properties, we observe that (see Fig. FIGREF21 - FIGREF22 ):", "id": 650, "question": "what language is the data in?", "title": "Using Monolingual Data in Neural Machine Translation: a Systematic Study"}, {"answers": ["", ""], "context": "From a group of small users at the time of its inception in 2009, Quora has evolved in the last few years into one of the largest community driven Q&A sites with diverse user communities. With the help of efficient content moderation/review policies and active in-house review team, efficient Quora bots, this site has emerged into one of the largest and reliable sources of Q&A on the Internet. On Quora, users can post questions, follow questions, share questions, tag them with relevant topics, follow topics, follow users apart from answering, commenting, upvoting/downvoting etc. The integrated social structure at the backbone of it and the topical organization of its rich content have made Quora unique with respect to other Q&A sites like Stack Overflow, Yahoo! Answers etc. 
and these are some of the prime reasons behind its popularity in recent times. Quality question posting and getting them answered are the key objectives of any Q&A site. In this study we focus on the answerability of questions on Quora, i.e., whether a posted question shall eventually get answered. In Quora, the questions with no answers are referred to as \u201copen questions\u201d. These open questions need to be studied separately to understand the reason behind their not being answered or to be precise, are there any characteristic differences between `open' questions and the answered ones. For example, the question \u201cWhat are the most promising advances in the treatment of traumatic brain injuries?\u201d was posted on Quora on 23rd June, 2011 and got its first answer after almost 2 years on 22nd April, 2013. The reason that this question remained open so long might be the hardness of answering it and the lack of visibility and experts in the domain. Therefore, it is important to identify the open questions and take measures based on the types - poor quality questions can be removed from Quora and the good quality questions can be promoted so that they get more visibility and are eventually routed to topical experts for better answers.", "id": 651, "question": "Does the experiments focus on a specific domain?", "title": "Language Use Matters: Analysis of the Linguistic Structure of Question Texts Can Characterize Answerability in Quora"}, {"answers": ["", ""], "context": "We obtained our Quora dataset BIBREF7 through web-based crawls between June 2014 to August 2014. This crawling exercise has resulted in the accumulation of a massive Q&A dataset spanning over a period of over four years starting from January 2010 to May 2014. We initiated crawling with 100 questions randomly selected from different topics so that different genre of questions can be covered. The crawling of the questions follow a BFS pattern through the related question links. We obtained 822,040 unique questions across 80,253 different topics with a total of 1,833,125 answers to these questions. For each question, we separately crawl their revision logs that contain different types of edit information for the question and the activity log of the question asker.", "id": 652, "question": "how many training samples do you have for training?", "title": "Language Use Matters: Analysis of the Linguistic Structure of Question Texts Can Characterize Answerability in Quora"}, {"answers": [""], "context": "In this section, we identify various linguistic activities on Quora and propose quantifications of the language usage patterns in this Q&A site. In particular, we show that there exists significant differences in the linguistic structure of the open and the answered questions. Note that most of the measures that we define are simple, intuitive and can be easily obtained automatically from the data (without manual intervention). Therefore the framework is practical, inexpensive and highly scalable.", "id": 653, "question": "Do the answered questions measure for the usefulness of the answer?", "title": "Language Use Matters: Analysis of the Linguistic Structure of Question Texts Can Characterize Answerability in Quora"}, {"answers": ["", ""], "context": "User profiles on social media platforms serve as a virtual introduction of the users. People often maintain their online profile space to reflect their likes and values. Further, how users maintain their profile helps them develop relationship with coveted audience BIBREF0. 
A user profile on Twitter is composed of several attributes with some of the most prominent ones being the profile name, screen name, profile image, location, description, followers count, and friend count. While the screen name, display name, profile image, and description identify the user, the follower and friend counts represent the user's social connectivity. Profile changes might represent identity choice at a small level. However, previous studies have shown that on a broader level, profile changes may be an indication of a rise in a social movement BIBREF1, BIBREF2.", "id": 654, "question": "What profile metadata is used for this analysis?", "title": "Is change the only constant? Profile change perspective on #LokSabhaElections2019"}, {"answers": ["Organic: mention of political parties names in the profile attributes, specific mentions of political handles in the profile attributes.\nInorganic: adding Chowkidar to the profile attributes, the effect of changing the profile attribute in accordance with Prime Minister's campaign, the addition of election campaign related keywords to the profile.", "Mentioning of political parties names and political twitter handles is the organic way to show political affiliation; adding Chowkidar or its variants to the profile is the inorganic way."], "context": "To analyze the importance of user-profiles in elections, we need to distinguish between the profile change behavior of political accounts and follower accounts. This brings us to the first research question:", "id": 655, "question": "What are the organic and inorganic ways to show political affiliation through profile changes?", "title": "Is change the only constant? Profile change perspective on #LokSabhaElections2019"}, {"answers": ["Influential leaders are more likely to change their profile attributes than their followers; the leaders do not change their usernames, while their followers change their usernames a lot; the leaders tend to make new changes related to previous attribute values, while the followers make comparatively less related changes to previous attribute values."], "context": "In summary, our main contributions are:", "id": 656, "question": "How do profile changes vary for influential leads and their followers over the social movement?", "title": "Is change the only constant? Profile change perspective on #LokSabhaElections2019"}, {"answers": ["", ""], "context": "Social media is now becoming an important real-time information source, especially during natural disasters and emergencies. It is now very common for traditional news media to frequently probe users and resort to social media platforms to obtain real-time developments of events. According to a recent survey by Pew Research Center, in 2017, more than two-thirds of Americans read some of their news on social media. Even for American people who are 50 or older, INLINEFORM0 of them report getting news from social media, which is INLINEFORM1 points higher than the number in 2016. Among all major social media sites, Twitter is most frequently used as a news source, with INLINEFORM2 of its users obtaining their news from Twitter. 
All these statistical facts suggest that understanding user-generated noisy social media text from Twitter is a significant task.", "id": 657, "question": "What evaluation metrics do they use?", "title": "TWEETQA: A Social Media Focused Question Answering Dataset"}, {"answers": ["", ""], "context": "In this section, we first describe the three-step data collection process of TweetQA: tweet crawling, question-answer writing and answer validation. Next, we define the specific task of TweetQA and discuss several evaluation metrics. To better understand the characteristics of the TweetQA task, we also include our analysis on the answer and question characteristics using a subset of QA pairs from the development set.", "id": 658, "question": "What is the size of this dataset?", "title": "TWEETQA: A Social Media Focused Question Answering Dataset"}, {"answers": [""], "context": "One major challenge of building a QA dataset on tweets is the sparsity of informative tweets. Many users write tweets to express their feelings or emotions about their personal lives. These tweets are generally uninformative and also very difficult to ask questions about. Given the linguistic variance of tweets, it is generally hard to directly distinguish those tweets from informative ones. In terms of this, rather than starting from Twitter API Search, we look into the archived snapshots of two major news websites (CNN, NBC), and then extract the tweet blocks that are embedded in the news articles. In order to get enough data, we first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. Note that another possible way to collect informative tweets is to download the tweets that are posted by the official Twitter accounts of news media. However, these tweets are often just the summaries of news articles, which are written in formal text. As our focus is to develop a dataset for QA on informal social media text, we do not consider this approach.", "id": 659, "question": "How do they determine if tweets have been used by journalists?", "title": "TWEETQA: A Social Media Focused Question Answering Dataset"}, {"answers": ["", "23085 hours of data"], "context": "Recently, deep neural network has been widely employed in various recognition tasks. Increasing the depth of neural network is a effective way to improve the performance, and convolutional neural network (CNN) has benefited from it in visual recognition task BIBREF0 . Deeper long short-term memory (LSTM) recurrent neural networks (RNNs) are also applied in large vocabulary continuous speech recognition (LVCSR) task, because LSTM networks have shown better performance than Fully-connected feed-forward deep neural network BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 .", "id": 660, "question": "how small of a dataset did they train on?", "title": "Deep LSTM for Large Vocabulary Continuous Speech Recognition"}, {"answers": ["2.49% for layer-wise training, 2.63% for distillation, 6.26% for transfer learning.", "Their best model achieved a 2.49% Character Error Rate."], "context": "Gradient-based optimization of deep LSTM network with random initialization get stuck in poor solution easily. Xavier initialization can partially solve this problem BIBREF7 , so this method is the regular initialization method of all training procedure. 
However, it does not work well when it is utilized to initialize very deep model directly, because of vanishing or exploding gradients. Instead, layer-wise pre-training method is a effective way to train the weights of very deep architecture BIBREF6 , BIBREF20 . In layer-wise pre-training procedure, a one-layer LSTM model is firstly trained with normalized initialization. Sequentially, two-layers LSTM model's first layer is initialized by trained one-layer model, and its second layer is regularly initialized. In this way, a deep architecture is layer-by-layer trained, and it can converge well.", "id": 661, "question": "what was their character error rate?", "title": "Deep LSTM for Large Vocabulary Continuous Speech Recognition"}, {"answers": ["Unidirectional LSTM networks with 2, 6, 7, 8, and 9 layers."], "context": "The objects of conventional saturation check are gradients and the cell activations BIBREF4 . Gradients are clipped to range [-5, 5], while the cell activations clipped to range [-50, 50]. Apart from them, the differentials of recurrent layers is also limited. If the differentials go beyond the range, corresponding back propagation is skipped, while if the gradients and cell activations go beyond the bound, values are set as the boundary values. The differentials which are too large or too small will lead to the gradients easily vanishing, and it demonstrates the failure of this propagation. As a result, the parameters are not updated, and next propagation .", "id": 662, "question": "which lstm models did they compare with?", "title": "Deep LSTM for Large Vocabulary Continuous Speech Recognition"}, {"answers": ["They use text transcription.", "both"], "context": "Recently, deep learning algorithms have successfully addressed problems in various fields, such as image classification, machine translation, speech recognition, text-to-speech generation and other machine learning related areas BIBREF0 , BIBREF1 , BIBREF2 . Similarly, substantial improvements in performance have been obtained when deep learning algorithms have been applied to statistical speech processing BIBREF3 . These fundamental improvements have led researchers to investigate additional topics related to human nature, which have long been objects of study. One such topic involves understanding human emotions and reflecting it through machine intelligence, such as emotional dialogue models BIBREF4 , BIBREF5 .", "id": 663, "question": "Do they use datasets with transcribed text or do they determine text from the audio?", "title": "Multimodal Speech Emotion Recognition Using Audio and Text"}, {"answers": [""], "context": "Classical machine learning algorithms, such as hidden Markov models (HMMs), support vector machines (SVMs), and decision tree-based methods, have been employed in speech emotion recognition problems BIBREF11 , BIBREF12 , BIBREF13 . Recently, researchers have proposed various neural network-based architectures to improve the performance of speech emotion recognition. An initial study utilized deep neural networks (DNNs) to extract high-level features from raw audio data and demonstrated its effectiveness in speech emotion recognition BIBREF14 . With the advancement of deep learning methods, more complex neural-based architectures have been proposed. 
Convolutional neural network (CNN)-based models have been trained on information derived from raw audio signals using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs) BIBREF15 , BIBREF16 , BIBREF17 . These neural network-based models are combined to produce higher-complexity models BIBREF18 , BIBREF19 , and these models achieved the best-recorded performance when applied to the IEMOCAP dataset.", "id": 664, "question": "By how much does their model outperform the state of the art results?", "title": "Multimodal Speech Emotion Recognition Using Audio and Text"}, {"answers": ["", ""], "context": "This section describes the methodologies that are applied to the speech emotion recognition task. We start by introducing the recurrent encoder model for the audio and text modalities individually. We then propose a multimodal approach that encodes both audio and textual information simultaneously via a dual recurrent encoder.", "id": 665, "question": "How do they combine audio and text sequences in their RNN?", "title": "Multimodal Speech Emotion Recognition Using Audio and Text"}, {"answers": ["", ""], "context": "Automatic classification of sentiment has mainly focused on categorizing tweets in either two (binary sentiment analysis) or three (ternary sentiment analysis) categories BIBREF0 . In this work we study the problem of fine-grained sentiment classification where tweets are classified according to a five-point scale ranging from VeryNegative to VeryPositive. To illustrate this, Table TABREF3 presents examples of tweets associated with each of these categories. Five-point scales are widely adopted in review sites like Amazon and TripAdvisor, where a user's sentiment is ordered with respect to its intensity. From a sentiment analysis perspective, this defines a classification problem with five categories. In particular, Sebastiani et al. BIBREF1 defined such classification problems whose categories are explicitly ordered to be ordinal classification problems. To account for the ordering of the categories, learners are penalized according to how far from the true class their predictions are.", "id": 666, "question": "What was the baseline?", "title": "Multitask Learning for Fine-Grained Twitter Sentiment Analysis"}, {"answers": ["They decrease MAE in 0.34"], "context": "In his work, Caruana BIBREF4 proposed a multitask approach in which a learner takes advantage of the multiplicity of interdependent tasks while jointly learning them. The intuition is that if the tasks are correlated, the learner can learn a model jointly for them while taking into account the shared information which is expected to improve its generalization ability. People express their opinions online on various subjects (events, products..), on several languages and in several styles (tweets, paragraph-sized reviews..), and it is exactly this variety that motivates the multitask approaches. 
Specifically for Twitter for instance, the different settings of classification like binary, ternary and fine-grained are correlated since their difference lies in the sentiment granularity of the classes which increases while moving from binary to fine-grained problems.", "id": 667, "question": "By how much did they improve?", "title": "Multitask Learning for Fine-Grained Twitter Sentiment Analysis"}, {"answers": [" high-quality datasets from SemEval-2016 \u201cSentiment Analysis in Twitter\u201d task", ""], "context": "Our goal is to demonstrate how multitask learning can be successfully applied on the task of sentiment classification of tweets. The particularities of tweets are to be short and informal text spans. The common use of abbreviations, creative language etc., makes the sentiment classification problem challenging. To validate our hypothesis, that learning the tasks jointly can benefit the performance, we propose an experimental setting where there are data from two different twitter sentiment classification problems: a fine-grained and a ternary. We consider the fine-grained task to be our primary task as it is more challenging and obtaining bigger datasets, e.g. by distant supervision, is not straightforward and, hence we report the performance achieved for this task.", "id": 668, "question": "What dataset did they use?", "title": "Multitask Learning for Fine-Grained Twitter Sentiment Analysis"}, {"answers": ["", ""], "context": "Cancer is one of the leading causes of death in the world, with over 80,000 deaths registered in Canada in 2017 (Canadian Cancer Statistics 2017). A computer-aided system for cancer diagnosis usually involves a pathologist rendering a descriptive report after examining the tissue glass slides obtained from the biopsy of a patient. A pathology report contains specific analysis of cells and tissues, and other histopathological indicators that are crucial for diagnosing malignancies. An average sized laboratory may produces a large quantity of pathology reports annually (e.g., in excess of 50,000), but these reports are written in mostly unstructured text and with no direct link to the tissue sample. Furthermore, the report for each patient is a personalized document and offers very high variability in terminology due to lack of standards and may even include misspellings and missing punctuation, clinical diagnoses interspersed with complex explanations, different terminology to label the same malignancy, and information about multiple carcinoma appearances included in a single report BIBREF0 .", "id": 669, "question": "What is the reported agreement for the annotation?", "title": "Automatic Classification of Pathology Reports using TF-IDF Features"}, {"answers": ["", ""], "context": "NLP approaches for information extraction within the biomedical research areas range from rule-based systems BIBREF3 , to domain-specific systems using feature-based classification BIBREF1 , to the recent deep networks for end-to-end feature extraction and classification BIBREF0 . NLP has had varied degree of success with free-text pathology reports BIBREF4 . 
Various studies have acknowledge the success of NLP in interpreting pathology reports, especially for classification tasks or extracting a single attribute from a report BIBREF4 , BIBREF5 .", "id": 670, "question": "How many annotators participated?", "title": "Automatic Classification of Pathology Reports using TF-IDF Features"}, {"answers": [""], "context": "We assembled a dataset of 1,949 cleaned pathology reports. Each report is associated with one of the 37 different primary diagnoses based on IDC-O codes. The reports are collected from four different body parts or primary sites from multiple patients. The distribution of reports across different primary diagnoses and primary sites is reported in tab:report-distribution. The dataset was developed in three steps as follows.", "id": 671, "question": "What features are used?", "title": "Automatic Classification of Pathology Reports using TF-IDF Features"}, {"answers": ["", ""], "context": "Knowledge and/or data is often modeled in a structure, such as indexes, tables, key-value pairs, or triplets. These data, by their nature (e.g., raw data or long time-series data), are not easily usable by humans; outlining their crucial need to be synthesized. Recently, numerous works have focused on leveraging structured data in various applications, such as question answering BIBREF0, BIBREF1 or table retrieval BIBREF2, BIBREF3. One emerging research field consists in transcribing data-structures into natural language in order to ease their understandablity and their usablity. This field is referred to as \u201cdata-to-text\" BIBREF4 and has its place in several application domains (such as journalism BIBREF5 or medical diagnosis BIBREF6) or wide-audience applications (such as financial BIBREF7 and weather reports BIBREF8, or sport broadcasting BIBREF9, BIBREF10). As an example, Figure FIGREF1 shows a data-structure containing statistics on NBA basketball games, paired with its corresponding journalistic description.", "id": 672, "question": "What future possible improvements are listed?", "title": "A Hierarchical Model for Data-to-Text Generation"}, {"answers": ["", ""], "context": "Until recently, efforts to bring out semantics from structured-data relied heavily on expert knowledge BIBREF22, BIBREF8. For example, in order to better transcribe numerical time series of weather data to a textual forecast, Reiter et al. BIBREF8 devise complex template schemes in collaboration with weather experts to build a consistent set of data-to-word rules.", "id": 673, "question": "Which qualitative metric are used for evaluation?", "title": "A Hierarchical Model for Data-to-Text Generation"}, {"answers": [""], "context": "In this section we introduce our proposed hierarchical model taking into account the data structure. We outline that the decoding component aiming to generate descriptions is considered as a black-box module so that our contribution is focused on the encoding module. We first describe the model overview, before detailing the hierarchical encoder and the associated hierarchical attention.", "id": 674, "question": "What is quantitative improvement of proposed method (the best variant) w.r.t. 
baseline (the best variant)?", "title": "A Hierarchical Model for Data-to-Text Generation"}, {"answers": ["", ""], "context": "The challenges of imbalanced classification\u2014in which the proportion of elements in each class for a classification task significantly differ\u2014and of the ability to generalise on dissimilar data have remained important problems in Natural Language Processing (NLP) and Machine Learning in general. Popular NLP tasks including sentiment analysis, propaganda detection, and event extraction from social media are all examples of imbalanced classification problems. In each case the number of elements in one of the classes (e.g. negative sentiment, propagandistic content, or specific events discussed on social media, respectively) is significantly lower than the number of elements in the other classes.", "id": 675, "question": "How is \"propaganda\" defined for the purposes of this study?", "title": "Cost-Sensitive BERT for Generalisable Sentence Classification with Imbalanced Data"}, {"answers": [""], "context": "The term `propaganda' derives from propagare in post-classical Latin, as in \u201cpropagation of the faith\" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as \u201cthe expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends\" BIBREF2.", "id": 676, "question": "What metrics are used in evaluation?", "title": "Cost-Sensitive BERT for Generalisable Sentence Classification with Imbalanced Data"}, {"answers": ["", "English"], "context": "Most of the existing works on propaganda detection focus on identifying propaganda at the news article level, or even at the news outlet level with the assumption that each of the articles of the suspected propagandistic outlet are propaganda BIBREF5, BIBREF6.", "id": 677, "question": "Which natural language(s) are studied in this paper?", "title": "Cost-Sensitive BERT for Generalisable Sentence Classification with Imbalanced Data"}, {"answers": ["", ""], "context": "Language models can be optimized to recognize syntax and semantics with great accuracy BIBREF0. However, the output generated can be repetitive and generic leading to monotonous or uninteresting responses (e.g \u201cI don't know\u201d) regardless of the input BIBREF1. While application of attention BIBREF2, BIBREF3 and advanced decoding mechanisms like beam search and variation sampling BIBREF4 have shown improvements, it does not solve the underlying problem. In creative text generation, the objective is not strongly bound to the ground truth\u2014instead the objective is to generate diverse, unique or original samples. We attempt to do this through a discriminator which can give feedback to the generative model through a cost function that encourages sampling of creative tokens. The contributions of this paper are in the usage of a GAN framework to generate creative pieces of writing. Our experiments suggest that generative text models, while very good at encapsulating semantic, syntactic and domain information, perform better with external feedback from a discriminator for fine-tuning objectiveless decoding tasks like that of creative text. 
We show this by evaluating our model on three very different creative datasets containing poetry, metaphors and lyrics.", "id": 678, "question": "Do they report results only on English data?", "title": "Creative GANs for generating poems, lyrics, and metaphors"}, {"answers": [""], "context": "Using GANs, we can train generative models in a two-player game setting between a discriminator and a generator, where the discriminator (a binary classifier) learns to distinguish between real and fake data samples and the generator tries to fool the discriminator by generating authentic and high quality output BIBREF16. GANs have shown to be successful in image generation tasks BIBREF17 and recently, some progress has been observed in text generation BIBREF13, BIBREF12, BIBREF15. Our generator is a language model trained using backpropagation through time BIBREF18. During the pre-training phase we optimize for MLE and during the GAN training phase, we optimize on the creativity reward from the discriminator. The discriminator's encoder has the same architecture as the generator encoder module with the addition of a pooled decoder layer. The decoder contains 3 $[Dense Batch Normalization,ReLU]$ blocks and an additional $Sigmoid$ layer. The discriminator decoder takes the hidden state at the last time step of a sequence concatenated with both the max-pooled and mean-pooled representation of the hidden states BIBREF19 and outputs a number in the range $[0,1]$. The difficulty of using GANs in text generation comes from the discrete nature of text, making the model non-differentiable; hence, we update parameters for the generator model with policy gradients as described in Yu BIBREF15.", "id": 679, "question": "What objective function is used in the GAN?", "title": "Creative GANs for generating poems, lyrics, and metaphors"}, {"answers": ["", ""], "context": "Evaluating creative generation tasks is both critical and complex BIBREF26. Along the lines of previous research on evaluating text generation tasks BIBREF26, we report the perplexity scores of our test set on the evaluated models in the Supplementary Section, Table TABREF4. Our model shows improvements over baseline and GumbelGAN. Common computational methods like BLEU BIBREF27 and perplexity are at best a heuristic and not strong indicators of good performance in text generation models BIBREF28. Particularly, since these scores use target sequences as a reference, it has the same pitfalls as relying on MLE. The advantages in this approach lie in the discriminator's ability to influence the generator to explore other possibilities. Sample outputs for our model can be found on our website .", "id": 680, "question": "Which datasets are used?", "title": "Creative GANs for generating poems, lyrics, and metaphors"}, {"answers": ["Byte-Pair Encoding perplexity (BPE PPL),\nBLEU-1,\nBLEU-4,\nROUGE-L,\npercentage of distinct unigram (D-1),\npercentage of distinct bigrams(D-2),\nuser matching accuracy(UMA),\nMean Reciprocal Rank(MRR)\nPairwise preference over baseline(PP)", "", " Distinct-1/2, UMA = User Matching Accuracy, MRR\n= Mean Reciprocal Rank, PP = Pairwise preference over baseline (evaluated for 310 recipe pairs per model)"], "context": "In the kitchen, we increasingly rely on instructions from cooking websites: recipes. A cook with a predilection for Asian cuisine may wish to prepare chicken curry, but may not know all necessary ingredients apart from a few basics. 
These users with limited knowledge cannot rely on existing recipe generation approaches that focus on creating coherent recipes given all ingredients and a recipe name BIBREF0. Such models do not address issues of personal preference (e.g. culinary tastes, garnish choices) and incomplete recipe details. We propose to approach both problems via personalized generation of plausible, user-specific recipes using user preferences extracted from previously consumed recipes.", "id": 681, "question": "What metrics are used for evaluation?", "title": "Generating Personalized Recipes from Historical User Preferences"}, {"answers": ["English", "English", ""], "context": "Large-scale transformer-based language models have shown surprising expressivity and fluency in creative and conditional long-text generation BIBREF6, BIBREF7. Recent works have proposed hierarchical methods that condition on narrative frameworks to generate internally consistent long texts BIBREF8, BIBREF9, BIBREF10. Here, we generate procedurally structured recipes instead of free-form narratives.", "id": 682, "question": "What natural language(s) are the recipes written in?", "title": "Generating Personalized Recipes from Historical User Preferences"}, {"answers": [""], "context": "Our model's input specification consists of: the recipe name as a sequence of tokens, a partial list of ingredients, and a caloric level (high, medium, low). It outputs the recipe instructions as a token sequence: $\\mathcal {W}_r=\\lbrace w_{r,0}, \\dots , w_{r,T}\\rbrace $ for a recipe $r$ of length $T$. To personalize output, we use historical recipe interactions of a user $u \\in \\mathcal {U}$.", "id": 683, "question": "What were their results on the new dataset?", "title": "Generating Personalized Recipes from Historical User Preferences"}, {"answers": [""], "context": "We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats.", "id": 684, "question": "What are the baseline models?", "title": "Generating Personalized Recipes from Historical User Preferences"}, {"answers": [""], "context": "For training and evaluation, we provide our model with the first 3-5 ingredients listed in each recipe. We decode recipe text via top-$k$ sampling BIBREF7, finding $k=3$ to produce satisfactory results. We use a hidden size $d_h=256$ for both the encoder and decoder. Embedding dimensions for vocabulary, ingredient, recipe, techniques, and caloric level are 300, 10, 50, 50, and 5 (respectively). For prior recipe attention, we set $k=20$, the 80th %-ile for the number of user interactions. We use the Adam optimizer BIBREF28 with a learning rate of $10^{-3}$, annealed with a decay rate of 0.9 BIBREF29. We also use teacher-forcing BIBREF30 in all training epochs.", "id": 685, "question": "How did they obtain the interactions?", "title": "Generating Personalized Recipes from Historical User Preferences"}, {"answers": [""], "context": "In this paper, we propose a novel task: to generate personalized recipes from incomplete input specifications and user histories. On a large novel dataset of 180K recipes and 700K reviews, we show that our personalized generative models can generate plausible, personalized, and coherent recipes preferred by human evaluators for consumption. 
We also introduce a set of automatic coherence measures for instructional texts as well as personalization metrics to support our claims. Our future work includes generating structured representations of recipes to handle ingredient properties, as well as accounting for references to collections of ingredients (e.g. \u201cdry mix\").", "id": 686, "question": "Where do they get the recipes from?", "title": "Generating Personalized Recipes from Historical User Preferences"}, {"answers": [""], "context": "The task of speculation detection and scope resolution is critical in distinguishing factual information from speculative information. This has multiple use-cases, like systems that determine the veracity of information, and those that involve requirement analysis. This task is particularly important to the biomedical domain, where patient reports and medical articles often use this feature of natural language. This task is commonly broken down into two subtasks: the first subtask, speculation cue detection, is to identify the uncertainty cue in a sentence, while the second subtask: scope resolution, is to identify the scope of that cue. For instance, consider the example:", "id": 687, "question": "What were the baselines?", "title": "Resolving the Scope of Speculation and Negation using Transformer-Based Architectures"}, {"answers": ["", ""], "context": "We use the methodology by Khandelwal and Sawant (BIBREF12), and modify it to support experimentation with multiple models.", "id": 688, "question": "Does RoBERTa outperform BERT?", "title": "Resolving the Scope of Speculation and Negation using Transformer-Based Architectures"}, {"answers": ["", "BioScope Abstracts, SFU, and BioScope Full Papers"], "context": "We use a default train-validation-test split of 70-15-15 for each dataset. For the speculation detection and scope resolution subtasks using single-dataset training, we report the results as an average of 5 runs of the model. For training the model on multiple datasets, we perform a 70-15-15 split of each training dataset, after which the train and validation part of the individual datasets are merged while the scores are reported for the test part of the individual datasets, which is not used for training or validation. We report the results as an average of 3 runs of the model. Figure FIGREF8 contains results for speculation cue detection and scope resolution when trained on a single dataset. All models perform the best when trained on the same dataset as they are evaluated on, except for BF, which gets the best results when trained on BA. This is because of the transfer learning capabilities of the models and the fact that BF is a smaller dataset than BA (BF: 2670 sentences, BA: 11871 sentences). For speculation cue detection, there is lesser generalizability for models trained on BF or BA, while there is more generalizability for models trained on SFU. This could be because of the different nature of the biomedical domain.", "id": 689, "question": "Which multiple datasets did they train on during joint training?", "title": "Resolving the Scope of Speculation and Negation using Transformer-Based Architectures"}, {"answers": [""], "context": "We use a default train-validation-test split of 70-15-15 for each dataset, and use all 4 datasets (BF, BA, SFU and Sherlock). The results for BERT are taken from BIBREF12. The results for XLNet and RoBERTa are averaged across 5 runs for statistical significance. Figure FIGREF14 contains results for negation cue detection and scope resolution. 
We report state-of-the-art results on negation scope resolution on BF, BA and SFU datasets. Contrary to popular opinion, we observe that XLNet is better than RoBERTa for the cue detection and scope resolution tasks. A few possible reasons for this trend are:", "id": 690, "question": "What were the previously reported results?", "title": "Resolving the Scope of Speculation and Negation using Transformer-Based Architectures"}, {"answers": ["", ""], "context": "In this paper, we expanded on the work of Khandelwal and Sawant (BIBREF12) by looking at alternative transfer-learning models and experimented with training on multiple datasets. On the speculation detection task, we obtained a gain of 0.42 F1 points on BF, 1.98 F1 points on BA and 0.29 F1 points on SFU, while on the scope resolution task, we obtained a gain of 8.06 F1 points on BF, 4.27 F1 points on BA and 11.87 F1 points on SFU, when trained on a single dataset. While training on multiple datasets, we observed a gain of 10.6 F1 points on BF and 1.94 F1 points on BA on the speculation detection task and 2.16 F1 points on BF and 0.25 F1 points on SFU on the scope resolution task over the single dataset training approach. We thus significantly advance the state-of-the-art for speculation detection and scope resolution. On the negation scope resolution task, we applied the XLNet and RoBERTa and obtained a gain of 3.16 F1 points on BF, 0.06 F1 points on BA and 0.3 F1 points on SFU. Thus, we demonstrated the usefulness of transformer-based architectures in the field of negation and speculation detection and scope resolution. We believe that a larger and more general dataset would go a long way in bolstering future research and would help create better systems that are not domain-specific.", "id": 691, "question": "What is the size of SFU Review corpus?", "title": "Resolving the Scope of Speculation and Negation using Transformer-Based Architectures"}, {"answers": ["", ""], "context": "Numerous lexical semantic properties are captured by representations encoding distributional properties of words, as has been demonstrated in a variety of tasks BIBREF0 , BIBREF1 , BIBREF2 . However, this distributional account of meaning does not scale to larger units like phrases and sentences BIBREF3 , BIBREF4 , motivating research into compositional models that combine word representations to produce representations of the semantics of longer units BIBREF5 , BIBREF6 , BIBREF7 . Previous work has learned these models using autoencoder formulations BIBREF8 or limited human supervision BIBREF5 . In this work, we explore the hypothesis that the equivalent knowledge about how words compose can be obtained through monolingual paraphrases that have been extracted using word alignments and an intermediate language BIBREF9 . Confirming this hypothesis would allow the rapid development of compositional models in a large number of languages.", "id": 692, "question": "Do they study numerical properties of their obtained vectors (such as orthogonality)?", "title": "Paraphrase-Supervised Models of Compositionality"}, {"answers": [""], "context": "We formalize composition as a function INLINEFORM0 that maps INLINEFORM1 -dimensional vector representations of phrase constituents INLINEFORM2 to an INLINEFORM3 -dimensional vector representation of the phrase, i.e., the composed representation. A phrase is defined as any contiguous sequence of words of length 2 or greater, and does not have to adhere to constituents in a phrase structure grammar. 
This definition is in line with our MT application and ignores \u201cgappy\u201d noncontiguous phrases, but this pragmatic choice does exclude many verb-object relations BIBREF13 . We assume the existence of word-level vector representations for every word in our vocabulary of size INLINEFORM4 . Compositionality is modeled as a bilinear map, and two classes of linear models with different levels of parametrization are proposed. Unlike previous work BIBREF6 , BIBREF7 , BIBREF14 where the functions are word-specific, our compositional functions operate on part-of-speech (POS) tag pairs, which facilitates learning by drastically reducing the number of parameters, and only requires a shallow syntactic parse of the input.", "id": 693, "question": "How do they score phrasal compositionality?", "title": "Paraphrase-Supervised Models of Compositionality"}, {"answers": ["", ""], "context": "Our first class of models is a generalization of the additive models introduced in Mitchell2008: DISPLAYFORM0 ", "id": 694, "question": "Which translation systems do they compare against?", "title": "Paraphrase-Supervised Models of Compositionality"}, {"answers": [""], "context": "Automatic judgment prediction is to train a machine judge to determine whether a certain plea in a given civil case would be supported or rejected. In countries with civil law system, e.g. mainland China, such process should be done with reference to related law articles and the fact description, as is performed by a human judge. The intuition comes from the fact that under civil law system, law articles act as principles for juridical judgments. Such techniques would have a wide range of promising applications. On the one hand, legal consulting systems could provide better access to high-quality legal resources in a low-cost way to legal outsiders, who suffer from the complicated terminologies. On the other hand, machine judge assistants for professionals would help improve the efficiency of the judicial system. Besides, automated judgment system can help in improving juridical equality and transparency. From another perspective, there are currently 7 times much more civil cases than criminal cases in mainland China, with annual rates of increase of INLINEFORM0 and INLINEFORM1 respectively, making judgment prediction in civil cases a promising application BIBREF0 .", "id": 695, "question": "what are their results on the constructed dataset?", "title": "Automatic Judgment Prediction via Legal Reading Comprehension"}, {"answers": ["", ""], "context": "Automatic judgment prediction has been studied for decades. At the very first stage of judgment prediction studies, researchers focus on mathematical and statistical analysis of existing cases, without any conclusions or methodologies on how to predict them BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Recent attempts consider judgment prediction under the text classification framework. Most of these works extract efficient features from text (e.g., N-grams) BIBREF15 , BIBREF4 , BIBREF1 , BIBREF16 , BIBREF17 or case profiles (e.g., dates, terms, locations and types) BIBREF2 . All these methods require a large amount of human effort to design features or annotate cases. 
Besides, they also suffer from generalization issue when applied to other scenarios.", "id": 696, "question": "what evaluation metrics are reported?", "title": "Automatic Judgment Prediction via Legal Reading Comprehension"}, {"answers": ["", ""], "context": "As the basis of previous judgment prediction works, typical text classification task takes a single text content as input and predicts the category it belongs to. Recent works usually employ neural networks to model the internal structure of a single input BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 .", "id": 697, "question": "what civil field is the dataset about?", "title": "Automatic Judgment Prediction via Legal Reading Comprehension"}, {"answers": ["", ""], "context": "Reading comprehension is a relevant task to model heterogeneous and complementary inputs, where an answer is predicted given two channels of inputs, i.e. a textual passage and a query. Considerable progress has been made BIBREF6 , BIBREF24 , BIBREF5 . These models employ various attention mechanism to model the interaction between passage and query. Inspired by the advantage of reading comprehension models on modeling multiple inputs, we apply this idea into the legal area and propose legal reading comprehension for judgment prediction.", "id": 698, "question": "what are the state-of-the-art models?", "title": "Automatic Judgment Prediction via Legal Reading Comprehension"}, {"answers": ["100 000 documents", ""], "context": "Conventional reading comprehension BIBREF25 , BIBREF26 , BIBREF7 , BIBREF8 usually considers reading comprehension as predicting the answer given a passage and a query, where the answer could be a single word, a text span of the original passage, chosen from answer candidates, or generated by human annotators.", "id": 699, "question": "what is the size of the real-world civil case dataset?", "title": "Automatic Judgment Prediction via Legal Reading Comprehension"}, {"answers": [""], "context": "Existing works usually formalize judgment prediction as a text classification task and focus on extracting well-designed features of specific cases. Such simplification ignores that the judgment of a case is determined by its fact description and multiple pleas. Moreover, the final judgment should act up to the legal provisions, especially in civil law systems. Therefore, how to integrate the information (i.e., fact descriptions, pleas, and law articles) in a reasonable way is critical for judgment prediction.", "id": 700, "question": "what datasets are used in the experiment?", "title": "Automatic Judgment Prediction via Legal Reading Comprehension"}, {"answers": ["", ""], "context": "computational sociolinguistics, dehumanization, lexical variation, language change, media, New York Times, LGBTQ", "id": 701, "question": "Do they model semantics ", "title": "A Framework for the Computational Linguistic Analysis of Dehumanization"}, {"answers": [""], "context": "Despite the American public's increasing acceptance of LGBTQ people and recent legal successes, LGBTQ individuals frequently remain the targets of hate and violence BIBREF0, BIBREF1, BIBREF2. At the core of this issue is dehumanization, \u201cthe act of perceiving or treating people as less than human\u201d BIBREF3, a process that heavily contributes to extreme intergroup bias BIBREF4. 
Language is central to studying this phenomenon; like other forms of bias BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, dehumanizing attitudes are expressed through subtle linguistic manipulations, even in carefully-edited texts. It is crucial to understand the use of such linguistic signals in mainstream media, as the media's representation of marginalized social groups has far-reaching implications for social acceptance, policy, and safety.", "id": 702, "question": "How do they identify discussions of LGBTQ people in the New York Times?", "title": "A Framework for the Computational Linguistic Analysis of Dehumanization"}, {"answers": ["", ""], "context": "Our lexical semantic analysis involves quantifying linguistic correlates of component psychological processes that contribute to dehumanization. Our approaches are informed by social psychology research on dehumanization, which is briefly summarized here. Prior work has identified numerous related processes that comprise dehumanization BIBREF4. One such component is likening members of the target group to non-human entities, such as machines or animals BIBREF4, BIBREF11, BIBREF12. By perceiving members of a target group to be non-human, they are \u201coutside the boundary in which moral values, rules, and considerations of fairness apply\" BIBREF13, which thus leads to violence and other forms of abuse. Metaphors and imagery relating target groups to vermin are particularly insidious and played a prominent role in the genocide of Jews in Nazi Germany and Tutsis in Rwanda BIBREF14. More recently, the vermin metaphor has been invoked by the media to discuss terrorists and political leaders of majority-Muslim countries after September 11 BIBREF15. According to BIBREF16, the vermin metaphor is particularly powerful because it conceptualizes the target group as \u201cengaged in threatening behavior, but devoid of thought or emotional desire\".", "id": 703, "question": "Do they analyze specific derogatory words?", "title": "A Framework for the Computational Linguistic Analysis of Dehumanization"}, {"answers": [""], "context": "Language model pretraining has advanced the state of the art in many NLP tasks ranging from sentiment analysis, to question answering, natural language inference, named entity recognition, and textual similarity. State-of-the-art pretrained models include ELMo BIBREF1, GPT BIBREF2, and more recently Bidirectional Encoder Representations from Transformers (Bert; BIBREF0). Bert combines both word and sentence representations in a single very large Transformer BIBREF3; it is pretrained on vast amounts of text, with an unsupervised objective of masked language modeling and next-sentence prediction and can be fine-tuned with various task-specific objectives.", "id": 704, "question": "What is novel about their document-level encoder?", "title": "Text Summarization with Pretrained Encoders"}, {"answers": ["Best results on unigram:\nCNN/Daily Mail: Rogue F1 43.85\nNYT: Rogue Recall 49.02\nXSum: Rogue F1 38.81", "Highest scores for ROUGE-1, ROUGE-2 and ROUGE-L on CNN/DailyMail test set are 43.85, 20.34 and 39.90 respectively; on the XSum test set 38.81, 16.50 and 31.27 and on the NYT test set 49.02, 31.02 and 45.55"], "context": "Pretrained language models BIBREF1, BIBREF2, BIBREF0, BIBREF12, BIBREF13 have recently emerged as a key technology for achieving impressive gains in a wide variety of natural language tasks. 
These models extend the idea of word embeddings by learning contextual representations from large-scale corpora using a language modeling objective. Bidirectional Encoder Representations from Transformers (Bert; BIBREF0) is a new language representation model which is trained with a masked language modeling and a \u201cnext sentence prediction\u201d task on a corpus of 3,300M words.", "id": 705, "question": "What rouge score do they achieve?", "title": "Text Summarization with Pretrained Encoders"}, {"answers": ["", ""], "context": "Extractive summarization systems create a summary by identifying (and subsequently concatenating) the most important sentences in a document. Neural models consider extractive summarization as a sentence classification problem: a neural encoder creates sentence representations and a classifier predicts which sentences should be selected as summaries. SummaRuNNer BIBREF7 is one of the earliest neural approaches adopting an encoder based on Recurrent Neural Networks. Refresh BIBREF8 is a reinforcement learning-based system trained by globally optimizing the ROUGE metric. More recent work achieves higher performance with more sophisticated model structures. Latent BIBREF17 frames extractive summarization as a latent variable inference problem; instead of maximizing the likelihood of \u201cgold\u201d standard labels, their latent model directly maximizes the likelihood of human summaries given selected sentences. Sumo BIBREF18 capitalizes on the notion of structured attention to induce a multi-root dependency tree representation of the document while predicting the output summary. NeuSum BIBREF19 scores and selects sentences jointly and represents the state of the art in extractive summarization.", "id": 706, "question": "What are the datasets used for evaluation?", "title": "Text Summarization with Pretrained Encoders"}, {"answers": ["Answer with content missing: (Table 3) Best author's model B-M average micro f-score is 0.409, 0.459, 0.411 on Affective, Fairy Tales and ISEAR datasets respectively. "], "context": "This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/", "id": 707, "question": "What was their performance on emotion detection?", "title": "Distant supervision for emotion detection using Facebook reactions"}, {"answers": ["", ""], "context": "For years, on Facebook people could leave comments to posts, and also \u201clike\u201d them, by using a thumbs-up feature to explicitly express a generic, rather underspecified, approval. A \u201clike\u201d could thus mean \u201cI like what you said\", but also \u201cI like that you bring up such topic (though I find the content of the article you linked annoying)\".", "id": 708, "question": "Which existing benchmarks did they compare to?", "title": "Distant supervision for emotion detection using Facebook reactions"}, {"answers": ["", ""], "context": "Three datasets annotated with emotions are commonly used for the development and evaluation of emotion detection systems, namely the Affective Text dataset, the Fairy Tales dataset, and the ISEAR dataset. In order to compare our performance to state-of-the-art results, we have used them as well. In this Section, in addition to a description of each dataset, we provide an overview of the emotions used, their distribution, and how we mapped them to those we obtained from Facebook posts in Section SECREF7 . 
A summary is provided in Table TABREF8 , which also shows, in the bottom row, what role each dataset has in our experiments: apart from the development portion of the Affective Text, which we used to develop our models (Section SECREF4 ), all three have been used as benchmarks for our evaluation.", "id": 709, "question": "Which Facebook pages did they look at?", "title": "Distant supervision for emotion detection using Facebook reactions"}, {"answers": ["", ""], "context": "Microblogging such as Twitter and Weibo is a popular social networking service, which allows users to post messages up to 140 characters. There are millions of active users on the platform who stay connected with friends. Unfortunately, spammers also use it as a tool to post malicious links, send unsolicited messages to legitimate users, etc. A certain amount of spammers could sway the public opinion and cause distrust of the social platform. Despite the use of rigid anti-spam rules, human-like spammers whose homepages having photos, detailed profiles etc. have emerged. Unlike previous \"simple\" spammers, whose tweets contain only malicious links, those \"smart\" spammers are more difficult to distinguish from legitimate users via content-based features alone BIBREF0 .", "id": 710, "question": "LDA is an unsupervised method; is this paper introducing an unsupervised approach to spam detection?", "title": "Detecting\"Smart\"Spammers On Social Network: A Topic Model Approach"}, {"answers": ["Social Honeypot dataset (public) and Weibo dataset (self-collected); yes", "Social Honeypot, which is not of high quality"], "context": "In this section, we first provide some observations we obtained after carefully exploring the social network, then the LDA model is introduced. Based on the LDA model, the ways to obtain the topic probability vector for each user and the two topic-based features are provided.", "id": 711, "question": "What is the benchmark dataset and is its quality high?", "title": "Detecting\"Smart\"Spammers On Social Network: A Topic Model Approach"}, {"answers": ["Extract features from the LDA model and use them in a binary classification task"], "context": "After exploring the homepages of a substantial number of spammers, we have two observations. 1) social spammers can be divided into two categories. One is content polluters, and their tweets are all about certain kinds of advertisement and campaign. The other is fake accounts, and their tweets resemble legitimate users' but it seems they are simply random copies of others to avoid being detected by anti-spam rules. 2) For legitimate users, content polluters and fake accounts, they show different patterns on topics which interest them.", "id": 712, "question": "How do they detect spammers?", "title": "Detecting\"Smart\"Spammers On Social Network: A Topic Model Approach"}, {"answers": ["", ""], "context": "Automatic summarization has enjoyed wide popularity in natural language processing due to its potential for various information access applications. Examples include tools which aid users navigate and digest web content (e.g., news, social media, product reviews), question answering, and personalized recommendation engines. 
Single document summarization \u2014 the task of producing a shorter version of a document while preserving its information content \u2014 is perhaps the most basic of summarization tasks that have been identified over the years (see BIBREF0 , BIBREF0 for a comprehensive overview).", "id": 713, "question": "Do they use other evaluation metrics besides ROUGE?", "title": "Ranking Sentences for Extractive Summarization with Reinforcement Learning"}, {"answers": [""], "context": "Given a document D consisting of a sequence of sentences INLINEFORM0 , an extractive summarizer aims to produce a summary INLINEFORM1 by selecting INLINEFORM2 sentences from D (where INLINEFORM3 ). For each sentence INLINEFORM4 , we predict a label INLINEFORM5 (where 1 means that INLINEFORM6 should be included in the summary) and assign a score INLINEFORM7 quantifying INLINEFORM8 's relevance to the summary. The model learns to assign INLINEFORM9 when sentence INLINEFORM10 is more relevant than INLINEFORM11 . Model parameters are denoted by INLINEFORM12 . We estimate INLINEFORM13 using a neural network model and assemble a summary INLINEFORM14 by selecting INLINEFORM15 sentences with top INLINEFORM16 scores.", "id": 714, "question": "What is their ROUGE score?", "title": "Ranking Sentences for Extractive Summarization with Reinforcement Learning"}, {"answers": ["", "Answer with content missing: (Experimental Setup missing subsections)\nTo be selected: We compared REFRESH against a baseline which simply selects the first m leading sentences from each document (LEAD) and two neural models similar to ours (see left block in Figure 1), both trained with cross-entropy loss.\nAnswer: LEAD"], "context": "Previous work optimizes summarization models by maximizing INLINEFORM0 , the likelihood of the ground-truth labels y = INLINEFORM1 for sentences INLINEFORM2 , given document D and model parameters INLINEFORM3 . This objective can be achieved by minimizing the cross-entropy loss at each decoding step: DISPLAYFORM0 ", "id": 715, "question": "What are the baselines?", "title": "Ranking Sentences for Extractive Summarization with Reinforcement Learning"}, {"answers": ["", "1 IMDB dataset and 2 Yelp datasets"], "context": "Adversarial examples, a term introduced in BIBREF0, are inputs transformed by small perturbations that machine learning models consistently misclassify. The experiments are conducted in the context of computer vision (CV), and the core idea is encapsulated by an illustrative example: after imperceptible noises are added to a panda image, an image classifier predicts, with high confidence, that it is a gibbon. Interestingly, these adversarial examples can also be used to improve the classifier \u2014 either as additional training data BIBREF0 or as a regularisation objective BIBREF1 \u2014 thus providing motivation for generating effective adversarial examples.", "id": 716, "question": "What datasets do they use?", "title": "Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP"}, {"answers": [""], "context": "Most existing adversarial attack methods for text inputs are derived from those for image inputs. 
These methods can be categorised into three types including gradient-based attacks, optimisation-based attacks and model-based attacks.", "id": 717, "question": "What other factors affect the performance?", "title": "Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP"}, {"answers": ["", ""], "context": "There are a number of off-the-shelf neural models for sentiment classification BIBREF14, BIBREF15, most of which are based on long-short term memory networks (LSTM) BIBREF16 or convolutional neural networks (CNN) BIBREF14. In this paper, we pre-train three sentiment classifiers: BiLSTM, BiLSTM$+$A, and CNN. These classifiers are targeted by white-box attacking methods to generate adversarial examples (detailed in Section SECREF9). BiLSTM is composed of an embedding layer that maps individual words to pre-trained word embeddings; a number of bi-directional LSTMs that capture sequential contexts; and an output layer that maps the averaged LSTM hidden states to a binary output. BiLSTM$+$A is similar to BiLSTM except it has an extra self-attention layer which learns to attend to salient words for sentiment classification, and we compute a weighted mean of the LSTM hidden states prior to the output layer. Manual inspection of the attention weights show that polarity words such as awesome and disappointed are assigned with higher weights. Finally, CNN has a number of convolutional filters of varying sizes, and their outputs are concatenated, pooled and fed to a fully-connected layer followed by a binary output layer.", "id": 718, "question": "What are the benchmark attacking methods?", "title": "Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP"}, {"answers": ["No specific domain is covered in the corpus."], "context": "End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. 
Finally, we provide additional out-of-domain test sets.", "id": 719, "question": "What domains are covered in the corpus?", "title": "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"}, {"answers": [""], "context": "Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents.", "id": 720, "question": "What is the architecture of their model?", "title": "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"}, {"answers": ["", ""], "context": "Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses.", "id": 721, "question": "How was the dataset collected?", "title": "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"}, {"answers": ["", ""], "context": "Basic statistics for CoVoST and TT are listed in Table TABREF2 including (unique) sentence counts, speech durations, speaker demographics (partially available) as well as vocabulary and token statistics (based on Moses-tokenized sentences by sacreMoses) on both transcripts and translations. We see that CoVoST has over 327 hours of German speeches and over 171 hours of French speeches, which, to our knowledge, corresponds to the largest corpus among existing public ST corpora (the second largest is 110 hours BIBREF18 for German and 38 hours BIBREF19 for French). Moreover, CoVoST has a total of 18 hours of Dutch speeches, to our knowledge, contributing the first public Dutch ST resource. CoVoST also has around 27-hour Russian speeches, 37-hour Italian speeches and 67-hour Persian speeches, which is 1.8 times, 2.5 times and 13.3 times of the previous largest public one BIBREF7. Most of the sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in a rich diversity in the speeches. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variations in both model training and evaluation.", "id": 722, "question": "Which languages are part of the corpus?", "title": "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"}, {"answers": ["", ""], "context": "As we can see from Table TABREF2, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, which is shown in Figure FIGREF6, FIGREF7 and FIGREF8. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers. And for German sentences, even over 90% of them have at least 5 speakers. Similarly, we see that a large portion of sentences are spoken in multiple accents for French, German, Dutch and Spanish. 
Speakers of each language also spread widely across different age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s).", "id": 723, "question": "How is the quality of the data empirically evaluated? ", "title": "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"}, {"answers": [""], "context": "We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST).", "id": 724, "question": "Is the data in CoVoST annotated for dialect?", "title": "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"}, {"answers": ["", ""], "context": "We convert raw MP3 audio files from CoVo and TT into mono-channel waveforms, and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, we tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except for apostrophes. We use character vocabularies on all the tasks, with 100% coverage of all the characters. Preliminary experimentation showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on both transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. The features are normalized to 0 mean and 1.0 standard deviation. We remove samples having more than 3,000 frames or more than 256 characters for GPU memory efficiency (less than 25 samples are removed for all languages).", "id": 725, "question": "Is Arabic one of the 11 languages in CoVost?", "title": "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"}, {"answers": ["", ""], "context": "Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting.", "id": 726, "question": "How big is Augmented LibriSpeech dataset?", "title": "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus"}, {"answers": ["0.8% F1 better than the best state-of-the-art", "Best proposed model achieves F1 score of 84.9 compared to best previous result of 84.1."], "context": "Relation classification is the task of assigning sentences with two marked entities to a predefined set of relations. The sentence \u201cWe poured the milk into the pumpkin mixture.\u201d, for example, expresses the relation Entity-Destination(e1,e2). While early research mostly focused on support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 , recent research showed performance improvements by applying neural networks (NNs) BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 on the benchmark data from SemEval 2010 shared task 8 BIBREF8 .", "id": 727, "question": "By how much does their best model outperform the state-of-the-art?", "title": "Combining Recurrent and Convolutional Neural Networks for Relation Classification"}, {"answers": ["", ""], "context": "In 2010, manually annotated data for relation classification was released in the context of a SemEval shared task BIBREF8 . 
Shared task participants used, i.a., support vector machines or maximum entropy classifiers BIBREF0 , BIBREF1 . Recently, their results on this data set were outperformed by applying NNs BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .", "id": 728, "question": "Which dataset do they train their models on?", "title": "Combining Recurrent and Convolutional Neural Networks for Relation Classification"}, {"answers": ["", "Among all the classes predicted by several models, for each test sentence, class with most votes are picked. In case of a tie, one of the most frequent classes are picked randomly."], "context": "CNNs perform a discrete convolution on an input matrix with a set of different filters. For NLP tasks, the input matrix represents a sentence: Each column of the matrix stores the word embedding of the corresponding word. By applying a filter with a width of, e.g., three columns, three neighboring words (trigram) are convolved. Afterwards, the results of the convolution are pooled. Following collobertWeston, we perform max-pooling which extracts the maximum value for each filter and, thus, the most informative n-gram for the following steps. Finally, the resulting values are concatenated and used for classifying the relation expressed in the sentence.", "id": 729, "question": "How does their simple voting scheme work?", "title": "Combining Recurrent and Convolutional Neural Networks for Relation Classification"}, {"answers": [""], "context": "One of our contributions is a new input representation especially designed for relation classification. The contexts are split into three disjoint regions based on the two relation arguments: the left context, the middle context and the right context. Since in most cases the middle context contains the most relevant information for the relation, we want to focus on it but not ignore the other regions completely. Hence, we propose to use two contexts: (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. Due to the repetition of the middle context, we force the network to pay special attention to it. The two contexts are processed by two independent convolutional and max-pooling layers. After pooling, the results are concatenated to form the sentence representation. Figure FIGREF3 depicts this procedure. It shows an examplary sentence: \u201cHe had chest pain and headaches from mold in the bedroom.\u201d If we only considered the middle context \u201cfrom\u201d, the network might be tempted to predict a relation like Entity-Origin(e1,e2). However, by also taking the left and right context into account, the model can detect the relation Cause-Effect(e2,e1). While this could also be achieved by integrating the whole context into the model, using the whole context can have disadvantages for longer sentences: The max pooling step can easily choose a value from a part of the sentence which is far away from the mention of the relation. With splitting the context into two parts, we reduce this danger. 
Repeating the middle context increases the chance for the max pooling step to pick a value from the middle context.", "id": 730, "question": "Which variant of the recurrent neural network do they use?", "title": "Combining Recurrent and Convolutional Neural Networks for Relation Classification"}, {"answers": ["They use two independent convolutional and max-pooling layers on (1) a combination of the left context, the left entity and the middle context; and (2) a combination of the middle context, the right entity and the right context. They concatenated the two results after pooling to get the new context representation."], "context": "Following previous work (e.g., BIBREF5 , BIBREF6 ), we use 2D filters spanning all embedding dimensions. After convolution, a max pooling operation is applied that stores only the highest activation of each filter. We apply filters with different window sizes 2-5 (multi-windows) as in BIBREF5 , i.e. spanning a different number of input words.", "id": 731, "question": "How do they obtain the new context represetation?", "title": "Combining Recurrent and Convolutional Neural Networks for Relation Classification"}, {"answers": ["", "", ""], "context": "In recent years many datasets have been created for the task of automated stance detection, advancing natural language understanding systems for political science, opinion research and other application areas. Typically, such benchmarks BIBREF0 are composed of short pieces of text commenting on politicians or public issues and are manually annotated with their stance towards a target entity (e.g. Climate Change, or Trump). However, they are limited in scope on multiple levels BIBREF1.", "id": 732, "question": "Does the paper report the performance of the model for each individual language?", "title": "X-Stance: A Multilingual Multi-Target Dataset for Stance Detection"}, {"answers": ["M-Bert had 76.6 F1 macro score.", "75.1% and 75.6% accuracy"], "context": "In the context of the IberEval shared tasks, two related multilingual datasets have been created BIBREF2, BIBREF3. Both are a collection of annotated Spanish and Catalan tweets. Crucially, the tweets in both languages focus on the same issue (Catalan independence); given this fact they are the first truly multilingual stance detection datasets known to us.", "id": 733, "question": "What is the performance of the baseline?", "title": "X-Stance: A Multilingual Multi-Target Dataset for Stance Detection"}, {"answers": [""], "context": "The SemEval-2016 task on detecting stance in tweets BIBREF9 offers data concerning multiple targets (Atheism, Climate Change, Feminism, Hillary Clinton, and Abortion). In the supervised subtask A, participants tended to develop a target-specific model for each of those targets. In subtask B cross-target transfer to the target \u201cDonald Trump\u201d was tested, for which no annotated training data were provided. While this required the development of more universal models, their performance was generally much lower.", "id": 734, "question": "Did they pefrorm any cross-lingual vs single language evaluation?", "title": "X-Stance: A Multilingual Multi-Target Dataset for Stance Detection"}, {"answers": ["BERT had 76.6 F1 macro score on x-stance dataset."], "context": "In a target-specific setting, BIBREF10 perform a systematic evaluation of stance detection approaches. 
They also evaluate Bert BIBREF5 and find that it consistently outperforms previous approaches.", "id": 735, "question": "What was the performance of multilingual BERT?", "title": "X-Stance: A Multilingual Multi-Target Dataset for Stance Detection"}, {"answers": [""], "context": "The input provided by x-stance is two-fold: (A) a natural language question concerning a political issue; (B) a natural language commentary on a specific stance towards the question.", "id": 736, "question": "What annotations are present in dataset?", "title": "X-Stance: A Multilingual Multi-Target Dataset for Stance Detection"}, {"answers": ["A unordered text document is one where sentences in the document are disordered or jumbled. It doesn't appear that unordered text documents appear in corpora, but rather are introduced as part of processing pipeline."], "context": "To structure an unordered document is an essential task in many applications. It is a post-requisite for applications like multiple document extractive text summarization where we have to present a summary of multiple documents. It is a prerequisite for applications like question answering from multiple documents where we have to present an answer by processing multiple documents. In this paper, we address the task of segmenting an unordered text document into different sections. The input document/summary that may have unordered sentences is processed so that it will have sentences clustered together. Clustering is based on the similarity with the respective keyword as well as with the sentences belonging to the same cluster. Keywords are identified and clusters are formed for each keyword.", "id": 737, "question": "What is an unordered text document, do these arise in real-world corpora?", "title": "Structuring an unordered text document"}, {"answers": [""], "context": "Several models have been performed in the past to retrieve sentences of a document belonging to a particular topic BIBREF2 . Given a topic, retrieving sentences that may belong to that topic should be considered as a different task than what we aim in this paper. A graph based approach for extracting information relevant to a query is presented in BIBREF3 , where subgraphs are built using the relatedness of the sentences to the query. An incremental integrated graph to represent the sentences in a collection of documents is presented in BIBREF4 , BIBREF5 . Sentences from the documents are merged into a master sequence to improve coherence and flow. The same ordering is used for sequencing the sentences in the extracted summary. Ordering of sentences in a document is discussed in BIBREF6 .", "id": 738, "question": "What kind of model do they use?", "title": "Structuring an unordered text document"}, {"answers": ["", ""], "context": "Our methodology is described in the Figure 1 . The process starts by taking an unordered document as an input. The next step is to extract the keywords from the input document using TextRank algorithm BIBREF0 and store them in a list $K$ . The keywords stored in $K$ act as centroids for the clusters. Note that, the quality of keywords extracted will have a bearing on the final results. In this paper, we present a model that can be used for structuring an unstructured document. In the process, we use a popular keyword extraction algorithm. 
Our model is not bound to TextRank, and if a better keyword extraction algorithm is available, it can replace TextRank.", "id": 739, "question": "Do they release a data set?", "title": "Structuring an unordered text document"}, {"answers": ["", ""], "context": "To evaluate our algorithm, we propose two similarity metrics, $Sim1$ and $Sim2$ . These metrics compute the similarity of each section of the original document with all the sections/clusters (keyword and the sentences mapped to it) of the output document and assign the maximum similarity. $Sim1$ between an input section and an output section is calculated as the number of sentences of the input section that are present in the output section divided by the total number of sentences in the input section. To calculate the final similarity (similarity of the entire output document) we take the weighted mean of similarity calculated corresponding to each input section. $Sim2$ between an input section and an output section is computed as the number of sentences of an input section that are present in an output section divided by the sum of sentences in the input and output sections. The final similarity is computed in a similar manner.", "id": 740, "question": "Do they release code?", "title": "Structuring an unordered text document"}, {"answers": ["", ""], "context": "For our experiments, we prepared five sets of documents. Each set has 100 wiki documents (randomly chosen). Each document is restructured randomly (sentences are rearranged randomly). This restructured document is the input to our model and the output document is compared against the original input document.", "id": 741, "question": "Which languages do they evaluate on?", "title": "Structuring an unordered text document"}, {"answers": [""], "context": "Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents.", "id": 742, "question": "Are the experts comparable to real-world users?", "title": "Question Answering for Privacy Policies: Combining Computational and Legal Perspectives"}, {"answers": [""], "context": "Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. 
We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions.", "id": 743, "question": "Are the answers double (and not triple) annotated?", "title": "Question Answering for Privacy Policies: Combining Computational and Legal Perspectives"}, {"answers": ["Individuals with legal training", ""], "context": "We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users).", "id": 744, "question": "Who were the experts used for annotation?", "title": "Question Answering for Privacy Policies: Combining Computational and Legal Perspectives"}, {"answers": ["", ""], "context": "The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document.", "id": 745, "question": "What type of neural model was used?", "title": "Question Answering for Privacy Policies: Combining Computational and Legal Perspectives"}, {"answers": ["", ""], "context": "To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked.", "id": 746, "question": "Were other baselines tested to compare with the neural baseline?", "title": "Question Answering for Privacy Policies: Combining Computational and Legal Perspectives"}, {"answers": [""], "context": "We understand from Zipf's Law that in any natural language corpus a majority of the vocabulary word types will either be absent or occur in low frequency. 
Estimating the statistical properties of these rare word types is naturally a difficult task. This is analogous to the curse of dimensionality when we deal with sequences of tokens - most sequences will occur only once in the training data. Neural network architectures overcome this problem by defining non-linear compositional models over vector space representations of tokens and hence assign non-zero probability even to sequences not seen during training BIBREF0 , BIBREF1 . In this work, we explore a similar approach to learning distributed representations of social media posts by composing them from their constituent characters, with the goal of generalizing to out-of-vocabulary words as well as sequences at test time.", "id": 747, "question": "Does the paper clearly establish that the challenges listed here exist in this dataset and task?", "title": "Tweet2Vec: Character-Based Distributed Representations for Social Media"}, {"answers": ["established task", ""], "context": "Using neural networks to learn distributed representations of words dates back to BIBREF0 . More recently, BIBREF4 released word2vec - a collection of word vectors trained using a recurrent neural network. These word vectors are in widespread use in the NLP community, and the original work has since been extended to sentences BIBREF1 , documents and paragraphs BIBREF6 , topics BIBREF7 and queries BIBREF8 . All these methods require storing an extremely large table of vectors for all word types and cannot be easily generalized to unseen words at test time BIBREF2 . They also require preprocessing to find word boundaries which is non-trivial for a social network domain like Twitter.", "id": 748, "question": "Is this hashtag prediction task an established task, or something new?", "title": "Tweet2Vec: Character-Based Distributed Representations for Social Media"}, {"answers": ["", ""], "context": "Bi-GRU Encoder: Figure 1 shows our model for encoding tweets. It uses a similar structure to the C2W model in BIBREF2 , with LSTM units replaced with GRU units.", "id": 749, "question": "What is the word-level baseline?", "title": "Tweet2Vec: Character-Based Distributed Representations for Social Media"}, {"answers": ["None"], "context": "Since our objective is to compare character-based and word-based approaches, we have also implemented a simple word-level encoder for tweets. The input tweet is first split into tokens along white-spaces. A more sophisticated tokenizer may be used, but for a fair comparison we wanted to keep language specific preprocessing to a minimum. The encoder is essentially the same as tweet2vec, with the input as words instead of characters. A lookup table stores word vectors for the $V$ (20K here) most common words, and the rest are grouped together under the `UNK' token.", "id": 750, "question": "What other tasks do they test their method on?", "title": "Tweet2Vec: Character-Based Distributed Representations for Social Media"}, {"answers": ["", ""], "context": "Our dataset consists of a large collection of global posts from Twitter between the dates of June 1, 2013 to June 5, 2013. Only English language posts (as detected by the lang field in Twitter API) and posts with at least one hashtag are retained. We removed infrequent hashtags ( $<500$ posts) since they do not have enough data for good generalization. We also removed very frequent tags ( $>19K$ posts) which were almost always from automatically generated posts (ex: #androidgame) which are trivial to predict. 
The final dataset contains 2 million tweets for training, 10K for validation and 50K for testing, with a total of 2039 distinct hashtags. We use simple regex to preprocess the post text and remove hashtags (since these are to be predicted) and HTML tags, and replace user-names and URLs with special tokens. We also removed retweets and convert the text to lower-case.", "id": 751, "question": "what is the word level baseline they compare to?", "title": "Tweet2Vec: Character-Based Distributed Representations for Social Media"}, {"answers": ["Two knowledge-based systems,\ntwo traditional word expert supervised systems, six recent neural-based systems, and one BERT feature-based system."], "context": "Word Sense Disambiguation (WSD) is a fundamental task and long-standing challenge in Natural Language Processing (NLP), which aims to find the exact sense of an ambiguous word in a particular context BIBREF0. Previous WSD approaches can be grouped into two main categories: knowledge-based and supervised methods.", "id": 752, "question": "What is the state of the art system mentioned?", "title": "GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge"}, {"answers": ["", ""], "context": "In this section, we describe our method in detail.", "id": 753, "question": "Do they incoprorate WordNet into the model?", "title": "GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge"}, {"answers": ["", ""], "context": "In WSD, a sentence $s$ usually consists of a series of words: $\\lbrace w_1,\\cdots ,w_m\\rbrace $, and some of the words $\\lbrace w_{i_1},\\cdots ,w_{i_k}\\rbrace $ are targets $\\lbrace t_1,\\cdots ,t_k\\rbrace $ need to be disambiguated. For each target $t$, its candidate senses $\\lbrace c_1,\\cdots ,c_n\\rbrace $ come from entries of its lemma in a pre-defined sense inventory (usually WordNet). Therefore, WSD task aims to find the most suitable entry (symbolized as unique sense key) for each target in a sentence. See a sentence example in Table TABREF1.", "id": 754, "question": "Is SemCor3.0 reflective of English language data in general?", "title": "GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge"}, {"answers": ["small BERT", "small BERT"], "context": "BERT BIBREF15 is a new language representation model, and its architecture is a multi-layer bidirectional Transformer encoder. BERT model is pre-trained on a large corpus and two novel unsupervised prediction tasks, i.e., masked language model and next sentence prediction tasks are used in pre-training. When incorporating BERT into downstream tasks, the fine-tuning procedure is recommended. We fine-tune the pre-trained BERT model on WSD task.", "id": 755, "question": "Do they use large or small BERT?", "title": "GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge"}, {"answers": [""], "context": "Since every target in a sentence needs to be disambiguated to find its exact sense, WSD task can be regarded as a token-level classification task. 
To incorporate BERT to WSD task, we take the final hidden state of the token corresponding to the target word (if more than one token, we average them) and add a classification layer for every target lemma, which is the same as the last layer of the Bi-LSTM model BIBREF11.", "id": 756, "question": "How does the neural network architecture accomodate an unknown amount of senses per word?", "title": "GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge"}, {"answers": [""], "context": "The task of document quality assessment is to automatically assess a document according to some predefined inventory of quality labels. This can take many forms, including essay scoring (quality = language quality, coherence, and relevance to a topic), job application filtering (quality = suitability for role + visual/presentational quality of the application), or answer selection in community question answering (quality = actionability + relevance of the answer to the question). In the case of this paper, we focus on document quality assessment in two contexts: Wikipedia document quality classification, and whether a paper submitted to a conference was accepted or not.", "id": 757, "question": "Which fonts are the best indicators of high quality?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": ["", ""], "context": "A variety of approaches have been proposed for document quality assessment across different domains: Wikipedia article quality assessment, academic paper rating, content quality assessment in community question answering (cQA), and essay scoring. Among these approaches, some use hand-crafted features while others use neural networks to learn features from documents. For each domain, we first briefly describe feature-based approaches and then review neural network-based approaches. Wikipedia article quality assessment: Quality assessment of Wikipedia articles is a task that assigns a quality class label to a given Wikipedia article, mirroring the quality assessment process that the Wikipedia community carries out manually. Many approaches have been proposed that use features from the article itself, meta-data features (e.g., the editors, and Wikipedia article revision history), or a combination of the two. Article-internal features capture information such as whether an article is properly organized, with supporting evidence, and with appropriate terminology. For example, BIBREF3 use writing styles represented by binarized character trigram features to identify featured articles. BIBREF4 and BIBREF0 explore the number of headings, images, and references in the article. BIBREF5 use nine readability scores, such as the percentage of difficult words in the document, to measure the quality of the article. Meta-data features, which are indirect indicators of article quality, are usually extracted from revision history, and the interaction between editors and articles. For example, one heuristic that has been proposed is that higher-quality articles have more edits BIBREF6 , BIBREF7 . BIBREF8 use the percentage of registered editors and the total number of editors of an article. Article\u2013editor dependencies have also been explored. For example, BIBREF9 use the authority of editors to measure the quality of Wikipedia articles, where the authority of editors is determined by the articles they edit. Deep learning approaches to predicting Wikipedia article quality have also been proposed. 
For example, BIBREF10 use a version of doc2vec BIBREF11 to represent articles, and feed the document embeddings into a four hidden layer neural network. BIBREF12 first obtain sentence representations by averaging words within a sentence, and then apply a biLSTM BIBREF13 to learn a document-level representation, which is combined with hand-crafted features as side information. BIBREF14 exploit two stacked biLSTMs to learn document representations.", "id": 758, "question": "What kind of model do they use?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": ["", ""], "context": "We treat document quality assessment as a classification problem, i.e., given a document, we predict its quality class (e.g., whether an academic paper should be accepted or rejected). The proposed model is a joint model that integrates visual features learned through Inception V3 with textual features learned through a biLSTM. In this section, we present the details of the visual and textual embeddings, and finally describe how we combine the two. We return to discuss hyper-parameter settings and the experimental configuration in the Experiments section.", "id": 759, "question": "Did they release their data set of academic papers?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": ["", ""], "context": "A wide range of models have been proposed to tackle the image classification task, such as VGG BIBREF34 , ResNet BIBREF35 , Inception V3 BIBREF1 , and Xception BIBREF36 . However, to the best of our knowledge, there is no existing work that has proposed to use visual renderings of documents to assess document quality. In this paper, we use Inception V3 pretrained on ImageNet (\u201cInception\u201d hereafter) to obtain visual embeddings of documents, noting that any image classifier could be applied to our task. The input to Inception is a visual rendering (screenshot) of a document, and the output is a visual embedding, which we will later integrate with our textual embedding.", "id": 760, "question": "Do the methods that work best on academic papers also work best on Wikipedia?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": ["59.4% on wikipedia dataset, 93.4% on peer-reviewed archive AI papers, 77.1% on peer-reviewed archive Computation and Language papers, and 79.9% on peer-reviewed archive Machine Learning papers"], "context": "We adopt a bi-directional LSTM model to generate textual embeddings for document quality assessment, following the method of BIBREF12 (\u201cbiLSTM\u201d hereafter). The input to biLSTM is a textual document, and the output is a textual embedding, which will later integrate with the visual embedding.", "id": 761, "question": "What is their system's absolute accuracy?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": ["It depends on the dataset. Experimental results over two datasets reveal that textual and visual features are complementary. "], "context": "The proposed joint model (\u201cJoint\u201d hereafter) combines the visual and textual embeddings (output of Inception and biLSTM) via a simple feed-forward layer and softmax over the document label set, as shown in Figure 2 . 
We optimize our model based on cross-entropy loss.", "id": 762, "question": "Which is more useful, visual or textual features?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": ["", "English"], "context": "In this section, we first describe the two datasets used in our experiments: (1) Wikipedia, and (2) arXiv. Then, we report the experimental details and results.", "id": 763, "question": "Which languages do they use?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": ["a sample of 29,794 wikipedia articles and 2,794 arXiv papers "], "context": "The Wikipedia dataset consists of articles from English Wikipedia, with quality class labels assigned by the Wikipedia community. Wikipedia articles are labelled with one of six quality classes, in descending order of quality: Featured Article (\u201cFA\u201d), Good Article (\u201cGA\u201d), B-class Article (\u201cB\u201d), C-class Article (\u201cC\u201d), Start Article (\u201cStart\u201d), and Stub Article (\u201cStub\u201d). A description of the criteria associated with the different classes can be found in the Wikipedia grading scheme page. The quality class of a Wikipedia article is assigned by Wikipedia reviewers or any registered user, who can discuss through the article's talk page to reach consensus. We constructed the dataset by first crawling all articles from each quality class repository, e.g., we get FA articles by crawling pages from the FA repository: https://en.wikipedia.org/wiki/Category:Featured_articles. This resulted in around 5K FA, 28K GA, 212K B, 533K C, 2.6M Start, and 3.2M Stub articles.", "id": 764, "question": "How large is their data set?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": ["", ""], "context": "As discussed above, our model has two main components \u2014 biLSTM and Inception\u2014 which generate textual and visual representations, respectively. For the biLSTM component, the documents are preprocessed as described in BIBREF12 , where an article is divided into sentences and tokenized using NLTK BIBREF38 . Words appearing more than 20 times are retained when building the vocabulary. All other words are replaced by the special UNK token. We use the pre-trained GloVe BIBREF39 50-dimensional word embeddings to represent words. For words not in GloVe, word embeddings are randomly initialized based on sampling from a uniform distribution $U(-1, 1)$ . All word embeddings are updated in the training process. We set the LSTM hidden layer size to 256. The concatenation of the forward and backward LSTMs thus gives us 512 dimensions for the document embedding. A dropout layer is applied at the sentence and document level, respectively, with a probability of 0.5.", "id": 765, "question": "Where do they get their ground truth quality judgments?", "title": "A Joint Model for Multimodal Document Quality Assessment"}, {"answers": [""], "context": "In the field of natural language processing (NLP), the most prevalent neural approach to obtaining sentence representations is to use recurrent neural networks (RNNs), where words in a sentence are processed in a sequential and recurrent manner. Along with their intuitive design, RNNs have shown outstanding performance across various NLP tasks e.g. 
language modeling BIBREF0 , BIBREF1 , machine translation BIBREF2 , BIBREF3 , BIBREF4 , text classification BIBREF5 , BIBREF6 , and parsing BIBREF7 , BIBREF8 .", "id": 766, "question": "Which models did they experiment with?", "title": "Cell-aware Stacked LSTMs for Modeling Sentences"}, {"answers": ["", ""], "context": "In this section, we give a detailed formulation of the architectures used in experiments.", "id": 767, "question": "What were their best results on the benchmark datasets?", "title": "Cell-aware Stacked LSTMs for Modeling Sentences"}, {"answers": [""], "context": "Throughout this paper, we denote matrices as boldface capital letters ( INLINEFORM0 ), vectors as boldface lowercase letters ( INLINEFORM1 ), and scalars as normal italic letters ( INLINEFORM2 ). For LSTM states, we denote a hidden state as INLINEFORM3 and a cell state as INLINEFORM4 . Also, a layer index of INLINEFORM5 or INLINEFORM6 is denoted by superscript and a time index is denoted by a subscript, i.e. INLINEFORM7 indicates the hidden state at time INLINEFORM8 and layer INLINEFORM9 . INLINEFORM10 means the element-wise multiplication between two vectors. We write INLINEFORM11 -th component of vector INLINEFORM12 as INLINEFORM13 . All vectors are assumed to be column vectors.", "id": 768, "question": "What were the baselines?", "title": "Cell-aware Stacked LSTMs for Modeling Sentences"}, {"answers": ["", ""], "context": "While there exist various versions of LSTM formulation, in this work we use the following, one of the most common versions: DISPLAYFORM0 DISPLAYFORM1 ", "id": 769, "question": "Which datasets were used?", "title": "Cell-aware Stacked LSTMs for Modeling Sentences"}, {"answers": ["", ""], "context": "Modern Standard Arabic (MSA) and Classical Arabic (CA) have two types of vowels, namely long vowels, which are explicitly written, and short vowels, aka diacritics, which are typically omitted in writing but are reintroduced by readers to properly pronounce words. Since diacritics disambiguate the sense of the words in context and their syntactic roles in sentences, automatic diacritic recovery is essential for applications such as text-to-speech and educational tools for language learners, who may not know how to properly verbalize words. Diacritics have two types, namely: core-word (CW) diacritics, which are internal to words and specify lexical selection; and case-endings (CE), which appear on the last letter of word stems, typically specifying their syntactic role. For example, the word \u201cktb\u201d (\u0643\u062a\u0628>) can have multiple diacritized forms such as \u201ckatab\u201d (\u0643\u064e\u062a\u064e\u0628> \u2013 meaning \u201che wrote\u201d) \u201ckutub\u201d (\u0643\u064f\u062a\u064f\u0628> \u2013 \u201cbooks\u201d). While \u201ckatab\u201d can only assume one CE, namely \u201cfatHa\u201d (\u201ca\u201d), \u201ckutub\u201d can accept the CEs: \u201cdamma\u201d (\u201cu\u201d) (nominal \u2013 ex. subject), \u201ca\u201d (accusative \u2013 ex. object), \u201ckasra\u201d (\u201ci\u201d) (genitive \u2013 ex. PP predicate), or their nunations. There are 14 diacritic combinations. When used as CEs, they typically convey specific syntactic information, namely: fatHa \u201ca\u201d for accusative nouns, past verbs and subjunctive present verbs; kasra \u201ci\u201d for genitive nouns; damma \u201cu\u201d for nominative nouns and indicative present verbs; sukun \u201co\u201d for jussive present verbs and imperative verbs. 
FatHa, kasra and damma can be preceded by shadda \u201c$\\sim $\u201d for gemination (consonant doubling) and/or converted to nunation forms following some grammar rules. In addition, according to Arabic orthography and phonology, some words take a virtual (null) \u201c#\u201d marker when they end with certain characters (ex: long vowels). This applies also to all non-Arabic words (ex: punctuation, digits, Latin words, etc.). Generally, function words, adverbs and foreign named entities (NEs) have set CEs (sukun, fatHa or virtual). Similar to other Semitic languages, Arabic allows flexible Verb-Subject-Object as well as Verb-Object-Subject constructs BIBREF1. Such flexibility creates inherent ambiguity, which is resolved by diacritics as in \u201cr$>$Y Emr Ely\u201d (\u0631\u0623\u0649 \u0639\u0645\u0631 \u0639\u0644\u064a> Omar saw Ali/Ali saw Omar). In the absence of diacritics it is not clear who saw whom. Similarly, in the sub-sentence \u201ckAn Alm&tmr AltAsE\u201d (\u0643\u0627\u0646 \u0627\u0644\u0645\u0624\u062a\u0645\u0631 \u0627\u0644\u062a\u0627\u0633\u0639>), if the last word, is a predicate of the verb \u201ckAn\u201d, then the sentence would mean \u201cthis conference was the ninth\u201d and would receive a fatHa (a) as a case ending. Conversely, if it was an adjective to the \u201cconference\u201d, then the sentence would mean \u201cthe ninth conference was ...\u201d and would receive a damma (u) as a case ending. Thus, a consideration of context is required for proper disambiguation. Due to the inter-word dependence of CEs, they are typically harder to predict compared to core-word diacritics BIBREF2, BIBREF3, BIBREF4, BIBREF5, with CEER of state-of-the-art systems being in double digits compared to nearly 3% for word-cores. Since recovering CEs is akin to shallow parsing BIBREF6 and requires morphological and syntactic processing, it is a difficult problem in Arabic NLP. In this paper, we focus on recovering both CW diacritics and CEs. We employ two separate Deep Neural Network (DNN) architectures for recovering both kinds of diacritic types. We use character-level and word-level bidirectional Long-Short Term Memory (biLSTM) based recurrent neural models for CW diacritic and CE recovery respectively. We train models for both Modern Standard Arabic (MSA) and Classical Arabic (CA). For CW diacritics, the model is informed using word segmentation information and a unigram language model. We also employ a unigram language model to perform post correction on the model output. We achieve word error rates for CW diacritics of 2.9% and 2.2% for MSA and CA. The MSA word error rate is 6% lower than the best results in the literature (the RDI diacritizer BIBREF7). The CE model is trained with a rich set of surface, morphological, and syntactic features. The proposed features would aid the biLSTM model in capturing syntactic dependencies indicated by Part-Of-Speech (POS) tags, gender and number features, morphological patterns, and affixes. We show that our model achieves a case ending error rate (CEER) of 3.7% for MSA and 2.5% for CA. For MSA, this CEER is more than 60% lower than other state-of-the-art systems such as Farasa and the RDI diacritizer, which are trained on the same dataset and achieve CEERs of 10.7% and 14.4% respectively. 
The contributions of this paper are as follows:", "id": 770, "question": "what datasets were used?", "title": "Arabic Diacritic Recovery Using a Feature-Rich biLSTM Model"}, {"answers": ["", ""], "context": "Automatic diacritics restoration has been investigated for many different language such as European languages (e.g. Romanian BIBREF8, BIBREF9, French BIBREF10, and Croatian BIBREF11), African languages (e.g. Yorba BIBREF12), Southeast Asian languages (e.g. Vietnamese BIBREF13), Semitic language (e.g. Arabic and Hebrew BIBREF14), and many others BIBREF15. For many languages, diacritic (or accent restoration) is limited to a handful of letters. However, for Semitic languages, diacritic recovery extends to most letters. Many general approaches have been explored for this problem including linguistically motivated rule-based approaches, machine learning approaches, such as Hidden Markov Models (HMM) BIBREF14 and Conditional Random Fields (CRF) BIBREF16, and lately deep learning approaches such as Arabic BIBREF17, BIBREF18, BIBREF19, Slovak BIBREF20, and Yorba BIBREF12. Aside from rule-based approaches BIBREF21, different methods were used to recover diacritics in Arabic text. Using a hidden Markov model (HMM) BIBREF14, BIBREF22 with an input character sequence, the model attempts to find the best state sequence given previous observations. BIBREF14 reported a 14% word error rate (WER) while BIBREF22 achieved a 4.1% diacritic error rate (DER) on the Quran (CA). BIBREF23 combined both morphological, acoustic, and contextual features to build a diacritizer trained on FBIS and LDC CallHome ECA collections. They reported a 9% (DER) without CE, and 28% DER with CE. BIBREF24 employed a cascade of a finite state transducers. The cascade stacked a word language model (LM), a charachter LM, and a morphological model. The model achieved an accuracy of 7.33% WER without CE and and 23.61% WER with CE. BIBREF25 employed a maximum entropy model for sequence classification. The system was trained on the LDC\u2019s Arabic Treebank (ATB) and evaluated on a 600 articles from An-Nahar Newspaper (340K words) and achieved 5.5% DER and 18% WER on words without CE.", "id": 771, "question": "what are the previous state of the art?", "title": "Arabic Diacritic Recovery Using a Feature-Rich biLSTM Model"}, {"answers": [""], "context": "For MSA, we acquired the diacritized corpus that was used to train the RDI BIBREF7 diacritizer and the Farasa diacritizer BIBREF31. The corpus contains 9.7M tokens with approximately 194K unique surface forms (excluding numbers and punctuation marks). The corpus covers multiple genres such as politics and sports and is a mix of MSA and CA. This corpus is considerably larger than the Arabic Treebank BIBREF35 and is more consistent in its diacritization. For testing, we used the freely available WikiNews test set BIBREF31, which is composed of 70 MSA WikiNews articles (18,300 tokens) and evenly covers a variety of genres including politics, economics, health, science and technology, sports, arts and culture.", "id": 772, "question": "what surface-level features are used?", "title": "Arabic Diacritic Recovery Using a Feature-Rich biLSTM Model"}, {"answers": ["POS, gender/number and stem POS"], "context": "Arabic words are typically derived from a limited set of roots by fitting them into so-called stem-templates (producing stems) and may accept a variety of prefixes and suffixes such as prepositions, determiners, and pronouns (producing words). 
Word stems specify the lexical selection and are typically unaffected by the attached affixes. We used 4 feature types, namely:", "id": 773, "question": "what linguistics features are used?", "title": "Arabic Diacritic Recovery Using a Feature-Rich biLSTM Model"}, {"answers": ["More than 2,100 texts were paired with 15 questions each, resulting in a total number of approx. 32,000 annotated questions. 13% of the questions are not answerable. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). The final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%).", "Distribution of category labels, number of answerable-not answerable questions, number of text-based and script-based questions, average text, question, and answer length, number of questions per text"], "context": "Ambiguity and implicitness are inherent properties of natural language that cause challenges for computational models of language understanding. In everyday communication, people assume a shared common ground which forms a basis for efficiently resolving ambiguities and for inferring implicit information. Thus, recoverable information is often left unmentioned or underspecified. Such information may include encyclopedic and commonsense knowledge. This work focuses on commonsense knowledge about everyday activities, so-called scripts.", "id": 774, "question": "what dataset statistics are provided?", "title": "MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge"}, {"answers": [""], "context": "Machine comprehension datasets consist of three main components: texts, questions and answers. In this section, we describe our data collection for these 3 components. We first describe a series of pilot studies that we conducted in order to collect commonsense inference questions (Section SECREF4 ). In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk). Section SECREF17 gives information about some necessary postprocessing steps and the dataset validation. Lastly, Section SECREF19 gives statistics about the final dataset.", "id": 775, "question": "what is the size of their dataset?", "title": "MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge"}, {"answers": ["", ""], "context": "As a starting point for our pilots, we made use of texts from the InScript corpus BIBREF10 , which provides stories centered around everyday situations (see Section SECREF7 ). We conducted three different pilot studies to determine the best way of collecting questions that require inference over commonsense knowledge:", "id": 776, "question": "what crowdsourcing platform was used?", "title": "MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge"}, {"answers": ["The data was collected using 3 components: describe a series of pilot studies that were conducted to collect commonsense inference questions, then discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk and gives information about some necessary postprocessing steps and the dataset validation."], "context": "As mentioned in the previous section, we decided to base the question collection on script scenarios rather than specific texts. 
As a starting point for our data collection, we use scenarios from three script data collections BIBREF3 , BIBREF11 , BIBREF12 . Together, these resources contain more than 200 scenarios. To make sure that scenarios have different complexity and content, we selected 80 of them and came up with 20 new scenarios. Together with the 10 scenarios from InScript, we end up with a total of 110 scenarios.", "id": 777, "question": "how was the data collected?", "title": "MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge"}, {"answers": ["For SLC task, the \"ltuorp\" team has the best performing model (0.6323/0.6028/0.6649 for F1/P/R respectively) and for FLC task the \"newspeak\" team has the best performing model (0.2488/0.2863/0.2201 for F1/P/R respectively)."], "context": "In the age of information dissemination without quality control, it has enabled malicious users to spread misinformation via social media and aim individual users with propaganda campaigns to achieve political and financial gains as well as advance a specific agenda. Often disinformation is complied in the two major forms: fake news and propaganda, where they differ in the sense that the propaganda is possibly built upon true information (e.g., biased, loaded language, repetition, etc.).", "id": 778, "question": "What is best performing model among author's submissions, what performance it had?", "title": "Neural Architectures for Fine-Grained Propaganda Detection in News"}, {"answers": ["Linguistic", ""], "context": "Some of the propaganda techniques BIBREF3 involve word and phrases that express strong emotional implications, exaggeration, minimization, doubt, national feeling, labeling , stereotyping, etc. This inspires us in extracting different features (Table TABREF1) including the complexity of text, sentiment, emotion, lexical (POS, NER, etc.), layout, etc. To further investigate, we use topical features (e.g., document-topic proportion) BIBREF4, BIBREF5, BIBREF6 at sentence and document levels in order to determine irrelevant themes, if introduced to the issue being discussed (e.g., Red Herring).", "id": 779, "question": "What extracted features were most influencial on performance?", "title": "Neural Architectures for Fine-Grained Propaganda Detection in News"}, {"answers": ["The best ensemble topped the best single model by 0.029 in F1 score on dev (external).", "They increased F1 Score by 0.029 in Sentence Level Classification, and by 0.044 in Fragment-Level classification"], "context": "Figure FIGREF2 (left) describes the three components of our system for SLC task: features, classifiers and ensemble. The arrows from features-to-classifier indicate that we investigate linguistic, layout and topical features in the two binary classifiers: LogisticRegression and CNN. For CNN, we follow the architecture of DBLP:conf/emnlp/Kim14 for sentence-level classification, initializing the word vectors by FastText or BERT. We concatenate features in the last hidden layer before classification.", "id": 780, "question": "Did ensemble schemes help in boosting peformance, by how much?", "title": "Neural Architectures for Fine-Grained Propaganda Detection in News"}, {"answers": ["BERT"], "context": "Figure FIGREF2 (right) describes our system for FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\\_e$) and character embeddings $c\\_e$, token-level features ($t\\_f$) such as polarity, POS, NER, etc. 
(2) LSTM-CRF+Multi-grain that jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add binary sentence classification loss to sequence tagging weighted by a factor of $\\alpha $. (3) LSTM-CRF+Multi-task that performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification).", "id": 781, "question": "Which basic neural architecture perform best by itself?", "title": "Neural Architectures for Fine-Grained Propaganda Detection in News"}, {"answers": ["For SLC task : Ituorp, ProperGander and YMJA teams had better results.\nFor FLC task: newspeak and Antiganda teams had better results."], "context": "Data: While the SLC task is binary, the FLC consists of 18 propaganda techniques BIBREF3. We split (80-20%) the annotated corpus into 5-folds and 3-folds for SLC and FLC tasks, respectively. The development set of each the folds is represented by dev (internal); however, the un-annotated corpus used in leaderboard comparisons by dev (external). We remove empty and single token sentences after tokenization. Experimental Setup: We use PyTorch framework for the pre-trained BERT model (Bert-base-cased), fine-tuned for SLC task. In the multi-granularity loss, we set $\\alpha = 0.1$ for sentence classification based on dev (internal, fold1) scores. We use BIO tagging scheme of NER in FLC task. For CNN, we follow DBLP:conf/emnlp/Kim14 with filter-sizes of [2, 3, 4, 5, 6], 128 filters and 16 batch-size. We compute binary-F1and macro-F1 BIBREF12 in SLC and FLC, respectively on dev (internal).", "id": 782, "question": "What participating systems had better results than ones authors submitted?", "title": "Neural Architectures for Fine-Grained Propaganda Detection in News"}, {"answers": ["An output layer for each task", "Multi-tasking is addressed by neural sequence tagger based on LSTM-CRF and linguistic features, while multi-granularity is addressed by ensemble of LSTM-CRF and BERT."], "context": "Table TABREF10 shows the scores on dev (internal and external) for SLC task. Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\\tau \\ge 0.35$ is found optimal.", "id": 783, "question": "What is specific to multi-granularity and multi-tasking neural arhiteture design?", "title": "Neural Architectures for Fine-Grained Propaganda Detection in News"}, {"answers": ["", ""], "context": "Massive Open Online Courses (MOOCs) have strived to bridge the social gap in higher education by bringing quality education from reputed universities to students at large. Such massive scaling through online classrooms, however, disrupt co-located, synchronous two-way communication between the students and the instructor.", "id": 784, "question": "Do they report results only on English data?", "title": "When to reply? 
Context Sensitive Models to Predict Instructor Interventions in MOOC Forums"}, {"answers": [""], "context": "A thread INLINEFORM0 consists of a series of posts INLINEFORM1 through INLINEFORM2 where INLINEFORM3 is an instructor's post when INLINEFORM4 is intervened, if applicable. INLINEFORM5 is considered intervened if an instructor had posted at least once. The problem of predicting instructor intervention is cast as a binary classification problem. Intervened threads are predicted as 1 given while non-intervened threads are predicted as 0 given posts INLINEFORM6 through INLINEFORM7 .", "id": 785, "question": "What aspects of discussion are relevant to instructor intervention, according to the attention mechanism?", "title": "When to reply? Context Sensitive Models to Predict Instructor Interventions in MOOC Forums"}, {"answers": ["", ""], "context": "Context has been used and modelled in various ways for different problems in discussion forums. In a work on a closely related problem of forum thread retrieval BIBREF2 models context using inter-post discourse e.g., Question-Answer. BIBREF3 models the structural dependencies and relationships between forum posts using a conditional random field in their problem to infer the reply structure. Unlike BIBREF2 , BIBREF3 can be used to model any structural dependency and is, therefore, more general. In this paper, we seek to infer general dependencies between a reply and its previous context whereas BIBREF3 inference is limited to pairs of posts. More recently BIBREF4 proposed a context based model which factorises attention over threads of different lengths. Differently, we do not model length but the context before a post. However, our attention models cater to threads of all lengths.", "id": 786, "question": "What was the previous state of the art for this task?", "title": "When to reply? Context Sensitive Models to Predict Instructor Interventions in MOOC Forums"}, {"answers": [""], "context": "The problem of predicting instructor intervention in MOOCs was proposed by BIBREF0 . Later BIBREF7 evaluated baseline models by BIBREF0 over a larger corpus and found the results to vary widely across MOOCs. Since then subsequent works have used similar diverse evaluations on the same prediction problem BIBREF1 , BIBREF8 . BIBREF1 proposed models with discourse features to enable better prediction over unseen MOOCs. BIBREF8 recently showed interventions on Coursera forums to be biased by the position at which a thread appears to an instructor viewing the forum interface and proposed methods for debiased prediction.", "id": 787, "question": "What type of latent context is used to predict instructor intervention?", "title": "When to reply? Context Sensitive Models to Predict Instructor Interventions in MOOC Forums"}, {"answers": ["", ""], "context": "We build and test our MMT models on the Multi30K dataset BIBREF21 . Each image in Multi30K contains one English (EN) description taken from Flickr30K BIBREF22 and human translations into German (DE), French (FR) and Czech BIBREF23 , BIBREF24 , BIBREF25 . The dataset contains 29,000 instances for training, 1,014 for development, and 1,000 for test. We only experiment with German and French, which are languages for which we have in-house expertise for the type of analysis we present. 
In addition to the official Multi30K test set (test 2016), we also use the test set from the latest WMT evaluation competition, test 2018 BIBREF25 .", "id": 788, "question": "Do they report results only on English dataset?", "title": "Distilling Translations with Visual Awareness"}, {"answers": [""], "context": "In addition to using the Multi30K dataset as is (standard setup), we probe the ability of our models to address the three linguistic phenomena where additional context has been proved important (Section ): ambiguities, gender-neutral words and noisy input. In a controlled experiment where we aim to remove the influence of frequency biases, we degrade the source sentences by masking words through three strategies to replace words by a placeholder: random source words, ambiguous source words and gender unmarked source words. The procedure is applied to the train, validation and test sets. For the resulting dataset generated for each setting, we compare models having access to text-only context versus additional text and multimodal contexts. We seek to get insights into the contribution of each type of context to address each type of degradation.", "id": 789, "question": "What dataset does this approach achieve state of the art results on?", "title": "Distilling Translations with Visual Awareness"}, {"answers": ["No data. Pretrained model is used."], "context": "Pre-trained models BIBREF0, BIBREF1 have received much of attention recently thanks to their impressive results in many down stream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-short cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing.", "id": 790, "question": "How much training data from the non-English language is used by the system?", "title": "From English To Foreign Languages: Transferring Pre-trained Language Models"}, {"answers": ["", ""], "context": "We first provide some background of pre-trained language models. Let $_e$ be English word-embeddings and $\\Psi ()$ be the Transformer BIBREF10 encoder with parameters $$. Let $_{w_i}$ denote the embedding of word $w_i$ (i.e., $_{w_i} = _e[w_1]$). We omit positional embeddings and bias for clarity. A pre-trained LM typically performs the following computations: (i) transform a sequence of input tokens to contextualized representations $[_{w_1},\\dots ,_{w_n}] = \\Psi (_{w_1}, \\dots , _{w_n}; )$, and (ii) predict an output word $y_i$ at $i^{\\text{th}}$ position $p(y_i | _{w_i}) \\propto \\exp (_{w_i}^\\top _{y_i})$.", "id": 791, "question": "Is the system tested on low-resource languages?", "title": "From English To Foreign Languages: Transferring Pre-trained Language Models"}, {"answers": ["", ""], "context": "Our approach to learn the initial foreign word embeddings $_f \\in ^{|V_f| \\times d}$ is based on the idea of mapping the trained English word embeddings $_e \\in ^{|V_e| \\times d}$ to $_f$ such that if a foreign word and an English word are similar in meaning then their embeddings are similar. 
Borrowing the idea of universal lexical sharing from BIBREF11, we represent each foreign word embedding $_f[i] \\in ^d$ as a linear combination of English word embeddings $_e[j] \\in ^d$", "id": 792, "question": "What languages are the model transferred to?", "title": "From English To Foreign Languages: Transferring Pre-trained Language Models"}, {"answers": ["Build a bilingual language model, learn the target language specific parameters starting from a pretrained English LM , fine-tune both English and target model to obtain the bilingual LM."], "context": "Given an English-foreign parallel corpus, we can estimate word translation probability $p(e\\,|\\,f)$ for any (English-foreign) pair $(e, f)$ using popular word-alignment BIBREF12 toolkits such as fast-align BIBREF13. We then assign:", "id": 793, "question": "How is the model transferred to other languages?", "title": "From English To Foreign Languages: Transferring Pre-trained Language Models"}, {"answers": ["", ""], "context": "For low resource languages, parallel data may not be available. In this case, we rely only on monolingual data (e.g., Wikipedias). We estimate word translation probabilities from word embeddings of the two languages. Word vectors of these languages can be learned using fastText BIBREF14 and then are aligned into a shared space with English BIBREF15, BIBREF16. Unlike learning contextualized representations, learning word vectors is fast and computationally cheap. Given the aligned vectors $\\bar{}_f$ of foreign and $\\bar{}_e$ of English, we calculate the word translation matrix $\\in ^{|V_f|\\times |V_e|}$ as", "id": 794, "question": "What metrics are used for evaluation?", "title": "From English To Foreign Languages: Transferring Pre-trained Language Models"}, {"answers": [""], "context": "After initializing foreign word-embeddings, we replace English word-embeddings in the English pre-trained LM with foreign word-embeddings to obtain the foreign LM. We then fine-tune only foreign word-embeddings on monolingual data. The training objective is the same as the training objective of the English pre-trained LM (i.e., masked LM for BERT). Since the trained encoder $\\Psi ()$ is good at capturing association, the purpose of this step is to further optimize target embeddings such that the target LM can utilized the trained encoder for association task. For example, if the words Albert Camus presented in a French input sequence, the self-attention in the encoder more likely attends to words absurde and existentialisme once their embeddings are tuned.", "id": 795, "question": "What datasets are used for evaluation?", "title": "From English To Foreign Languages: Transferring Pre-trained Language Models"}, {"answers": [""], "context": "Users of photo-sharing websites such as Flickr often provide short textual descriptions in the form of tags to help others find the images. With the availability of GPS systems in current electronic devices such as smartphones, latitude and longitude coordinates are nowadays commonly made available as well. The tags associated with such georeferenced photos often describe the location where these photos were taken, and Flickr can thus be regarded as a source of environmental information. The use of Flickr for modelling urban environments has already received considerable attention. For instance, various approaches have been proposed for modelling urban regions BIBREF0 , and for identifying points-of-interest BIBREF1 and itineraries BIBREF2 , BIBREF3 . 
However, the usefulness of Flickr for characterizing the natural environment, which is the focus of this paper, is less well-understood.", "id": 796, "question": "what are the existing approaches?", "title": "Embedding Geographic Locations for Modelling the Natural Environment using Flickr Tags and Structured Data"}, {"answers": ["", ""], "context": "The use of low-dimensional vector space embeddings for representing objects has already proven effective in a large number of applications, including natural language processing (NLP), image processing, and pattern recognition. In the context of NLP, the most prominent example is that of word embeddings, which represent word meaning using vectors of typically around 300 dimensions. A large number of different methods for learning such word embeddings have already been proposed, including Skip-gram and the Continuous Bag-of-Words (CBOW) model BIBREF8 , GloVe BIBREF9 , and fastText BIBREF14 . They have been applied effectively in many downstream NLP tasks such as sentiment analysis BIBREF15 , part of speech tagging BIBREF16 , BIBREF17 , and text classification BIBREF18 , BIBREF19 . The model we consider in this paper builds on GloVe, which was designed to capture linear regularities of word-word co-occurrence. In GloVe, there are two word vectors INLINEFORM0 and INLINEFORM1 for each word in the vocabulary, which are learned by minimizing the following objective: DISPLAYFORM0 ", "id": 797, "question": "what dataset is used in this paper?", "title": "Embedding Geographic Locations for Modelling the Natural Environment using Flickr Tags and Structured Data"}, {"answers": [""], "context": "Keyphrase generation is the task of automatically predicting keyphrases given a source text. Desired keyphrases are often multi-word units that summarize the high-level meaning and highlight certain important topics or information of the source text. Consequently, models that can successfully perform this task should be capable of not only distilling high-level information from a document, but also locating specific, important snippets therein.", "id": 798, "question": "How is keyphrase diversity measured?", "title": "Generating Diverse Numbers of Diverse Keyphrases"}, {"answers": ["they obtained computer science related topics by looking at titles and user-assigned tags", ""], "context": "Traditional keyphrase extraction has been studied extensively in past decades. In most existing literature, keyphrase extraction has been formulated as a two-step process. First, lexical features such as part-of-speech tags are used to determine a list of phrase candidates by heuristic methods BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . Second, a ranking algorithm is adopted to rank the candidate list and the top ranked candidates are selected as keyphrases. A wide variety of methods were applied for ranking, such as bagged decision trees BIBREF8 , BIBREF9 , Multi-Layer Perceptron, Support Vector Machine BIBREF9 and PageRank BIBREF10 , BIBREF11 , BIBREF12 . Recently, BIBREF13 , BIBREF14 , BIBREF15 used sequence labeling models to extract keyphrases from text. 
Similarly, BIBREF16 used Pointer Networks to point to the start and end positions of keyphrases in a source text.", "id": 799, "question": "How was the StackExchange dataset collected?", "title": "Generating Diverse Numbers of Diverse Keyphrases"}, {"answers": [""], "context": "Sequence to Sequence (Seq2Seq) learning was first introduced by BIBREF17 ; together with the soft attention mechanism of BIBREF18 , it has been widely used in natural language generation tasks. BIBREF19 , BIBREF20 used a mixture of generation and pointing to overcome the problem of large vocabulary size. BIBREF21 , BIBREF22 applied Seq2Seq models on summary generation tasks, while BIBREF23 , BIBREF24 generated questions conditioned on documents and answers from machine comprehension datasets. Seq2Seq was also applied on neural sentence simplification BIBREF25 and paraphrase generation tasks BIBREF26 .", "id": 800, "question": "What does the TextWorld ACG dataset contain?", "title": "Generating Diverse Numbers of Diverse Keyphrases"}, {"answers": ["", "around 332k questions"], "context": "Given a piece of source text, our objective is to generate a variable number of multi-word phrases. To this end, we opt for the sequence-to-sequence framework (Seq2Seq) as the basis of our model, combined with attention and pointer softmax mechanisms in the decoder.", "id": 801, "question": "What is the size of the StackExchange dataset?", "title": "Generating Diverse Numbers of Diverse Keyphrases"}, {"answers": ["CopyRNN (Meng et al., 2017), Multi-Task (Ye and Wang, 2018), and TG-Net (Chen et al., 2018b)", ""], "context": "In the following subsections, we use INLINEFORM0 to denote input text tokens, INLINEFORM1 to denote token embeddings, INLINEFORM2 to denote hidden states, and INLINEFORM3 to denote output text tokens. Superscripts denote time-steps in a sequence, and subscripts INLINEFORM4 and INLINEFORM5 indicate whether a variable resides in the encoder or the decoder of the model, respectively. The absence of a superscript indicates multiplicity in the time dimension. INLINEFORM6 refers to a linear transformation and INLINEFORM7 refers to it followed by a non-linear activation function INLINEFORM8 . Angled brackets, INLINEFORM9 , denote concatenation.", "id": 802, "question": "What were the baselines?", "title": "Generating Diverse Numbers of Diverse Keyphrases"}, {"answers": [""], "context": "There are usually multiple keyphrases for a given source text because each keyphrase represents certain aspects of the text. Therefore keyphrase diversity is desired for the keyphrase generation. Most previous keyphrase generation models generate multiple phrases by over-generation, which is highly prone to generate similar phrases due to the nature of beam search. Given our objective to generate variable numbers of keyphrases, we need to adopt new strategies for achieving better diversity in the output.", "id": 803, "question": "What two metrics are proposed?", "title": "Generating Diverse Numbers of Diverse Keyphrases"}, {"answers": ["", ""], "context": "It is well known that language has certain structural properties which allows natural language speakers to make \u201cinfinite use of finite means\" BIBREF3 . This structure allows us to generalize beyond the typical machine learning definition of generalization BIBREF4 (which considers performance on the distribution that generated the training set), permitting the understanding of any utterance sharing the same structure, regardless of probability. 
For example, sentences of length 100 typically do not appear in natural text or speech (our personal 'training set'), but can be understood regardless due to their structure. We refer to this notion as linguistic generalization .", "id": 804, "question": "Can the findings of this paper be generalized to a general-purpose task?", "title": "The Fine Line between Linguistic Generalization and Failure in Seq2Seq-Attention Models"}, {"answers": [""], "context": "Real world NLP tasks are complex, and as such, it can be difficult to precisely define what a model should and should not learn during training. As done in previous work BIBREF8 , BIBREF9 , we ease analysis by looking at a simple formal task. The task is set up to mimic (albeit, in an oversimplified manner) the input-output symbol alignments and local syntactic properties that models must learn in many natural language tasks, such as translation, tagging and summarization.", "id": 805, "question": "Why is the proposed task a good proxy for the general-purpose sequence to sequence tasks?", "title": "The Fine Line between Linguistic Generalization and Failure in Seq2Seq-Attention Models"}, {"answers": ["", ""], "context": "The apparent rise in political incivility has attracted substantial attention from scholars in recent years. These studies have largely focused on the extent to which politicians and elected officials are increasingly employing rhetoric that appears to violate norms of civility BIBREF0 , BIBREF1 . For the purposes of our work, we use the incidence of offensive rhetoric as a stand-in for incivility. The 2016 US presidential election was an especially noteworthy case in this regard, particularly in terms of Donald Trump's campaign which frequently violated norms of civility both in how he spoke about broad groups in the public (such as Muslims, Mexicans, and African Americans) and the attacks he leveled at his opponents BIBREF2 . The consequences of incivility are thought to be crucial to the functioning of democracy since \u201cpublic civility and interpersonal politeness sustain social harmony and allow people who disagree with one another to maintain ongoing relationships\" BIBREF3 .", "id": 806, "question": "What was the baseline?", "title": "Measuring Offensive Speech in Online Political Discourse"}, {"answers": ["", ""], "context": "Our study makes use of multiple datasets in order to identify and characterize trends in offensive speech.", "id": 807, "question": "What was their system's performance?", "title": "Measuring Offensive Speech in Online Political Discourse"}, {"answers": [""], "context": "In order to identify offensive speech, we propose a fully automated technique that classifies comments into two classes: Offensive and Not Offensive.", "id": 808, "question": "What other political events are included in the database?", "title": "Measuring Offensive Speech in Online Political Discourse"}, {"answers": [""], "context": "At a high-level, our approach works as follows:", "id": 809, "question": "What classifier did they use?", "title": "Measuring Offensive Speech in Online Political Discourse"}, {"answers": ["The Conversations Gone Awry dataset is labelled as either containing a personal attack from within (i.e. hostile behavior by one user in the conversation directed towards another) or remaining civil throughout. 
The Reddit Change My View dataset is labelled with whether or not a conversation eventually had a comment removed by a moderator for violation of Rule 2: \"Don't be rude or hostile to other users.\""], "context": "\u201cCh\u00e9 saetta previsa vien pi\u00f9 lenta.\u201d", "id": 810, "question": "What labels for antisocial events are available in datasets?", "title": "Trouble on the Horizon: Forecasting the Derailment of Online Conversations as they Develop"}, {"answers": ["", "An expanded version of the existing 'Conversations Gone Awry' dataset and the ChangeMyView dataset, a subreddit whose only annotation is whether the conversation required action by the Reddit moderators. "], "context": "", "id": 811, "question": "What are the two datasets the model is applied to?", "title": "Trouble on the Horizon: Forecasting the Derailment of Online Conversations as they Develop"}, {"answers": ["", ""], "context": "Coronavirus disease 2019 (COVID-19) is an infectious disease that has affected more than one million individuals all over the world and caused more than 55,000 deaths, as of April 3, 2020. The science community has been working very actively to understand this new disease and make diagnosis and treatment guidelines based on the findings. One major stream of efforts is focused on discovering the correlation between radiological findings in the lung areas and COVID-19. There have been several works BIBREF0, BIBREF1 publishing such results. However, existing studies are mostly conducted separately by different hospitals and medical institutes. Due to geographic affinity, the populations served by different hospitals have different genetic, social, and ethnic characteristics. As a result, the radiological findings from COVID-19 patient cases in different populations are different. This population bias incurs inconsistent or even conflicting conclusions regarding the correlation between radiological findings and COVID-19. As a result, medical professionals cannot make informed decisions on how to use radiological findings to guide diagnosis and treatment of COVID-19.", "id": 812, "question": "What is the CORD-19 dataset?", "title": "Identifying Radiological Findings Related to COVID-19 from Medical Literature"}, {"answers": [""], "context": "We used the COVID-19 Open Research Dataset (CORD-19) BIBREF2 for our study. In response to the COVID-19 pandemic, the White House and a coalition of research groups prepared the CORD-19 dataset. It contains over 45,000 scholarly articles, including over 33,000 with full text, about COVID-19, SARS-CoV-2, and related coronaviruses. These articles are contributed by hospitals and medical institutes all over the world. Since the outbreak of COVID-19 began after November 2019, we select articles published after November 2019 to study, which include a total of 2081 articles and about 360000 sentences. Many articles report the radiological findings related to COVID-19. Table TABREF4 shows some examples.", "id": 813, "question": "How large is the collection of COVID-19 literature?", "title": "Identifying Radiological Findings Related to COVID-19 from Medical Literature"}, {"answers": ["", ""], "context": "Automatic summarization, machine translation, question answering, and semantic parsing operations are useful for processing, analyzing, and extracting meaningful information from text. 
However, when applied to long texts, these tasks usually require some minimal syntactic structure to be identified, such as sentences BIBREF0 , BIBREF1 , BIBREF2 , which always end with a period (\u201c.\u201d) in English BIBREF3 .", "id": 814, "question": "Which deep learning architecture do they use for sentence segmentation?", "title": "Semi-supervised Thai Sentence Segmentation Using Local and Distant Word Representations"}, {"answers": [""], "context": "This section includes three subsections. The first subsection concerns Thai sentence segmentation, which is the main focus of this work. The task of English punctuation restoration, which is similar to our main task, is described in the second subsection. The last subsection describes the original Cross-View Training initially proposed in BIBREF20 .", "id": 815, "question": "How do they utilize unlabeled data to improve model representations?", "title": "Semi-supervised Thai Sentence Segmentation Using Local and Distant Word Representations"}, {"answers": ["a perceptual illusion, where listening to a speech sound while watching a mouth pronounce a different sound changes how the audio is heard", "When the perception of what we hear is influenced by what we see."], "context": "A growing body of work on adversarial examples has identified that for machine-learning (ML) systems that operate on high-dimensional data, for nearly every natural input there exists a small perturbation of the point that will be misclassified by the system, posing a threat to its deployment in certain critical settings BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . More broadly, the susceptibility of ML systems to adversarial examples has prompted a re-examination of whether current ML systems are truly learning or if they are assemblages of tricks that are effective yet brittle and easily fooled BIBREF9 . Implicit in this line of reasoning is the assumption that instances of \u201dreal\" learning, such as human cognition, yield extremely robust systems. Indeed, at least in computer vision, human perception is regarded as the gold-standard for robustness to adversarial examples.", "id": 816, "question": "What is the McGurk effect?", "title": "A Surprising Density of Illusionable Natural Speech"}, {"answers": [""], "context": "Illusionable instances for humans are similar to adversarial examples for ML systems. Strictly speaking, however, our investigation of the density of natural language for which McGurk illusions can be created, is not the human analog of adversarial examples. The adversarial examples for ML systems are datapoints that are misclassified, despite being extremely similar to a typical datapoint (that is correctly classified). Our illusions of misdubbed audio are not extremely close to any typically encountered input, since our McGurk samples have auditory signals corresponding to one phoneme/word and visual signals corresponding to another. Also, there is a compelling argument for why the McGurk confusion occurs, namely that human speech perception is bimodal (audio-visual) in nature when lip reading is available BIBREF20 , BIBREF21 . To the best of our knowledge, prior to our work, there has been little systematic investigation of the extent to which the McGurk effect, or other types of illusions, can be made dense in the set of instances encountered in everyday life. 
The closest work is BIBREF22 , where the authors demonstrate that some adversarial examples for computer vision systems also fool humans when humans were given less than a tenth of second to view the image. However, some of these examples seem less satisfying as the perturbation acts as a pixel-space interpolation between the original image and the \u201cincorrect\u201d class. This results in images that are visually borderline between two classes, and as such, do not provide a sense of illusion to the viewer. In general, researchers have not probed the robustness of human perception with the same tools, intent, or perspective, with which the security community is currently interrogating the robustness of ML systems.", "id": 817, "question": "Are humans and machine learning systems fooled by the same kinds of illusions?", "title": "A Surprising Density of Illusionable Natural Speech"}, {"answers": ["", ""], "context": "Machine translation has made remarkable progress, and studies claiming it to reach a human parity are starting to appear BIBREF0. However, when evaluating translations of the whole documents rather than isolated sentences, human raters show a stronger preference for human over machine translation BIBREF1. These findings emphasize the need to shift towards context-aware machine translation both from modeling and evaluation perspective.", "id": 818, "question": "how many humans evaluated the results?", "title": "Context-Aware Monolingual Repair for Neural Machine Translation"}, {"answers": ["", ""], "context": "We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations of a context-agnostic MT system. It does not use any states of a trained MT model whose outputs it corrects and therefore can in principle be trained to correct translations from any black-box MT system.", "id": 819, "question": "what was the baseline?", "title": "Context-Aware Monolingual Repair for Neural Machine Translation"}, {"answers": ["Four discourse phenomena - deixis, lexical cohesion, VP ellipsis, and ellipsis which affects NP inflection."], "context": "We use contrastive test sets for evaluation of discourse phenomena for English-Russian by BIBREF11. These test sets allow for testing different kinds of phenomena which, as we show, can be captured from monolingual data with varying success. In this section, we provide test sets statistics and briefly describe the tested phenomena. For more details, the reader is referred to BIBREF11.", "id": 820, "question": "what phenomena do they mention is hard to capture?", "title": "Context-Aware Monolingual Repair for Neural Machine Translation"}, {"answers": ["On average 0.64 "], "context": "There are four test sets in the suite. Each test set contains contrastive examples. It is specifically designed to test the ability of a system to adapt to contextual information and handle the phenomenon under consideration. Each test instance consists of a true example (a sequence of sentences and their reference translation from the data) and several contrastive translations which differ from the true one only in one specific aspect. All contrastive translations are correct and plausible translations at the sentence level, and only context reveals the inconsistencies between them. The system is asked to score each candidate translation, and we compute the system accuracy as the proportion of times the true translation is preferred to the contrastive ones. Test set statistics are shown in Table TABREF15. 
The suites for deixis and lexical cohesion are split into development and test sets, with 500 examples from each used for validation purposes and the rest for testing. Convergence of both consistency scores on these development sets and BLEU score on a general development set are used as early stopping criteria in models training. For ellipsis, there is no dedicated development set, so we evaluate on all the ellipsis data and do not use it for development.", "id": 821, "question": "by how much did the BLEU score improve?", "title": "Context-Aware Monolingual Repair for Neural Machine Translation"}, {"answers": ["", "Named Entity Recognition, including entities such as proteins, genes, diseases, treatments, drugs, etc. in the biomedical domain"], "context": "The explosion of available scientific articles in the Biomedical domain has led to the rise of Biomedical Information Extraction (BioIE). BioIE systems aim to extract information from a wide spectrum of articles including medical literature, biological literature, electronic health records, etc. that can be used by clinicians and researchers in the field. Often the outputs of BioIE systems are used to assist in the creation of databases, or to suggest new paths for research. For example, a ranked list of interacting proteins that are extracted from biomedical literature, but are not present in existing databases, can allow researchers to make informed decisions about which protein/gene to study further. Interactions between drugs are necessary for clinicians who simultaneously administer multiple drugs to their patients. A database of diseases, treatments and tests is beneficial for doctors consulting in complicated medical cases.", "id": 822, "question": "What is NER?", "title": "A Biomedical Information Extraction Primer for NLP Researchers"}, {"answers": [""], "context": "Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. Fact extraction involves extraction of Named Entities from a corpus, usually given a certain ontology. When compared to NER in the domain of general text, the biomedical domain has some characteristic challenges:", "id": 823, "question": "Does the paper explore extraction from electronic health records?", "title": "A Biomedical Information Extraction Primer for NLP Researchers"}, {"answers": [""], "context": "This paper introduces jiant, an open source toolkit that allows researchers to quickly experiment on a wide array of NLU tasks, using state-of-the-art NLP models, and conduct experiments on probing, transfer learning, and multitask training. jiant supports many state-of-the-art Transformer-based models implemented by Huggingface's Transformers package, as well as non-Transformer models such as BiLSTMs.", "id": 824, "question": "Does jiant involve datasets for the 50 NLU tasks?", "title": "jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models"}, {"answers": ["", ""], "context": "Transfer learning is an area of research that uses knowledge from pretrained models to transfer to new tasks. 
In recent years, Transformer-based models like BERT BIBREF17 and T5 BIBREF18 have yielded state-of-the-art results on the lion's share of benchmark tasks for language understanding through pretraining and transfer, often paired with some form of multitask learning.", "id": 825, "question": "Is jiant compatible with models in any programming language?", "title": "jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models"}, {"answers": [""], "context": "Neural networks have been successfully used to describe images with text using sequence-to-sequence models BIBREF0. However, the results are simple and dry captions which are one or two phrases long. Humans looking at a painting see more than just objects. Paintings stimulate sentiments, metaphors and stories as well. Therefore, our goal is to have a neural network describe the painting artistically in a style of choice. As a proof of concept, we present a model which generates Shakespearean prose for a given painting as shown in Figure FIGREF1. Accomplishing this task is difficult with traditional sequence to sequence models since there does not exist a large collection of Shakespearean prose which describes paintings: Shakespeare's works describes a single painting shown in Figure FIGREF3. Fortunately we have a dataset of modern English poems which describe images BIBREF1 and line-by-line modern paraphrases of Shakespeare's plays BIBREF2. Our solution is therefore to combine two separately trained models to synthesize Shakespearean prose for a given painting.", "id": 826, "question": "What models are used for painting embedding and what for language style transfer?", "title": "Prose for a Painting"}, {"answers": [""], "context": "A general end-to-end approach to sequence learning BIBREF3 makes minimal assumptions on the sequence structure. This model is widely used in tasks such as machine translation, text summarization, conversational modeling, and image captioning. A generative model using a deep recurrent architecture BIBREF0 has also beeen used for generating phrases describing an image. The task of synthesizing multiple lines of poetry for a given image BIBREF1 is accomplished by extracting poetic clues from images. Given the context image, the network associates image attributes with poetic descriptions using a convolutional neural net. The poem is generated using a recurrent neural net which is trained using multi-adversarial training via policy gradient.", "id": 827, "question": "What applicability of their approach is demonstrated by the authors?", "title": "Prose for a Painting"}, {"answers": ["", ""], "context": "We use a total three datasets: two datasets for generating an English poem from an image, and Shakespeare plays and their English translations for text style transfer.", "id": 828, "question": "What limitations do the authors demnostrate of their model?", "title": "Prose for a Painting"}, {"answers": ["", ""], "context": "For generating a poem from images we use an existing actor-critic architecture BIBREF1. This involves 3 parallel CNNs: an object CNN, sentiment CNN, and scene CNN, for feature extraction. These features are combined with a skip-thought model which provides poetic clues, which are then fed into a sequence-to-sequence model trained by policy gradient with 2 discriminator networks for rewards. This as a whole forms a pipeline that takes in an image and outputs a poem as shown on the top left of Figure FIGREF4. A CNN-RNN generative model acts as an agent. 
The parameters of this agent define a policy whose execution determines which word is selected as an action. When the agent selects all words in a poem, it receives a reward. Two discriminative networks, shown on the top right of Figure FIGREF4, are defined to serve as rewards concerning whether the generated poem properly describes the input image and whether the generated poem is poetic. The goal of the poem generation model is to generate a sequence of words as a poem for an image to maximize the expected return.", "id": 829, "question": "How does final model rate on Likert scale?", "title": "Prose for a Painting"}, {"answers": [""], "context": "For Shakespearizing modern English texts, we experimented with various types of sequence to sequence models. Since the size of the parallel translation data available is small, we leverage a dictionary providing a mapping between Shakespearean words and modern English words to retrofit pre-trained word embeddings. Incorporating this extra information improves the translation task. The large number of shared word types between the source and target sentences indicates that sharing the representation between them is beneficial.", "id": 830, "question": "How big is English poem description of the painting dataset?", "title": "Prose for a Painting"}, {"answers": ["", ""], "context": "We use a sequence-to-sequence model which consists of a single layer unidrectional LSTM encoder and a single layer LSTM decoder and pre-trained retrofitted word embeddings shared between source and target sentences. We experimented with two different types of attention: global attention BIBREF9, in which the model makes use of the output from the encoder and decoder for the current time step only, and Bahdanau attention BIBREF10, where computing attention requires the output of the decoder from the prior time step. We found that global attention performs better in practice for our task of text style transfer.", "id": 831, "question": "What is best BLEU score of language style transfer authors got?", "title": "Prose for a Painting"}, {"answers": ["", "On Coin Collector, proposed model finds shorter path in fewer number of interactions with enironment.\nOn Cooking World, proposed model uses smallest amount of steps and on average has bigger score and number of wins by significant margin."], "context": "Text-based games became popular in the mid 80s with the game series Zork BIBREF1 resulting in many different text-based games being produced and published BIBREF2. These games use a plain text description of the environment and the player has to interact with them by writing natural-language commands. Recently, there has been a growing interest in developing agents that can automatically solve text-based games BIBREF3 by interacting with them. These settings challenge the ability of an artificial agent to understand natural language, common sense knowledge, and to develop the ability to interact with environments using language BIBREF4, BIBREF5.", "id": 832, "question": "How better does new approach behave than existing solutions?", "title": "Exploration Based Language Learning for Text-Based Games"}, {"answers": [""], "context": "Among reinforcement learning based efforts to solve text-based games two approaches are prominent. The first approach assumes an action as a sentence of a fixed number of words, and associates a separate $Q$-function BIBREF15, BIBREF16 with each word position in this sentence. 
This method was demonstrated with two-word sentences consisting of a verb-object pair (e.g. take apple) BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF17. In the second approach, one $Q$-function that scores all possible actions (i.e. sentences) is learned and used to play the game BIBREF10, BIBREF11, BIBREF12. The first approach is quite limiting since a fixed number of words must be selected in advance and no temporal dependency is enforced between words (e.g. lack of language modelling). In the second approach, on the other hand, the number of possible actions can become exponentially large if the admissible actions (a predetermined low cardinality set of actions that the agent can take) are not provided to the agent. A possible solution to this issue has been proposed by BIBREF18, where a hierarchical pointer-generator is used to first produce the set of admissible actions given the observation, and subsequently one element of this set is chosen as the action for that observation. However, in our experiments we show that even in settings where the true set of admissible actions is provided by the environment, a $Q$-scorer BIBREF10 does not generalize well in our setting (Section 5.2 Zero-Shot) and we would expect performance to degrade even further if the admissible actions were generated by a separate model. Less common are models that either learn to reduce a large set of actions into a smaller set of admissible actions by eliminating actions BIBREF12 or by compressing them in a latent space BIBREF11.", "id": 833, "question": "How is trajectory with how rewards extracted?", "title": "Exploration Based Language Learning for Text-Based Games"}, {"answers": ["", ""], "context": "In most text-based games rewards are sparse, since the size of the action space makes the probability of observing a reward extremely low when taking only random actions. Sparse reward environments are particularly challenging for reinforcement learning as they require longer term planning. Many exploration based solutions have been proposed to address the challenges associated with reward sparsity. Among these exploration approaches are novelty search BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, intrinsic motivation BIBREF24, BIBREF25, BIBREF26, and curiosity based rewards BIBREF27, BIBREF28, BIBREF29. For text based games exploration methods have been studied by BIBREF8, where the authors showed the effectiveness of the episodic discovery bonus BIBREF30 in environments with sparse rewards. This exploration method can only be applied in games with very small action and state spaces, since their counting methods rely on the state in its explicit raw form.", "id": 834, "question": "On what Text-Based Games are experiments performed?", "title": "Exploration Based Language Learning for Text-Based Games"}, {"answers": [""], "context": "Go-Explore BIBREF0 differs from the exploration-based algorithms discussed above in that it explicitly keeps track of under-explored areas of the state space and in that it utilizes the determinism of the simulator in order to return to those states, allowing it to explore sparse-reward environments in a sample efficient way (see BIBREF0 as well as section SECREF27). For the experiments in this paper we mainly focus on the final performance of our policy, not how that policy is trained, thus making Go-Explore a suitable algorithm for our experiments. Go-Explore is composed of two phases. 
In phase 1 (also referred to as the \u201cexploration\u201d phase) the algorithm explores the state space through keeping track of previously visited states by maintaining an archive. During this phase, instead of resuming the exploration from scratch, the algorithm starts exploring from promising states in the archive to find high performing trajectories. In phase 2 (also referred to as the \u201crobustification\u201d phase, while in our variant we will call it \u201cgeneralization\u201d) the algorithm trains a policy using the trajectories found in phase 1. Following this framework, which is also shown in Figure FIGREF56 (Appendix A.2), we define the Go-Explore phases for text-based games.", "id": 835, "question": "How do the authors show that their learned policy generalize better than existing solutions to unseen games?", "title": "Exploration Based Language Learning for Text-Based Games"}, {"answers": ["Low data: SST-5, TREC, IMDB around 1-2 accuracy points better than baseline\nImbalanced labels: the improvement over the base model increases as the data gets more imbalanced, ranging from around 6 accuracy points on 100:1000 to over 20 accuracy points on 20:1000"], "context": "The performance of machines often crucially depend on the amount and quality of the data used for training. It has become increasingly ubiquitous to manipulate data to improve learning, especially in low data regime or in presence of low-quality datasets (e.g., imbalanced labels). For example, data augmentation applies label-preserving transformations on original data points to expand the data size; data weighting assigns an importance weight to each instance to adapt its effect on learning; and data synthesis generates entire artificial examples. Different types of manipulation can be suitable for different application settings.", "id": 836, "question": "How much is classification performance improved in experiments for low data regime and class-imbalance problems?", "title": "Learning Data Manipulation for Augmentation and Weighting"}, {"answers": ["", ""], "context": "Rich types of data manipulation have been increasingly used in modern machine learning pipelines. Previous work each has typically focused on a particular manipulation type. Data augmentation that perturbs examples without changing the labels is widely used especially in vision BIBREF10, BIBREF11 and speech BIBREF12, BIBREF13 domains. Common heuristic-based methods on images include cropping, mirroring, rotation BIBREF11, and so forth. Recent work has developed automated augmentation approaches BIBREF3, BIBREF2, BIBREF14, BIBREF15, BIBREF16. BIBREF17 additionally use large-scale unlabeled data. BIBREF3, BIBREF2 learn to induce the composition of data transformation operators. Instead of treating data augmentation as a policy in reinforcement learning BIBREF3, we formulate manipulation as a reward function and use efficient stochastic gradient descent to learn the manipulation parameters. Text data augmentation has also achieved impressive success, such as contextual augmentation BIBREF18, BIBREF19, back-translation BIBREF20, and manual approaches BIBREF21, BIBREF22. In addition to perturbing the input text as in classification tasks, text generation problems expose opportunities to adding noise also in the output text, such as BIBREF23, BIBREF24. 
Recent work BIBREF6 shows output noising in sequence generation can be treated as an intermediate approach in between supervised learning and reinforcement learning, and develops a new sequence learning algorithm that interpolates between the spectrum of existing algorithms. We instantiate our approach for text contextual augmentation as in BIBREF18, BIBREF19, but enhance the previous work by additionally fine-tuning the augmentation network jointly with the target model.", "id": 837, "question": "What off-the-shelf reward learning algorithm from RL for joint data manipulation learning and model training is adapted?", "title": "Learning Data Manipulation for Augmentation and Weighting"}, {"answers": ["Answer with content missing: (Subscript 1: \"We did not participate in subtask 5 (E-c)\") Authors participated in EI-Reg, EI-Oc, V-Reg and V-Oc subtasks."], "context": "Understanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a company may also affect the sentiment of a customer. However, the intensity of an emotion is crucial in determining the urgency and importance of that sentiment. If someone is only slightly happy about a product, is the customer willing to buy it again? Conversely, if someone is very angry about customer service, his or her complaint might be given priority over somewhat milder complaints.", "id": 838, "question": "What subtasks did they participate in?", "title": "UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish"}, {"answers": [""], "context": "For each task, the training data that was made available by the organizers is used, which is a selection of tweets, each with a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token.", "id": 839, "question": "What were the scores of their system?", "title": "UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish"}, {"answers": ["", ""], "context": "To be able to train word embeddings, Spanish tweets were scraped between November 8, 2017 and January 12, 2018. We chose to create our own embeddings instead of using pre-trained embeddings, because this way the embeddings would resemble the provided data set: both are based on Twitter data. Added to this set was the Affect in Tweets Distant Supervision Corpus (DISC) made available by the organizers BIBREF0 and a set of 4.1 million tweets from 2015, obtained from BIBREF2 . After removing duplicate tweets and tweets with fewer than ten tokens, this resulted in a set of 58.7 million tweets, containing 1.1 billion tokens. The tweets were preprocessed using the method described in Section SECREF6 . The word embeddings were created using word2vec in the gensim library BIBREF3 , using CBOW, a window size of 40 and a minimum count of 5. 
The feature vectors for each tweet were then created by using the AffectiveTweets WEKA package BIBREF4 .", "id": 840, "question": "How was the training data translated?", "title": "UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish"}, {"answers": [" Selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment provided by organizers and tweets translated form English to Spanish.", ""], "context": "Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 .", "id": 841, "question": "What dataset did they use?", "title": "UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish"}, {"answers": ["", ""], "context": "The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of \u201cSpanish\u201d data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets.", "id": 842, "question": "What other languages did they translate the data from?", "title": "UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish"}, {"answers": [""], "context": "Three types of models were used in our system, a feed-forward neural network, an LSTM network and an SVM regressor. The neural nets were inspired by the work of Prayas BIBREF7 in the previous shared task. Different regression algorithms (e.g. AdaBoost, XGBoost) were also tried due to the success of SeerNet BIBREF8 , but our study was not able to reproduce their results for Spanish.", "id": 843, "question": "What semi-supervised learning is applied?", "title": "UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish"}, {"answers": [""], "context": "The lack of annotated training and evaluation data for many tasks and domains hinders the development of computational models for the majority of the world's languages BIBREF0, BIBREF1, BIBREF2. The necessity to guide and advance multilingual and cross-lingual NLP through annotation efforts that follow cross-lingually consistent guidelines has been recently recognized by collaborative initiatives such as the Universal Dependency (UD) project BIBREF3. The latest version of UD (as of March 2020) covers more than 70 languages. Crucially, this resource continues to steadily grow and evolve through the contributions of annotators from across the world, extending the UD's reach to a wide array of typologically diverse languages. 
Besides steering research in multilingual parsing BIBREF4, BIBREF5, BIBREF6 and cross-lingual parser transfer BIBREF7, BIBREF8, BIBREF9, the consistent annotations and guidelines have also enabled a range of insightful comparative studies focused on the languages' syntactic (dis)similarities BIBREF10, BIBREF11, BIBREF12.", "id": 844, "question": "How were the datasets annotated?", "title": "Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity"}, {"answers": ["Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese", "Chinese Mandarin, Welsh, English, Estonian, Finnish, French, Hebrew, Polish, Russian, Spanish, Kiswahili, Yue Chinese"], "context": "The focus of the Multi-SimLex initiative is on the lexical relation of pure semantic similarity. For any pair of words, this relation measures whether their referents share the same features. For instance, graffiti and frescos are similar to the extent that they are both forms of painting and appear on walls. This relation can be contrasted with the cognitive association between two words, which often depends on how much their referents interact in the real world, or are found in the same situations. For instance, a painter is easily associated with frescos, although they lack any physical commonalities. Association is also known in the literature under other names: relatedness BIBREF13, topical similarity BIBREF35, and domain similarity BIBREF36.", "id": 845, "question": "What are the 12 languages covered?", "title": "Multi-SimLex: A Large-Scale Evaluation of Multilingual and Cross-Lingual Lexical Semantic Similarity"}, {"answers": ["", "", ""], "context": "Multi-document summarization (MDS), the transformation of a set of documents into a short text containing their most important aspects, is a long-studied problem in NLP. Generated summaries have been shown to support humans dealing with large document collections in information seeking tasks BIBREF0 , BIBREF1 , BIBREF2 . However, when exploring a set of documents manually, humans rarely write a fully-formulated summary for themselves. Instead, user studies BIBREF3 , BIBREF4 show that they note down important keywords and phrases, try to identify relationships between them and organize them accordingly. 
Therefore, we believe that the study of summarization with similarly structured outputs is an important extension of the traditional task.", "id": 846, "question": "Does the corpus contain only English documents?", "title": "Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps"}, {"answers": ["Answer with content missing: (Evaluation Metrics section) Precision, Recall, F1-scores, Strict match, METEOR, ROUGE-2"], "context": "Concept-map-based MDS is defined as follows: Given a set of related documents, create a concept map that represents its most important content, satisfies a specified size limit and is connected.", "id": 847, "question": "What type of evaluation is proposed for this task?", "title": "Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps"}, {"answers": ["Answer with content missing: (Baseline Method section) We implemented a simple approach inspired by previous work on concept map generation and keyphrase extraction."], "context": "Some attempts have been made to automatically construct concept maps from text, working with either single documents BIBREF14 , BIBREF9 , BIBREF15 , BIBREF16 or document clusters BIBREF17 , BIBREF18 , BIBREF19 . These approaches extract concept and relation labels from syntactic structures and connect them to build a concept map. However, common task definitions and comparable evaluations are missing. In addition, only a few of them, namely Villalon.2012 and Valerio.2006, define summarization as their goal and try to compress the input to a substantially smaller size. Our newly proposed task and the created large-cluster dataset fill these gaps as they emphasize the summarization aspect of the task.", "id": 848, "question": "What baseline system is proposed?", "title": "Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps"}, {"answers": ["", "They break down the task of importance annotation to the level of single propositions and obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary."], "context": "Lloret.2013 describe several experiments to crowdsource reference summaries. Workers are asked to read 10 documents and then select 10 summary sentences from them for a reward of $0.05. They discovered several challenges, including poor work quality and the subjectiveness of the annotation task, indicating that crowdsourcing is not useful for this purpose.", "id": 849, "question": "How were crowd workers instructed to identify important elements in large document collections?", "title": "Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps"}, {"answers": [""], "context": "We break down the task of importance annotation to the level of single propositions. The goal of our crowdsourcing scheme is to obtain a score for each proposition indicating its importance in a document cluster, such that a ranking according to the score would reveal what is most important and should be included in a summary. In contrast to other work, we do not show the documents to the workers at all, but provide only a description of the document cluster's topic along with the propositions. 
This ensures that tasks are small, simple and can be done quickly (see Figure FIGREF4 ).", "id": 850, "question": "Which collections of web documents are included in the corpus?", "title": "Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps"}, {"answers": [""], "context": "To verify the proposed approach, we conducted a pilot study on Amazon Mechanical Turk using data from TAC2008 BIBREF36 . We collected importance estimates for 474 propositions extracted from the first three clusters using both task designs. Each Likert-scale task was assigned to 5 different workers and awarded $0.06. For comparison tasks, we also collected 5 labels each, paid $0.05 and sampled around 7% of all possible pairs. We submitted them in batches of 100 pairs and selected pairs for subsequent batches based on the confidence of the TrueSkill model.", "id": 851, "question": "How do the authors define a concept map?", "title": "Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps"}, {"answers": ["", ""], "context": "Language modeling is a probabilistic description of language phenomena. It provides essential context to distinguish words which sound similar and therefore has one of the most useful applications in Natural Language Processing (NLP), especially in downstream tasks like Automatic Speech Recognition (ASR). Recurrent Neural Networks (RNN), especially Long Short Term Memory (LSTM) networks BIBREF0, have been the typical solution to language modeling and do achieve strong results. In spite of these results, their fundamental sequential computation constraint has restricted their use in the modeling of long-term dependencies in sequential data. To address these issues, the Transformer architecture was introduced. The Transformer relies completely on an attention mechanism to form global dependencies between input and output. It also offers more parallelization and has achieved SOTA results in language modeling, outperforming LSTM models BIBREF1.", "id": 852, "question": "Is the LSTM baseline a sub-word model?", "title": "Finnish Language Modeling with Deep Transformer Models"}, {"answers": ["Answer with content missing: (formulas in selection): Pseudo-perplexity is perplexity where conditional joint probability is approximated."], "context": "The goal of a language model is to assign meaningful probabilities to a sequence of words. Given a set of tokens $\\mathbf {X}=(x_1,\\dots ,x_T)$, where $T$ is the length of a sequence, our task is to estimate the joint conditional probability $P(\\mathbf {X})$ which is", "id": 853, "question": "How is pseudo-perplexity defined?", "title": "Finnish Language Modeling with Deep Transformer Models"}, {"answers": ["LSTM to encode the question, VGG16 to extract visual features. The outputs of LSTM and VGG16 are multiplied element-wise and sent to a softmax layer.", ""], "context": "What would be possible if a person had an oracle that could immediately provide the answer to any question about the visual world? Sight-impaired users could quickly and reliably figure out the denomination of their currency and so whether they spent the appropriate amount for a product BIBREF0 . Hikers could immediately learn about their bug bites and whether to seek out emergency medical care. Pilots could learn how many birds are in their path to decide whether to change course and so avoid costly, life-threatening collisions. 
These examples illustrate several of the interests from a visual question answering (VQA) system, including tackling problems that involve classification, detection, and counting. More generally, the goal for VQA is to have a single system that can accurately answer any natural language question about an image or video BIBREF1 , BIBREF2 , BIBREF3 .", "id": 854, "question": "What is the model architecture used?", "title": "Visual Question: Predicting If a Crowd Will Agree on the Answer"}, {"answers": ["The number of redundant answers to collect from the crowd is predicted to efficiently capture the diversity of all answers from all visual questions."], "context": "The remainder of the paper is organized into four sections. We first describe a study where we investigate: 1) How much answer diversity arises for visual questions? and 2) Why do people disagree (Section SECREF4 )? Next, we explore the following two questions: 1) Given a novel visual question, can a machine correctly predict whether multiple independent members of a crowd would supply the same answer? and 2) If so, what insights does our machine-learned system reveal regarding what humans are most likely to agree about (Section SECREF5 )? In the following section, we propose a novel resource allocation system for efficiently capturing the diversity of all answers for a set of visual questions (Section SECREF6 ). Finally, we end with concluding remarks (Section SECREF7 ).", "id": 855, "question": "How is the data used for training annotated?", "title": "Visual Question: Predicting If a Crowd Will Agree on the Answer"}, {"answers": ["Answer with content missing: (Evaluation section) Given that in CLIR the primary goal is to get a better ranked list of documents against a translated query, we only report Mean Average Precision (MAP)."], "context": "CLIR systems retrieve documents written in a language that is different from search query language BIBREF0 . The primary objective of CLIR is to translate or project a query into the language of the document repository BIBREF1 , which we refer to as Retrieval Corpus (RC). To this end, common CLIR approaches translate search queries using a Machine Translation (MT) model and then use a monolingual IR system to retrieve from RC. In this process, a translation model is treated as a black box BIBREF2 , and it is usually trained on a sentence level parallel corpus, which we refer to as Translation Corpus (TC).", "id": 856, "question": "what quantitative analysis is done?", "title": "A Multi-Task Architecture on Relevance-based Neural Query Translation"}, {"answers": ["", ""], "context": "We train NMT with RAT to achieve better query translations. We improve a recently proposed NMT baseline, Transformer, that achieves state-of-the-art results for sentence pairs in some languages BIBREF8 . We discuss Transformer, RAT, and our multi-task learning architecture that achieves balanced translation.", "id": 857, "question": "what are the baselines?", "title": "A Multi-Task Architecture on Relevance-based Neural Query Translation"}, {"answers": ["", ""], "context": "With the availability of rich data on users' locations, profiles and search history, personalization has become the leading trend in large-scale information retrieval. However, efficiency through personalization is not yet the most suitable model when tackling domain-specific searches. 
This is due to several factors, such as the lexical and semantic challenges of domain-specific data that often include advanced argumentation and complex contextual information, the higher sparseness of relevant information sources, and the more pronounced lack of similarities between users' searches.", "id": 858, "question": "Do they report results only on English data?", "title": "A Question-Entailment Approach to Question Answering"}, {"answers": [""], "context": "In this section we define the RQE task and describe related work at the intersection of question answering, question similarity and textual inference.", "id": 859, "question": "What machine learning and deep learning methods are used for RQE?", "title": "A Question-Entailment Approach to Question Answering"}, {"answers": ["Average success rate is higher by 2.6 percent points."], "context": "Spoken Dialogue Systems (SDS) allow human-computer interaction using natural speech. Task-oriented dialogue systems, the focus of this work, help users achieve goals such as finding restaurants or booking flights BIBREF0 .", "id": 860, "question": "by how much did nus outperform abus?", "title": "Neural User Simulation for Corpus-based Policy Optimisation for Spoken Dialogue Systems"}, {"answers": ["", ""], "context": "A Task-Oriented SDS is typically designed according to a structured ontology, which defines what the system can talk about. In a system recommending restaurants the ontology defines those attributes of a restaurant that the user can choose, called informable slots (e.g. different food types, areas and price ranges), the attributes that the user can request, called requestable slots (e.g. phone number or address) and the restaurants that it has data about. An attribute is referred to as a slot and has a corresponding value. Together these are referred to as a slot-value pair (e.g. area=north).", "id": 861, "question": "what corpus is used to learn behavior?", "title": "Neural User Simulation for Corpus-based Policy Optimisation for Spoken Dialogue Systems"}, {"answers": ["", "The Reuters-8 dataset (with stop words removed)"], "context": "Text classification has become an indispensable task due to the rapid growth in the number of texts in digital form available online. It aims to classify different texts, also called documents, into a fixed number of predefined categories, helping to organize data, and making easier for users to find the desired information. Over the past three decades, many methods based on machine learning and statistical models have been applied to perform this task, such as latent semantic analysis (LSA), support vector machines (SVM), and multinomial naive Bayes (MNB).", "id": 862, "question": "Which dataset has been used in this work?", "title": "Text Classification based on Word Subspace with Term-Frequency"}, {"answers": ["Word vectors, usually in the context of others within the same class"], "context": "In this section, we outline relevant work towards text classification. 
We start by describing how text data is conventionally represented using the bag-of-words model and then proceed to describe the conventional methods utilized in text classification.", "id": 863, "question": "What can word subspace represent?", "title": "Text Classification based on Word Subspace with Term-Frequency"}, {"answers": ["", "it has a 0.024 improvement in accuracy compared to ELMO Only and a 0.006 improvement in F1 score compared to ELMO Only"], "context": "Medical text mining is an exciting area and is becoming attractive to natural language processing (NLP) researchers. Clinical notes are an example of text in the medical area that recent work has focused on BIBREF0, BIBREF1, BIBREF2. This work studies abbreviation disambiguation on clinical notes BIBREF3, BIBREF4, specifically those used commonly by physicians and nurses. Such clinical abbreviations can have a large number of meanings, depending on the specialty BIBREF5, BIBREF6. For example, the term MR can mean magnetic resonance, mitral regurgitation, mental retardation, medical record and the general English Mister (Mr.). Table TABREF1 illustrates such an example. Abbreviation disambiguation is an important task in medical text understanding BIBREF7. Successful recognition of the abbreviations in the notes can contribute to downstream tasks such as classification, named entity recognition, and relation extraction BIBREF7.", "id": 864, "question": "How big are the improvements on small-scale unbalanced datasets when sentence representation is enhanced with topic information?", "title": "A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation"}, {"answers": ["", ""], "context": "", "id": 865, "question": "To what baseline models is the proposed model compared?", "title": "A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation"}, {"answers": ["30 terms, each term-sense pair has around 15 samples for testing"], "context": "We conducted a comprehensive comparison with the baseline models, and some of them were never investigated for the abbreviation disambiguation task. We applied traditional features by simply taking the TF-IDF features as the inputs into the classic classifiers. Deep features are also considered: a Doc2vec model BIBREF19 was pre-trained using Gensim and these word embeddings were applied to initialize deep models and fine-tuned.", "id": 866, "question": "How big is the dataset for testing?", "title": "A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation"}, {"answers": [""], "context": "", "id": 867, "question": "What existing dataset is re-examined and corrected for training?", "title": "A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation"}, {"answers": ["Spearman correlation values of GM_KL model evaluated on the benchmark word similarity datasets.\nEvaluation results of GM_KL model on the entailment datasets such as entailment pairs dataset created from WordNet, crowdsourced dataset of 79 semantic relations labelled as entailed or not and annotated distributionally similar nouns dataset.", ""], "context": "Language modelling in its inception had one-hot vector encoding of words. However, it captures only alphabetic ordering but not word semantic similarity. Vector space models help to learn word representations in a lower dimensional space and also capture semantic similarity. 
Learning word embedding aids in natural language processing tasks such as question answering and reasoning BIBREF0, stance detection BIBREF1, claim verification BIBREF2.", "id": 868, "question": "What are the qualitative experiments performed on benchmark datasets?", "title": "Learning Multi-Sense Word Distributions using Approximate Kullback-Leibler Divergence"}, {"answers": [""], "context": "Probabilistic representation of words helps one model uncertainty in word representation, and polysemy. Given a corpus $V$, containing a list of words each represented as $w$, the probability density for a word $w$ can be represented as a mixture of Gaussians with $C$ components BIBREF10.", "id": 869, "question": "How does this approach compare to other WSD approaches employing word embeddings?", "title": "Learning Multi-Sense Word Distributions using Approximate Kullback-Leibler Divergence"}, {"answers": ["", ""], "context": "In recent years, gender has become a hot topic within the political, societal and research spheres. Numerous studies have been conducted in order to evaluate the presence of women in media, often revealing their under-representation, such as the Global Media Monitoring Project BIBREF0. In the French context, the CSA BIBREF1 produces a report on gender representation in media on a yearly basis. The 2017 report shows that women represent 40% of French media speakers, with a significant drop during high-audience hours (6:00-8:00pm) reaching a value of only 29%. Another large scale study confirmed this trend with an automatic analysis of gender in French audiovisuals streams, highlighting a huge variation across type of shows BIBREF2.", "id": 870, "question": "What tasks did they use to evaluate performance for male and female speakers?", "title": "Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance"}, {"answers": ["", ""], "context": "The ever growing use of machine learning in science has been enabled by several progresses among which the exponential growth of data available. The quality of a system now depends mostly on the quality and quantity of the data it has been trained on. If it does not discard the importance of an appropriate architecture, it reaffirms the fact that rich and large corpora are a valuable resource. Corpora are research contributions which do not only allow to save and observe certain phenomena or validate a hypothesis or model, but are also a mandatory part of the technology development. This trend is notably observable within the NLP field, where industrial technologies, such as Apple, Amazon or Google vocal assistants now reach high performance level partly due to the amount of data possessed by these companies BIBREF9.", "id": 871, "question": "What is the goal of investigating NLP gender bias specifically in the news broadcast domain and Anchor role?", "title": "Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance"}, {"answers": ["", ""], "context": "The gender issue has returned to the forefront of the media scene in recent years and with the emergence of AI technologies in our daily lives, gender bias has become a scientific topic that researchers are just beginning to address. Several studies revealed the existence of gender bias in AI technologies such as face recognition (GenderShades BIBREF17), NLP (word embeddings BIBREF5 and semantics BIBREF6) and machine translation (BIBREF18, BIBREF7). 
The impact of the training data used within these deep-learning algorithms is therefore questioned.", "id": 872, "question": "Which corpora does this paper analyse?", "title": "Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance"}, {"answers": [""], "context": "This section is organized as follows: we first present the data we are working on. In a second time we explain how we proceed to describe the gender representation in our corpus and introduce the notion of speaker's role. The third subsection introduces the ASR system and metrics used to evaluate gender bias in performance.", "id": 873, "question": "How many categories do authors define for speaker role?", "title": "Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance"}, {"answers": [""], "context": "Our data consists of two sets used to train and evaluate our automatic speech recognition system. Four major evaluation campaigns have enabled the creation of wide corpora of French broadcast speech: ESTER1 BIBREF13, ESTER2 BIBREF14, ETAPE BIBREF15 and REPERE BIBREF16. These four collections contain radio and/or TV broadcasts aired between 1998 and 2013 which are used by most academic researchers in ASR. Show duration varies between 10min and an hour. As years went by and speech processing research was progressing, the difficulty of the tasks augmented and the content of these evaluation corpora changed. ESTER1 and ESTER2 mainly contain prepared speech such as broadcast news, whereas ETAPE and REPERE consists also of debates and entertainment shows, spontaneous speech introducing more difficulty in its recognition.", "id": 874, "question": "How big is imbalance in analyzed corpora?", "title": "Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance"}, {"answers": [""], "context": "We first describe the gender representation in training data. Gender representation is measured in terms of number of speakers, number of utterances (or speech turns), and turn lengths (descriptive statistics are given in Section SECREF16). Each speech turn was mapped to its speaker in order to associate it with a gender.", "id": 875, "question": "What are four major corpora of French broadcast?", "title": "Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance"}, {"answers": [""], "context": "- !`Socorro, me ha picado una v\u00edbora!", "id": 876, "question": "What did the best systems use for their model?", "title": "Applying a Pre-trained Language Model to Spanish Twitter Humor Prediction"}, {"answers": ["", "F1 score result of 0.8099"], "context": "The Humor Analysis based on Human Annotation (HAHA) 2019 BIBREF1 competition asked for analysis of two tasks in the Spanish language based on a corpus of publicly collected data described in Castro et al. 
BIBREF2 :", "id": 877, "question": "What were their results on the classification and regression tasks", "title": "Applying a Pre-trained Language Model to Spanish Twitter Humor Prediction"}, {"answers": ["", ""], "context": "A Winograd schema (Levesque, Davis, and Morgenstern 2012) is a pair of sentences, or of short texts, called the elements of the schema, that satisfy the following constraints:", "id": 878, "question": "Do the authors conduct experiments on the tasks mentioned?", "title": "Winograd Schemas and Machine Translation"}, {"answers": [""], "context": "In many cases, the identification of the referent of the prounoun in a Winograd schema is critical for finding the correct translation of that pronoun in a different language. Therefore, Winograd schemas can be used as a very difficult challenge for the depth of understanding achieved by a machine translation program.", "id": 879, "question": "Did they collect their own datasets?", "title": "Winograd Schemas and Machine Translation"}, {"answers": [""], "context": "No one familiar with the state of the art in machine translation technology or the state of the art of artificial intelligence generally will be surprised to learn that currently machine translation program are unable to solve these Winograd schema challenge problems.", "id": 880, "question": "What data do they look at?", "title": "Winograd Schemas and Machine Translation"}, {"answers": ["", ""], "context": "The masculine and feminine plural pronouns are distinguished in the Romance languages (French, Spanish, Italian, Portuguese etc.) and in Semitic languages (Arabic, Hebrew, etc.) I have consulted with native speakers and experts in these languages about the degree to which the gender distinction is observed in practice. The experts say that in French, Spanish, Italian, and Portuguese, the distinction is very strictly observed; the use of a masculine pronoun for a feminine antecedent is jarringly wrong to a native or fluent speaker. \u201cLes filles ont chant\u00e9 une chanson et ils ont dans\u00e9\u201d sounds as wrong to a French speaker as \u201cThe girl sang a song and he danced\u201d sounds to an English speaker; in both cases, the hearer will interpret the pronoun as referrinig to some other persons or person, who is male. In Hebrew and Arabic, this is much less true; in speech, and even, increasingly, in writing, the masculine pronoun is often used for a feminine antecedent.", "id": 881, "question": "What language do they explore?", "title": "Winograd Schemas and Machine Translation"}, {"answers": ["", ""], "context": "Many research attempts have proposed novel features that improve the performance of learning algorithms in particular tasks. Such features are often motivated by domain knowledge or manual labor. Although useful and often state-of-the-art, adapting such solutions on NLP systems across tasks can be tricky and time-consuming BIBREF0 . Therefore, simple yet general and powerful methods that perform well across several datasets are valuable BIBREF1 .", "id": 882, "question": "Do they report results only on English datasets?", "title": "On the effectiveness of feature set augmentation using clusters of word embeddings"}, {"answers": ["number of clusters, seed value in clustering, selection of word vectors, window size and dimension of embedding", ""], "context": "Word embeddings associate words with dense, low-dimensional vectors. Recently, several models have been proposed in order to obtain these embeddings. 
Among others, the skipgram (skipgram) model with negative sampling BIBREF7 , the continuous bag-of-words (cbow) model BIBREF7 and Glove (glove) BIBREF8 have been shown to be effective. Training those models requires no annotated data and can be done using big amounts of text. Such a model can be seen as a function INLINEFORM0 that projects a word INLINEFORM1 in a INLINEFORM2 -dimensional space: INLINEFORM3 , where INLINEFORM4 is predefined. Here, we focus on applications using data from Twitter, which pose several difficulties due to being particularly short, using creative vocabulary, abbreviations and slang.", "id": 883, "question": "Which hyperparameters were varied in the experiments on the four tasks?", "title": "On the effectiveness of feature set augmentation using clusters of word embeddings"}, {"answers": [""], "context": "We evaluate the proposed approach for augmenting the feature space in four tasks: (i) NER segmentation, (ii) NER classification, (iii) fine-grained sentiment classification and (iv) fine-grained sentiment quantification. The next sections present the evaluation settings we used. For each of the tasks, we use the designated training sets to train the learning algorithms, and we report the scores of the evaluation measures used in the respective test parts.", "id": 884, "question": "Which other hyperparameters, other than number of clusters are typically evaluated in this type of research?", "title": "On the effectiveness of feature set augmentation using clusters of word embeddings"}, {"answers": ["Word clusters are extracted using k-means on word embeddings"], "context": "NER concerns the classification of textual segments in a predefined set of categories, like persons, organization and locations. We use the data of the last competition in NER for Twitter which released as a part of the 2nd Workshop on Noisy User-generated Text BIBREF10 . More specifically, the organizers provided annotated tweets with 10 named-entity types (person, movie, sportsteam, product etc.) and the task comprised two sub-tasks: 1) the detection of entity bounds and 2) the classification of an entity into one of the 10 types. The evaluation measure for both sub-tasks is the F INLINEFORM0 measure.", "id": 885, "question": "How were the cluster extracted? ", "title": "On the effectiveness of feature set augmentation using clusters of word embeddings"}, {"answers": ["", "Unlabeled sentence-level F1, perplexity, grammatically judgment performance"], "context": " Grammar induction is the task of inducing hierarchical syntactic structure from data. Statistical approaches to grammar induction require specifying a probabilistic grammar (e.g. formalism, number and shape of rules), and fitting its parameters through optimization. Early work found that it was difficult to induce probabilistic context-free grammars (PCFG) from natural language data through direct methods, such as optimizing the log likelihood with the EM algorithm BIBREF0 , BIBREF1 . While the reasons for the failure are manifold and not completely understood, two major potential causes are the ill-behaved optimization landscape and the overly strict independence assumptions of PCFGs. 
More successful approaches to grammar induction have thus resorted to carefully-crafted auxiliary objectives BIBREF2 , priors or non-parametric models BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , and manually-engineered features BIBREF7 , BIBREF8 to encourage the desired structures to emerge.", "id": 886, "question": "what were the evaluation metrics?", "title": "Compound Probabilistic Context-Free Grammars for Grammar Induction"}, {"answers": ["", ""], "context": " We consider context-free grammars (CFG) consisting of a 5-tuple INLINEFORM0 where INLINEFORM1 is the distinguished start symbol, INLINEFORM2 is a finite set of nonterminals, INLINEFORM3 is a finite set of preterminals, INLINEFORM6 is a finite set of terminal symbols, and INLINEFORM7 is a finite set of rules of the form,", "id": 887, "question": "what are the state of the art methods?", "title": "Compound Probabilistic Context-Free Grammars for Grammar Induction"}, {"answers": ["Answer with content missing: (Data section) Penn Treebank (PTB)"], "context": " A compound probability distribution BIBREF19 is a distribution whose parameters are themselves random variables. These distributions generalize mixture models to the continuous case, for example in factor analysis which assumes the following generative process,", "id": 888, "question": "what english datasets were used?", "title": "Compound Probabilistic Context-Free Grammars for Grammar Induction"}, {"answers": ["Answer with content missing: (Data section) Chinese with version 5.1 of the Chinese Penn Treebank (CTB)"], "context": "", "id": 889, "question": "which chinese datasets were used?", "title": "Compound Probabilistic Context-Free Grammars for Grammar Induction"}, {"answers": ["Distributions of Followers, Friends and URLs are significantly different between the set of tweets containing fake news and those non containing them, but for Favourites, Mentions, Media, Retweets and Hashtags they are not significantly different"], "context": "10pt", "id": 890, "question": "What were their distribution results?", "title": "Characterizing Political Fake News in Twitter by its Meta-Data"}, {"answers": ["an expert annotator determined if the tweet fell under a specific category", ""], "context": "While fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States BIBREF0 . Since then, governments and corporations alike (e.g. Google BIBREF1 and Facebook BIBREF2 ) have begun efforts to tackle fake news as they can affect political decisions BIBREF3 . Yet, the ability to define, identify and stop fake news from spreading is limited.", "id": 891, "question": "How did they determine fake news tweets?", "title": "Characterizing Political Fake News in Twitter by its Meta-Data"}, {"answers": ["Viral tweets are the ones that are retweeted more than 1000 times", "those that contain a high number of retweets"], "context": "Our research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues BIBREF10 identifies two popular approaches to single-out fake news. On the one hand, the authors pointed to linguistic approaches consisting in using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. 
On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data, to identify fake news.", "id": 892, "question": "What is their definition of tweets going viral?", "title": "Characterizing Political Fake News in Twitter by its Meta-Data"}, {"answers": ["Accounts that spread fake news are mostly unverified, recently created and have on average high friends/followers ratio", ""], "context": "Previous works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information:", "id": 893, "question": "What are the characteristics of the accounts that spread fake news?", "title": "Characterizing Political Fake News in Twitter by its Meta-Data"}, {"answers": [""], "context": "For this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. Particularly, we queried Twitter's streaming API, more precisely the filter endpoint of the streaming API, using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016).", "id": 894, "question": "What is the threshold for determining that a tweet has gone viral?", "title": "Characterizing Political Fake News in Twitter by its Meta-Data"}, {"answers": ["Ground truth is not established in the paper"], "context": "The sample collected consisted on 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. Such small subset of viral tweets were retweeted on 290 841 occasions in the observed time-window.", "id": 895, "question": "How is the ground truth for fake news established?", "title": "Characterizing Political Fake News in Twitter by its Meta-Data"}, {"answers": [""], "context": "Visual dialog BIBREF0 is an interesting new task combining the research efforts from Computer Vision, Natural Language Processing and Information Retrieval. While BIBREF1 presents some tips and tricks for VQA 2.0 Challenge, we follow their guidelines for the Visual Dialog challenge 2018. Our models use attention similar to BIBREF2 to get object level image representations from Faster R-CNN model BIBREF3. We experiment with different encoder mechanisms to get representations of conversational history.", "id": 896, "question": "What was the baseline?", "title": "Ensemble based discriminative models for Visual Dialog Challenge 2018"}, {"answers": ["", ""], "context": "Common to all the models, we initialize our embedding matrix with pre-trained Glove word vectors of 300 dimensions using 6B tokens . Out of 11319 tokens present in the dataset, we found 188 tokens missing from the pre-trained Glove embeddings, so we manually map these tokens to words conveying semantically similar meaning, e.g. 
we map over ten variations of the word \u201cyes\u201d - misspelled or not picked up by tokenizer - \u201c*yes\", \u201cyesa\", \u201cyess\", \u201cytes\", \u201cyes-\", \u201cyes3\", \u201cyyes\", \u201cyees\", etc.", "id": 897, "question": "Which three discriminative models did they use?", "title": "Ensemble based discriminative models for Visual Dialog Challenge 2018"}, {"answers": [""], "context": "Ancient Chinese is the writing language in ancient China. It is a treasure of Chinese culture which brings together the wisdom and ideas of the Chinese nation and chronicles the ancient cultural heritage of China. Learning ancient Chinese not only helps people to understand and inherit the wisdom of the ancients, but also promotes people to absorb and develop Chinese culture.", "id": 898, "question": "what NMT models did they compare with?", "title": "Ancient-Modern Chinese Translation with a Large Training Dataset"}, {"answers": ["", "Ancient Chinese history records in several dynasties and articles written by celebrities during 1000BC-200BC collected from the internet "], "context": "There are four steps to build the ancient-modern Chinese translation dataset: (i) The parallel corpus crawling and cleaning. (ii) The paragraph alignment. (iii) The clause alignment based on aligned paragraphs. (iv) Augmenting data by merging aligned adjacent clauses. The most critical step is the third step.", "id": 899, "question": "Where does the ancient Chinese dataset come from?", "title": "Ancient-Modern Chinese Translation with a Large Training Dataset"}, {"answers": ["", ""], "context": "Attempts toward constructing human-like dialogue agents have met significant difficulties, such as maintaining conversation consistency BIBREF0. This is largely due to inabilities of dialogue agents to engage the user emotionally because of an inconsistent personality BIBREF1. Many agents use personality models that attempt to map personality attributes into lower dimensional spaces (e.g. the Big Five BIBREF2). However, these represent human personality at a very high-level and lack depth. They prohibit the ability to link specific and detailed personality traits to characters, and to construct large datasets where dialogue is traceable back to these traits.", "id": 900, "question": "How many different characters were in dataset?", "title": "Follow Alice into the Rabbit Hole: Giving Dialogue Agents Understanding of Human Level Attributes."}, {"answers": [""], "context": "Task completion chatbots (TCC), or task-oriented chatbots, are dialogue agents used to fulfill specific purposes, such as helping customers book airline tickets, or a government inquiry system. Examples include the AIML based chatbot BIBREF5 and DIVA Framework BIBREF6. While TCC are low cost, easily configurable, and readily available, they are restricted to working well for particular domains and tasks.", "id": 901, "question": "How does dataset model character's profiles?", "title": "Follow Alice into the Rabbit Hole: Giving Dialogue Agents Understanding of Human Level Attributes."}, {"answers": ["Metric difference between Aloha and best baseline score:\nHits@1/20: +0.061 (0.3642 vs 0.3032)\nMRR: +0.0572(0.5114 vs 0.4542)\nF1: -0.0484 (0.3901 vs 0.4385)\nBLEU: +0.0474 (0.2867 vs 0.2393)"], "context": "We collect HLA data from TV Tropes BIBREF3, a knowledge-based website dedicated to pop culture, containing information on a plethora of characters from a variety of sources. 
Similar to Wikipedia, its content is provided and edited collaboratively by a massive user-base. These attributes are determined by human viewers and their impressions of the characters, and are correlated with human-like characteristics. We believe that TV Tropes is better for our purpose of fictional character modeling than data sources used in works such as BIBREF25 shuster2019engaging because TV Tropes' content providers are rewarded for correctly providing content through community acknowledgement.", "id": 902, "question": "How big is the difference in performance between proposed model and baselines?", "title": "Follow Alice into the Rabbit Hole: Giving Dialogue Agents Understanding of Human Level Attributes."}, {"answers": ["", ""], "context": "Our task is the following, where $t$ denotes \u201ctarget\":", "id": 903, "question": "What baseline models are used?", "title": "Follow Alice into the Rabbit Hole: Giving Dialogue Agents Understanding of Human Level Attributes."}, {"answers": ["", ""], "context": "Task-oriented dialogue systems are primarily designed to search and interact with large databases which contain information pertaining to a certain dialogue domain: the main purpose of such systems is to assist the users in accomplishing a well-defined task such as flight booking BIBREF0, tourist information BIBREF1, restaurant search BIBREF2, or booking a taxi BIBREF3. These systems are typically constructed around rigid task-specific ontologies BIBREF1, BIBREF4 which enumerate the constraints the users can express using a collection of slots (e.g., price range for restaurant search) and their slot values (e.g., cheap, expensive for the aforementioned slots). Conversations are then modelled as a sequence of actions that constrain slots to particular values. This explicit semantic space is manually engineered by the system designer. It serves as the output of the natural language understanding component as well as the input to the language generation component both in traditional modular systems BIBREF5, BIBREF6 and in more recent end-to-end task-oriented dialogue systems BIBREF7, BIBREF8, BIBREF9, BIBREF3.", "id": 904, "question": "Was PolyReponse evaluated against some baseline?", "title": "PolyResponse: A Rank-based Approach to Task-Oriented Dialogue with Application in Restaurant Search and Booking"}, {"answers": [""], "context": "The PolyResponse system is powered by a single large conversational search engine, trained on a large amount of conversational and image data, as shown in Figure FIGREF2. In simple words, it is a ranking model that learns to score conversational replies and images in a given conversational context. The highest-scoring responses are then retrieved as system outputs. The system computes two sets of similarity scores: 1) $S(r,c)$ is the score of a candidate reply $r$ given a conversational context $c$, and 2) $S(p,c)$ is the score of a candidate photo $p$ given a conversational context $c$. These scores are computed as a scaled cosine similarity of a vector that represents the context ($h_c$), and a vector that represents the candidate response: a text reply ($h_r$) or a photo ($h_p$). For instance, $S(r,c)$ is computed as $S(r,c)=C cos(h_r,h_c)$, where $C$ is a learned constant. The part of the model dealing with text input (i.e., obtaining the encodings $h_c$ and $h_r$) follows the architecture introduced recently by Henderson:2019acl. 
We provide only a brief recap here; see the original paper for further details.", "id": 905, "question": "What metric is used to evaluate PolyReponse system?", "title": "PolyResponse: A Rank-based Approach to Task-Oriented Dialogue with Application in Restaurant Search and Booking"}, {"answers": [""], "context": "The model, implemented as a deep neural network, learns to respond by training on hundreds of millions context-reply $(c,r)$ pairs. First, similar to Henderson:2017arxiv, raw text from both $c$ and $r$ is converted to unigrams and bigrams. All input text is first lower-cased and tokenised, numbers with 5 or more digits get their digits replaced by a wildcard symbol #, while words longer than 16 characters are replaced by a wildcard token LONGWORD. Sentence boundary tokens are added to each sentence. The vocabulary consists of the unigrams that occur at least 10 times in a random 10M subset of the Reddit training set (see Figure FIGREF2) plus the 200K most frequent bigrams in the same random subset.", "id": 906, "question": "How does PolyResponse architecture look like?", "title": "PolyResponse: A Rank-based Approach to Task-Oriented Dialogue with Application in Restaurant Search and Booking"}, {"answers": ["English, German, Spanish, Mandarin, Polish, Russian, Korean and Serbian", ""], "context": "Photos are represented using convolutional neural net (CNN) models pretrained on ImageNet BIBREF17. We use a MobileNet model with a depth multiplier of 1.4, and an input dimension of $224 \\times 224$ pixels as in BIBREF18. This provides a $1,280 \\times 1.4 = 1,792$-dimensional representation of a photo, which is then passed through a single hidden layer of dimensionality $1,024$ with ReLU activation, before being passed to a hidden layer of dimensionality 512 with no activation to provide the final representation $h_p$.", "id": 907, "question": "In what 8 languages is PolyResponse engine used for restourant search and booking system?", "title": "PolyResponse: A Rank-based Approach to Task-Oriented Dialogue with Application in Restaurant Search and Booking"}, {"answers": [""], "context": "Text summarization generates summaries from input documents while keeping salient information. It is an important task and can be applied to several real-world applications. Many methods have been proposed to solve the text summarization problem BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . There are two main text summarization techniques: extractive and abstractive. Extractive summarization generates summary by selecting salient sentences or phrases from the source text, while abstractive methods paraphrase and restructure sentences to compose the summary. We focus on abstractive summarization in this work as it is more flexible and thus can generate more diverse summaries.", "id": 908, "question": "Why masking words in the decoder is helpful?", "title": "Pretraining-Based Natural Language Generation for Text Summarization"}, {"answers": ["", ""], "context": "In this paper, we focus on single-document multi-sentence summarization and propose a supervised abstractive model based on the neural attentive sequence-to-sequence framework which consists of two parts: a neural network for the encoder and another network for the decoder. The encoder encodes the input sequence to intermediate representation and the decoder predicts one word at a time step given the input sequence representation vector and previous decoded output. 
The goal of the model is to maximize the probability of generating the correct target sequences. In the encoding and generation process, the attention mechanism is used to concentrate on the most important positions of text. The learning objective of most sequence-to-sequence models is to minimize the negative log likelihood of the generated sequence as following equation shows, where $y^*_i$ is the i-th ground-truth summary token. ", "id": 909, "question": "What is the ROUGE score of the highest performing model?", "title": "Pretraining-Based Natural Language Generation for Text Summarization"}, {"answers": ["", ""], "context": "Recently, context encoders such as ELMo, GPT, and BERT have been widely used in many NLP tasks. These models are pre-trained on a huge unlabeled corpus and can generate better contextualized token embeddings, thus the approaches built on top of them can achieve better performance.", "id": 910, "question": "How are the different components of the model trained? Is it trained end-to-end?", "title": "Pretraining-Based Natural Language Generation for Text Summarization"}, {"answers": [""], "context": "In this section, we describe the structure of our model, which learns to generate an abstractive multi-sentence summary from a given source document.", "id": 911, "question": "When is this paper published?", "title": "Pretraining-Based Natural Language Generation for Text Summarization"}, {"answers": [""], "context": "Question answering (QA) has been a blooming research field for the last decade. Selection-based QA implies a family of tasks that find answer contexts from large data given questions in natural language. Three tasks have been proposed for selection-based QA. Given a document, answer extraction BIBREF0 , BIBREF1 finds answer phrases whereas answer selection BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 and answer triggering BIBREF6 , BIBREF7 find answer sentences instead, although the presence of the answer context is not assumed within the provided document for answer triggering but it is for the other two tasks. Recently, various QA tasks that are not selection-based have been proposed BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 ; however, selection-based QA remains still important because of its practical value to real applications (e.g., IBM Watson, MIT Start).", "id": 912, "question": "Can their indexing-based method be applied to create other QA datasets in other domains, and not just Wikipedia?", "title": "Analysis of Wikipedia-based Corpora for Question Answering"}, {"answers": ["", ""], "context": "Four publicly available corpora are selected for our analysis. These corpora are based on Wikipedia, so more comparable than the others, and have already been used for the evaluation of several QA systems.", "id": 913, "question": "Do they employ their indexing-based method to create a sample of a QA Wikipedia dataset?", "title": "Analysis of Wikipedia-based Corpora for Question Answering"}, {"answers": ["", "7"], "context": "All corpora provide datasets/splits for answer selection, whereas only (WikiQA, SQuAD) and (WikiQA, SelQA) provide datasets for answer extraction and answer triggering, respectively. SQuAD is much larger in size although questions in this corpus are often paraphrased multiple times. On the contrary, SQuAD's average candidates per question ( INLINEFORM0 ) is the smallest because SQuAD extracts answer candidates from paragraphs whereas the others extract them from sections or infoboxes that consist of bigger contexts. 
Although InfoboxQA is larger than WikiQA or SelQA, the number of token types ( INLINEFORM1 ) in InfoboxQA is smaller than those two, due to the repetitive nature of infoboxes.", "id": 914, "question": "How many question types do they find in the datasets analyzed?", "title": "Analysis of Wikipedia-based Corpora for Question Answering"}, {"answers": ["They compare the tasks that the datasets are suitable for, average number of answer candidates per question, number of token types, average answer candidate lengths, average question lengths, question-answer word overlap."], "context": "This section describes another selection-based QA task, called answer retrieval, that finds the answer context from a larger dataset, the entire Wikipedia. SQuAD provides no mapping of the answer contexts to Wikipedia, whereas WikiQA and SelQA provide mappings; however, their data do not come from the same version of Wikipedia. We propose an automatic way of mapping the answer contexts from all corpora to the same version of Wikipeda so they can be coherently used for answer retrieval.", "id": 915, "question": "How do they analyze contextual similaries across datasets?", "title": "Analysis of Wikipedia-based Corpora for Question Answering"}, {"answers": ["best model achieves 0.94 F1 score for Wikipedia and Twitter datasets and 0.95 F1 on Formspring dataset"], "context": "Cyberbullying has been defined by the National Crime Prevention Council as the use of the Internet, cell phones or other devices to send or post text or images intended to hurt or embarrass another person. Various studies have estimated that between to 10% to 40% of internet users are victims of cyberbullying BIBREF0 . Effects of cyberbullying can range from temporary anxiety to suicide BIBREF1 . Many high profile incidents have emphasized the prevalence of cyberbullying on social media. Most recently in October 2017, a Swedish model Arvida Bystr\u00f6m was cyberbullied to the extent of receiving rape threats after she appeared in an advertisement with hairy legs.", "id": 916, "question": "What were their performance results?", "title": "Deep Learning for Detecting Cyberbullying Across Multiple Social Media Platforms"}, {"answers": ["", ""], "context": "Please refer to Table TABREF7 for summary of datasets used. We performed experiments using large, diverse, manually annotated, and publicly available datasets for cyberbullying detection in social media. We cover three different types of social networks: teen oriented Q&A forum (Formspring), large microblogging platform (Twitter), and collaborative knowledge repository (Wikipedia talk pages). Each dataset addresses a different topic of cyberbullying. Twitter dataset contains examples of racism and sexism. Wikipedia dataset contains examples of personal attack. However, Formspring dataset is not specifically about any single topic. All three datasets have the problem of class imbalance where posts labeled as cyberbullying are in the minority as compared to neutral posts. Variation in the number of posts across datasets also affects vocabulary size that represents the number of distinct words encountered in the dataset. We measure the size of a post in terms of the number of words in the post. For each dataset, there are only a few posts with large size. We truncate such large posts to the size of post ranked at 95 percentile in that dataset. For example, in Wikipedia dataset, the largest post has 2846 words. However, size of post ranked at 95 percentile in that dataset is only 231. 
Any post larger than size 231 in Wikipedia dataset will be truncated by considering only first 231 words. This truncation affects only a small minority of posts in each dataset. However, it is required for efficiently training various models in our experiments. Details of each dataset are as follows.", "id": 917, "question": "What cyberbulling topics did they address?", "title": "Deep Learning for Detecting Cyberbullying Across Multiple Social Media Platforms"}, {"answers": ["", ""], "context": "The automatic identification, extraction and representation of the information conveyed in texts is a key task nowadays. In fact, this research topic is increasing its relevance with the exponential growth of social networks and the need to have tools that are able to automatically process them BIBREF0.", "id": 918, "question": "Were any of the pipeline components based on deep learning models?", "title": "From Textual Information Sources to Linked Data in the Agatha Project"}, {"answers": [""], "context": "The framework for processing Portuguese texts is depicted in Fig. FIGREF2, which illustrates how relevant pieces of information are extracted from the text. Namely, input files (Portuguese texts) go through a series of modules: part-of-speech tagging, named entity recognition, dependency parsing, semantic role labeling, subject-verb-object triple extraction, and lexicon matching.", "id": 919, "question": "How is the effectiveness of this pipeline approach evaluated?", "title": "From Textual Information Sources to Linked Data in the Agatha Project"}, {"answers": ["", ""], "context": "Prepositional Phrase (PP) attachment disambiguation is an important problem in NLP, for it often gives rise to incorrect parse trees . Statistical parsers often predict incorrect attachment for prepositional phrases. For applications like Machine Translation, incorrect PP-attachment leads to serious errors in translation. Several approaches have been proposed to solve this problem. We attempt to tackle this problem for English. English is a syntactically ambiguous language with respect to PP attachments. For example, consider the following sentence where the prepositional phrase with pockets may attach either to the verb washed or to the noun jeans.", "id": 920, "question": "What is the size of the parallel corpus used to train the model constraints?", "title": "Prepositional Attachment Disambiguation Using Bilingual Parsing and Alignments"}, {"answers": [""], "context": "A number of supervised and unsupervised approaches for solving the PP-attachment problem have been proposed in the literature. Ratnaparkhi:94 use a Maximum Entropy Model for solving the PP-attachment decision. Schwartz:03 propose an unsupervised approach for solving PP attachment using multilingual aligned data. They transform the data into high-level linguistic representations and use it make reattachment decisions. The intuition is similar to our work, but the approach is entirely different. Brill:94 discuss a transformation-based rule derivation method for PP-attachment disambiguation. It is a simple learning algorithm which derives a set of transformation rules from training corpus, which are then used for solving the PP-attachment problem. Stetina:97 make use of the semantic dictionary to solve the problem of disambiguating PP attachments. Their work describes use of word sense disambiguation (WSD) for both supervised and unsupervised techniques. 
Agirre Agirre:08 and Medimi Medimi:07 have used WSD-based strategies in different capacities to solve the problem of PP-attachment. Olteanu:05 have attempted to solve the pp-attachment problem as a classification problem of attachment either to the preceding verb or the noun, and have used Support Vector Machines (SVMs) that use complex syntactic and semantic features.", "id": 921, "question": "How does enforcing agreement between parse trees work across different languages?", "title": "Prepositional Attachment Disambiguation Using Bilingual Parsing and Alignments"}, {"answers": ["", "LORELEI datasets of Uzbek, Mandarin and Turkish"], "context": "Topic identification (topic ID) on speech aims to identify the topic(s) for given speech recordings, referred to as spoken documents, where the topics are a predefined set of classes or labels. This task is typically formulated as a three-step process. First, speech is tokenized into words or phones by automatic speech recognition (ASR) systems BIBREF0 , or by limited-vocabulary keyword spotting BIBREF1 . Second, standard text-based processing techniques are applied to the resulting tokenizations, and produce a vector representation for each spoken document, typically a bag-of-words multinomial representation, or a more compact vector given by probabilistic topic models BIBREF2 , BIBREF3 . Finally, topic ID is performed on the spoken document representations by supervised training of classifiers, such as Bayesian classifiers and support vector machines (SVMs).", "id": 922, "question": "What datasets are used to assess the performance of the system?", "title": "Topic Identification for Speech without ASR"}, {"answers": [""], "context": "", "id": 923, "question": "How is the vocabulary of word-like or phoneme-like units automatically discovered?", "title": "Topic Identification for Speech without ASR"}, {"answers": ["The graph representation appears to be semi-supervised. It is included in the learning pipeline for the medical recommendation, where the attention model is learned. (There is some additional evidence that is unavailable in parsed text)"], "context": "The availability of massive electronic health records (EHR) data and the advances of deep learning technologies have provided unprecedented resource and opportunity for predictive healthcare, including the computational medication recommendation task. A number of deep learning models were proposed to assist doctors in making medication recommendation BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . They often learn representations for medical entities (e.g., patients, diagnosis, medications) from patient EHR data, and then use the learned representations to predict medications that are suited to the patient's health condition.", "id": 924, "question": "IS the graph representation supervised?", "title": "Pre-training of Graph Augmented Transformers for Medication Recommendation"}, {"answers": ["There is nothing specific about the approach that depends on medical recommendations. The approach combines graph data and text data into a single embedding.", "It learns a representation of medical records. The learned representation (embeddings) can be used for other predictive tasks involving information from electronic health records."], "context": "Medication Recommendation Medication Recommendation can be categorized into instance-based and longitudinal recommendation methods BIBREF1 . Instance-based methods focus on current health conditions. 
Among them, Leap BIBREF9 formulates a multi-instance multi-label learning framework and proposes a variant of sequence-to-sequence model based on content-attention mechanism to predict combination of medicines given patient's diagnoses. Longitudinal-based methods leverage the temporal dependencies among clinical events, see BIBREF10 , BIBREF11 , BIBREF12 . Among them, RETAIN BIBREF10 uses a two-level neural attention model to detect influential past visits and significant clinical variables within those visits for improved medication recommendation.", "id": 925, "question": "Is the G-BERT model useful beyond the task considered?", "title": "Pre-training of Graph Augmented Transformers for Medication Recommendation"}, {"answers": [""], "context": "The task of interpreting and following natural language (NL) navigation instructions involves interleaving different signals, at the very least the linguistic utterance and the representation of the world. For example, in turn right on the first intersection, the instruction needs to be interpreted, and a specific object in the world (the intersection) needs to be located in order to execute the instruction. In NL navigation studies, the representation of the world may be provided via visual sensors BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 or as a symbolic world representation. This work focuses on navigation based on a symbolic world representation (referred to as a map).", "id": 926, "question": "How well did the baseline perform?", "title": "RUN through the Streets: A New Dataset and Baseline Models for Realistic Urban Navigation"}, {"answers": ["", ""], "context": "In this work we address the task of following a sequence of NL navigation instructions given in colloquial language based on a dense urban map.", "id": 927, "question": "What is the baseline?", "title": "RUN through the Streets: A New Dataset and Baseline Models for Realistic Urban Navigation"}, {"answers": ["", ""], "context": "Neural machine translation (NMT) BIBREF0 , BIBREF1 is widely applied for machine translation (MT) in recent years and focuses on popular language pairs such as English INLINEFORM0 French, English INLINEFORM1 German, English INLINEFORM2 Chinese or English INLINEFORM3 Japanese. NMT has obtained state-of-the-art performance on those language pairs compared to the traditional statistical machine translation (SMT) when given enough data BIBREF2 , BIBREF3 . Furthermore, due to the ability of feature learning, NMT systems can be trained end-to-end with pure parallel texts and minimal linguistic knowledge of the languages involved. Thus it makes training NMT for a new language pair much easier, more scalable and robust. Nevertheless, NMT has not been employed in many low-resourced language pairs since in those scenarios, data scarcity often limits the learning ability of neural methods. In contrast, combinating complicated linguistic-driven features in a typical log-linear framework still keeps SMT the best approach in many translation directions but also hard to apply to new domains or to other language pairs.", "id": 928, "question": "what methods were used to reduce data sparsity effects?", "title": "Combining Advanced Methods in Japanese-Vietnamese Neural Machine Translation"}, {"answers": ["", ""], "context": "In this section, we will describe the general architecture of NMT as a kind of sequence-to-sequence modeling framework. 
In this kind of sequence-to-sequence modeling framework, often there is an encoder trying to encode context information from the input sequence and a decoder to generate one item of the output sequence at a time based on the context of both input and output sequences. Besides, an additional component, named attention, exists in between, deciding which parts of the input sequence the decoder should pay attention in order to choose which to output next. In other words, this attention component calculates the context relevant to the decision of the decoder at the considering time. Those components as a whole constitute a large trainable neural architecture called the famous attention-based encoder-decoder framework. This framework becomes popular in many sequence-to-sequence tasks.", "id": 929, "question": "what was the baseline?", "title": "Combining Advanced Methods in Japanese-Vietnamese Neural Machine Translation"}, {"answers": [""], "context": "One of the most severe problems of NMT is dealing with the rare words, which are not in the short lists of the vocabularies, i.e. out-of-vocabulary (OOV) words, or do not appear in the training set at all. On one hand, we would like to have fewer OOV words by increasing the size of the short lists. On the other hand, we need our neural network to learn fast and has a good generalization capability on the unseen words as well.", "id": 930, "question": "did they collect their own data?", "title": "Combining Advanced Methods in Japanese-Vietnamese Neural Machine Translation"}, {"answers": [""], "context": "Vietnamese From the linguistic point of view, each sequence of characters between two white spaces in Vietnamese texts cannot be considered as a word since it does not always have a full meaning to stand alone. For example, in the sentence \u201ch\u00f4m nay l\u00e0 sinh nh\u1eadt c\u1ee7a t\u00f4i\u201d (English equivalence: \u201cToday is my birthday\u201d), \u201ch\u00f4m\u201d and \u201cnay\u201d are not two words, they together form a word, which means \u201ctoday\u201d. Nevertheless, \u201ch\u00f4m\u201d and \u201cnay\u201d somehow still bear some meaning: \u201ch\u00f4m\u201d-\u201cday\u201d, \u201cnay\u201d-\u201cnow\u201d. Similarly, \u201csinh\u201d-\u201cbirth\u201d and \u201cnh\u1eadt\u201d-\u201cdate\u201d also form the word \u201csinh nh\u1eadt\u201d-\u201cbirthday\u201d but they are not two distinct words. We could also call them subwords.", "id": 931, "question": "what japanese-vietnamese dataset do they use?", "title": "Combining Advanced Methods in Japanese-Vietnamese Neural Machine Translation"}, {"answers": [""], "context": "Sequence-to-sequence (seq2seq) transformations have recently proven to be a successful framework for several natural language processing tasks, like: machine translation (MT) BIBREF0 , BIBREF1 , speech recognition BIBREF2 , speech synthesis BIBREF3 , natural language inference BIBREF4 and others. However, the success of these models depends on the availability of large amounts of directly annotated data for the task at hand (like translation examples, text segments and their speech recordings, etc.). 
This is a severe limitation for tasks where data is not abundantly available as well as for low-resource languages.", "id": 932, "question": "How do they measure style transfer success?", "title": "Grammatical Error Correction and Style Transfer via Zero-shot Monolingual Translation"}, {"answers": ["", "Data already contain errors"], "context": "As mentioned in the introduction, our approach is based on the idea of zero-shot MT BIBREF11 . There the authors show that after training a single model to translate from Portuguese to English as well as from English to Spanish, it can also translate Portuguese into Spanish, without seeing any translation examples for this language pair. We use the zero-shot effect to achieve monolingual translation by training the model on bilingual examples in both directions, and then doing translation into the same language as the input: illustrated on Figure FIGREF1 .", "id": 933, "question": "Do they introduce errors in the data or does the data already contain them?", "title": "Grammatical Error Correction and Style Transfer via Zero-shot Monolingual Translation"}, {"answers": ["", ""], "context": "We use three languages in our experiments: English, Estonian and Latvian. All three have different characteristics, for example Latvian and (especially) Estonian are morphologically complex and have loose word order, while English has a strict word order and the morphology is much simpler. Most importantly, all three languages have error-corrected corpora for testing purposes, though work on their automatic grammatical error correction is extremely limited (see Section SECREF3 ).", "id": 934, "question": "What error types is their model more reliable for?", "title": "Grammatical Error Correction and Style Transfer via Zero-shot Monolingual Translation"}, {"answers": [""], "context": "For Europarl, JRC-Acquis and EMEA we use all data available for English-Estonian, English-Latvian and Estonian-Latvian language pairs. From OpenSubtitles2018 we take a random subset of 3M sentence pairs for English-Estonian, which is still more than English-Latvian and Estonian-Latvian (below 1M; there we use the whole corpus). This is done to balance the corpora representation and to limit the size of training data.", "id": 935, "question": "How does their parallel data differ in terms of style?", "title": "Grammatical Error Correction and Style Transfer via Zero-shot Monolingual Translation"}, {"answers": [""], "context": "Many machine learning models in question answering tasks often involve matching mechanism. For example, in factoid question answering such as SQuAD BIBREF1 , one needs to match between query and corpus in order to find out the most possible fragment as answer. In multiple choice question answering, such as MC Test BIBREF2 , matching mechanism can also help make the correct decision.", "id": 936, "question": "How do they split text to obtain sentence levels?", "title": "Query-based Attention CNN for Text Similarity Map"}, {"answers": ["", ""], "context": "In this question answering task, a reading passage , a query and several answer choices are given. P denotes the passage, Q denotes query and C denotes one of the multiple choices. 
The target of the model is to choose a correct answer A from multiple choices based on informations of P and Q.", "id": 937, "question": "Do they experiment with their proposed model on any other dataset other than MovieQA?", "title": "Query-based Attention CNN for Text Similarity Map"}, {"answers": ["Introduce a \"Refinement Adjustment LSTM-based component\" to the decoder"], "context": "Natural Language Generation (NLG) plays a critical role in Spoken Dialogue Systems (SDS) with task is to convert a meaning representation produced by the Dialogue Manager into natural language utterances. Conventional approaches still rely on comprehensive hand-tuning templates and rules requiring expert knowledge of linguistic representation, including rule-based BIBREF0 , corpus-based n-gram models BIBREF1 , and a trainable generator BIBREF2 .", "id": 938, "question": "What is the difference of the proposed model with a standard RNN encoder-decoder?", "title": "Natural Language Generation for Spoken Dialogue System using RNN Encoder-Decoder Networks"}, {"answers": ["NLG datasets", "NLG datasets"], "context": "Recently, RNNs-based models have shown promising performance in tackling the NLG problems. BIBREF16 proposed a generator using RNNs to create Chinese poetry. BIBREF11 , BIBREF17 , BIBREF18 also used RNNs in a multi-modal setting to solve image captioning tasks. The RNN-based Sequence to Sequence models have applied to solve variety of tasks: conversational modeling BIBREF6 , BIBREF7 , BIBREF19 , machine translation BIBREF20 , BIBREF21 ", "id": 939, "question": "Does the model evaluated on NLG datasets or dialog datasets?", "title": "Natural Language Generation for Spoken Dialogue System using RNN Encoder-Decoder Networks"}, {"answers": ["", ""], "context": "Learning the distributed representation for long spans of text from its constituents has been a key step for various natural language processing (NLP) tasks, such as text classification BIBREF0 , BIBREF1 , semantic matching BIBREF2 , BIBREF3 , and machine translation BIBREF4 . Existing deep learning approaches take a compositional function with different forms to compose word vectors recursively until obtaining a sentential representation. Typically, these compositional functions involve recurrent neural networks BIBREF5 , BIBREF6 , convolutional neural networks BIBREF7 , BIBREF8 , and tree-structured neural networks BIBREF9 , BIBREF10 .", "id": 940, "question": "What tasks do they experiment with?", "title": "Dynamic Compositional Neural Networks over Tree Structure"}, {"answers": [""], "context": "In this section, we briefly describe the tree-structured neural networks.", "id": 941, "question": "What is the meta knowledge specifically?", "title": "Dynamic Compositional Neural Networks over Tree Structure"}, {"answers": [""], "context": "Singing is an important way of human expression and the techniques of singing synthesis have broad applications in different prospects including virtual human, movie dubbing and so on. Traditional singing synthesize systems are based on concatenative BIBREF1 or HMM BIBREF2 based approaches. With the success of deep learning in Text-to-Speech, some neural singing synthesis methods have also been proposed recently. For example, BIBREF3 introduces a singing synthesis method using an architecture similar to WaveNet BIBREF4. 
It adopts lyrics and notes as input and generates vocoder features autoregressively for final singing voice synthesis.", "id": 942, "question": "Are there elements, other than pitch, that can potentially result in out of key converted singing?", "title": "PitchNet: Unsupervised Singing Voice Conversion with Pitch Adversarial Network"}, {"answers": ["", "Automatic: Normalized cross correlation (NCC)\nManual: Mean Opinion Score (MOS)"], "context": "Our method follows the autoencoder architecture in BIBREF0 except that there is an additional pitch regression network to separate pitch information out of the latent space. The architecture of PitchNet is illustrated in Fig. FIGREF1. It consists of five parts, an encoder, a decoder, a Look Up Table (LUT) of speaker embedding vectors, a singer classification network, and a pitch regression network.", "id": 943, "question": "How is the quality of singing voice measured?", "title": "PitchNet: Unsupervised Singing Voice Conversion with Pitch Adversarial Network"}, {"answers": ["", ""], "context": "Long short term memory (LSTM) units BIBREF1 are popular for many sequence modeling tasks and are used extensively in language modeling. A key to their success is their articulated gating structure, which allows for more control over the information passed along the recurrence. However, despite the sophistication of the gating mechanisms employed in LSTMs and similar recurrent units, the input and context vectors are treated with simple linear transformations prior to gating. Non-linear transformations such as convolutions BIBREF2 have been used, but these have not achieved the performance of well regularized LSTMs for language modeling BIBREF3 .", "id": 944, "question": "what data did they use?", "title": "Pyramidal Recurrent Unit for Language Modeling"}, {"answers": ["Variational LSTM, CharCNN, Pointer Sentinel-LSTM, RHN, NAS Cell, SRU, QRNN, RAN, 4-layer skip-connection LSTM, AWD-LSTM, Quantized LSTM"], "context": "Multiple methods, including a variety of gating structures and transformations, have been proposed to improve the performance of recurrent neural networks (RNNs). We first describe these approaches and then provide an overview of recent work in language modeling.", "id": 945, "question": "what previous RNN models do they compare with?", "title": "Pyramidal Recurrent Unit for Language Modeling"}, {"answers": ["", ""], "context": "While most NLP resources are English-specific, there have been several recent efforts to build multilingual benchmarks. One possibility is to collect and annotate data in multiple languages separately BIBREF0, but most existing datasets have been created through translation BIBREF1, BIBREF2. This approach has two desirable properties: it relies on existing professional translation services rather than requiring expertise in multiple languages, and it results in parallel evaluation sets that offer a meaningful measure of the cross-lingual transfer gap of different models. 
The resulting multilingual datasets are generally used for evaluation only, relying on existing English datasets for training.", "id": 946, "question": "What are examples of these artifacts?", "title": "Translation Artifacts in Cross-lingual Transfer Learning"}, {"answers": ["English\nFrench\nSpanish\nGerman\nGreek\nBulgarian\nRussian\nTurkish\nArabic\nVietnamese\nThai\nChinese\nHindi\nSwahili\nUrdu\nFinnish", ""], "context": "Current cross-lingual models work by pre-training multilingual representations using some form of language modeling, which are then fine-tuned on the relevant task and transferred to different languages. Some authors leverage parallel data to that end BIBREF5, BIBREF6, but training a model akin to BERT BIBREF7 on the combination of monolingual corpora in multiple languages is also effective BIBREF8. Closely related to our work, BIBREF4 showed that replacing segments of the training data with their translation during fine-tuning is helpful. However, they attribute this behavior to a data augmentation effect, which we believe should be reconsidered given the new evidence we provide.", "id": 947, "question": "What are the languages they use in their experiment?", "title": "Translation Artifacts in Cross-lingual Transfer Learning"}, {"answers": [""], "context": "Most benchmarks covering a wide set of languages have been created through translation, as it is the case of XNLI BIBREF1 for NLI, PAWS-X BIBREF9 for adversarial paraphrase identification, and XQuAD BIBREF2 and MLQA BIBREF10 for Question Answering (QA). A notable exception is TyDi QA BIBREF0, a contemporaneous QA dataset that was separately annotated in 11 languages. Other cross-lingual datasets leverage existing multilingual resources, as it is the case of MLDoc BIBREF11 for document classification and Wikiann BIBREF12 for named entity recognition. Concurrent to our work, BIBREF13 combine some of these datasets into a single multilingual benchmark, and evaluate some well-known methods on it.", "id": 948, "question": "Does the professional translation or the machine translation introduce the artifacts?", "title": "Translation Artifacts in Cross-lingual Transfer Learning"}, {"answers": ["", ""], "context": "Several studies have shown that NLI datasets like SNLI BIBREF14 and MultiNLI BIBREF15 contain spurious patterns that can be exploited to obtain strong results without making real inferential decisions. For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues on their lexical choice and sentence length. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap. Several authors have worked on adversarial datasets to diagnose these issues and provide a more challenging benchmark BIBREF19, BIBREF20, BIBREF21. Besides NLI, other tasks like QA have also been found to be susceptible to annotation artifacts BIBREF22, BIBREF23. While previous work has focused on the monolingual scenario, we show that translation can interfere with these artifacts in multilingual settings.", "id": 949, "question": "Do they recommend translating the premise and hypothesis together?", "title": "Translation Artifacts in Cross-lingual Transfer Learning"}, {"answers": [""], "context": "Translated texts are known to have unique features like simplification, explicitation, normalization and interference, which are referred to as translationese BIBREF24. 
This phenomenon has been reported to have a notable impact in machine translation evaluation BIBREF25, BIBREF26. For instance, back-translation brings large BLEU gains for reversed test sets (i.e. when translationese is on the source side and original text is used as reference), but its effect diminishes in the natural direction BIBREF27. While connected, the phenomenon we analyze is different in that it arises from translation inconsistencies due to the lack of context, and affects cross-lingual transfer learning rather than machine translation.", "id": 950, "question": "Is the improvement over state-of-the-art statistically significant?", "title": "Translation Artifacts in Cross-lingual Transfer Learning"}, {"answers": [""], "context": "Our goal is to analyze the effect of both human and machine translation in cross-lingual models. For that purpose, the core idea of our work is to (i) use machine translation to either translate the training set into other languages, or generate English paraphrases of it through back-translation, and (ii) evaluate the resulting systems on original, human translated and machine translated test sets in comparison with systems trained on original data. We next describe the models used in our experiments (\u00a7SECREF6), the specific training variants explored (\u00a7SECREF8), and the evaluation procedure followed (\u00a7SECREF10).", "id": 951, "question": "What are examples of these artifacts?", "title": "Translation Artifacts in Cross-lingual Transfer Learning"}, {"answers": [""], "context": "We experiment with two models that are representative of the state-of-the-art in monolingual and cross-lingual pre-training: (i) Roberta BIBREF28, which is an improved version of BERT that uses masked language modeling to pre-train an English Transformer model, and (ii) XLM-R BIBREF8, which is a multilingual extension of the former pre-trained on 100 languages. In both cases, we use the large models released by the authors under the fairseq repository. As discussed next, we explore different variants of the training set to fine-tune each model on different tasks. At test time, we try both machine translating the test set into English (Translate-Test) and, in the case of XLM-R, using the actual test set in the target language (Zero-Shot).", "id": 952, "question": "What languages do they use in their experiments?", "title": "Translation Artifacts in Cross-lingual Transfer Learning"}, {"answers": [""], "context": "Assembling training corpora of annotated natural language examples in specialized domains such as biomedicine poses considerable challenges. Experts with the requisite domain knowledge to perform high-quality annotation tend to be expensive, while lay annotators may not have the necessary knowledge to provide high-quality annotations. A practical approach for collecting a sufficiently large corpus would be to use crowdsourcing platforms like Amazon Mechanical Turk (MTurk). However, crowd workers in general are likely to provide noisy annotations BIBREF0 , BIBREF1 , BIBREF2 , an issue exacerbated by the technical nature of specialized content. 
Some of this noise may reflect worker quality and can be modeled BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 , but for some instances lay people may simply lack the domain knowledge to provide useful annotation.", "id": 953, "question": "How much higher quality is the resulting annotated data?", "title": "Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction"}, {"answers": ["Annotations from experts are used if they have already been collected."], "context": "Crowdsourcing annotation is now a well-studied problem BIBREF7 , BIBREF0 , BIBREF1 , BIBREF2 . Due to the noise inherent in such annotations, there have also been considerable efforts to develop aggregation models that minimize noise BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 .", "id": 954, "question": "How do they match annotators to instances?", "title": "Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction"}, {"answers": ["57,505 sentences", "57,505 sentences"], "context": "Our specific application concerns annotating abstracts of articles that describe the conduct and results of randomized controlled trials (RCTs). Experimentation in this domain has become easy with the recent release of the EBM-NLP BIBREF5 corpus, which includes a reasonably large training dataset annotated via crowdsourcing, and a modest test set labeled by individuals with advanced medical training. More specifically, the training set comprises 4,741 medical article abstracts with crowdsourced annotations indicating snippets (sequences) that describe the Participants (p), Interventions (i), and Outcome (o) elements of the respective RCT, and the test set is composed of 191 abstracts with p, i, o sequence annotations from three medical experts.", "id": 955, "question": "How much data is needed to train the task-specific encoder?", "title": "Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction"}, {"answers": ["", ""], "context": "The test set includes annotations from both crowd workers and domain experts. We treat the latter as ground truth and then define the difficulty of sentences in terms of the observed agreement between expert and lay annotators. Formally, for annotation task $t$ and instance $i$ : ", "id": 956, "question": "What kind of out-of-domain data?", "title": "Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction"}, {"answers": [""], "context": "Our definition of difficulty is derived from agreement between expert and crowd annotations for the test data, and agreement between a predictive model and crowd annotations in the training data. It is reasonable to ask if these measures are related to inter-annotator agreement, a metric often used in language technology research to identify ambiguous or difficult items. 
Here we explicitly verify that our definition of difficulty only weakly correlates with inter-annotator agreement.", "id": 957, "question": "Is an instance a sentence or an IE tuple?", "title": "Predicting Annotation Difficulty to Improve Task Routing and Model Performance for Biomedical Information Extraction"}, {"answers": ["people in the US that use Amazon Mechanical Turk", ""], "context": "As social media, specially Twitter, takes on an influential role in presidential elections in the U.S., natural language processing of political tweets BIBREF0 has the potential to help with nowcasting and forecasting of election results as well as identifying the main issues with a candidate \u2013 tasks of much interest to journalists, political scientists, and campaign organizers BIBREF1. As a methodology to obtain training data for a machine learning system that analyzes political tweets, BIBREF2 devised a crowdsourcing scheme with variable crowdworker numbers based on the difficulty of the annotation task. They provided a dataset of tweets where the sentiments towards political candidates were labeled both by experts in political communication and by crowdworkers who were likely not domain experts. BIBREF2 revealed that crowdworkers can match expert performance relatively accurately and in a budget-efficient manner. Given this result, the authors envisioned future work in which groundtruth labels would be crowdsourced for a large number of tweets and then used to design an automated NLP tool for political tweet analysis.", "id": 958, "question": "Who are the crowdworkers?", "title": "Performance Comparison of Crowdworkers and NLP Tools onNamed-Entity Recognition and Sentiment Analysis of Political Tweets"}, {"answers": ["", ""], "context": "NLP toolkits typically have the following capabilities: tokenization, part-of-speech (PoS) tagging, chunking, named entity recognition and sentiment analysis. In a study by BIBREF3, it is shown that the well-known NLP toolkits NLTK BIBREF4, Stanford CoreNLP BIBREF5, and TwitterNLP BIBREF6 have tokenization, PoS tagging and NER modules in their pipelines. There are two main approaches for NER: (1) rule-based and (2) statistical or machine learning based. The most ubiquitous algorithms for sequence tagging use Hidden Markov Models BIBREF7, Maximum Entropy Markov Models BIBREF7, BIBREF8, or Conditional Random Fields BIBREF9. Recent works BIBREF10, BIBREF11 have used recurrent neural networks with attention modules for NER.", "id": 959, "question": "Which toolkits do they use?", "title": "Performance Comparison of Crowdworkers and NLP Tools onNamed-Entity Recognition and Sentiment Analysis of Political Tweets"}, {"answers": ["neutral sentiment"], "context": "We used the 1,000-tweet dataset by BIBREF2 that contains the named-entities labels and entity-level sentiments for each of the four 2016 presidential primary candidates Bernie Sanders, Donald Trump, Hillary Clinton, and Ted Cruz, provided by crowdworkers, and by experts in political communication, whose labels are considered groundtruth. The crowdworkers were located in the US and hired on the BIBREF22 platform. 
For the task of entity-level sentiment analysis, a 3-scale rating of \"negative,\" \"neutral,\" and \"positive\" was used by the annotators.", "id": 960, "question": "Which sentiment class is the most accurately predicted by ELS systems?", "title": "Performance Comparison of Crowdworkers and NLP Tools onNamed-Entity Recognition and Sentiment Analysis of Political Tweets"}, {"answers": [""], "context": "The dataset of 1,000 randomly selected tweets contains more than twice as many tweets about Trump than about the other candidates. In the named-entity recognition experiment, the average CCR of crowdworkers was 98.6%, while the CCR of the automated systems ranged from 77.2% to 96.7%. For four of the automated systems, detecting the entity Trump was more difficult than the other entities (e.g., spaCy 72.7% for the entity Trump vs. above 91% for the other entities). An example of incorrect NER is shown in Figure FIGREF1 top. The difficulties the automated tools had in NER may be explained by the fact that the tools were not trained on tweets, except for TwitterNLP, which was not in active development when the data was created BIBREF1.", "id": 961, "question": "Is datasets for sentiment analysis balanced?", "title": "Performance Comparison of Crowdworkers and NLP Tools onNamed-Entity Recognition and Sentiment Analysis of Political Tweets"}, {"answers": [""], "context": "Our results show that existing NLP systems cannot accurately perform sentiment analysis of political tweets in the dataset we experimented with. Labeling by humans, even non-expert crowdworkers, yields accuracy results that are well above the results of existing automated NLP systems. In future work we will therefore use a crowdworker-labeled dataset to train a new machine-learning based NLP system for tweet analysis. We will ensure that the training data is balanced among classes. Our plan is to use state-of-the-art deep neural networks and compare their performance for entity-level sentiment analysis of political tweets.", "id": 962, "question": "What measures are used for evaluation?", "title": "Performance Comparison of Crowdworkers and NLP Tools onNamed-Entity Recognition and Sentiment Analysis of Political Tweets"}, {"answers": ["BOW-LR, BOW-RF. TFIDF-RF, TextCNN, C-TextCNN", ""], "context": "Emotion detection has long been a topic of interest to scholars in natural language processing (NLP) domain. Researchers aim to recognize the emotion behind the text and distribute similar ones into the same group. Establishing an emotion classifier can not only understand each user's feeling but also be extended to various application, for example, the motivation behind a user's interests BIBREF0. Based on releasing of large text corpus on social media and the emotion categories proposed by BIBREF1, BIBREF2, numerous models have provided and achieved fabulous precision so far. For example, DeepMoji BIBREF3 which utilized transfer learning concept to enhance emotions and sarcasm understanding behind the target sentence. CARER BIBREF4 learned contextualized affect representations to make itself more sensitive to rare words and the scenario behind the texts.", "id": 963, "question": "what were the baselines?", "title": "EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation"}, {"answers": ["", ""], "context": "EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues. The former comes from the scripts of the Friends TV sitcom. 
The other is made up of Facebook messenger chats. Each subset includes $1,000$ English dialogues, and each dialogue can be further divided into a few consecutive utterances. All the utterances are annotated by five annotators on a crowd-sourcing platform (Amazon Mechanical Turk), and the labeling work is only based on the textual content. Each annotator votes for one of the seven emotions, namely Ekman\u2019s six basic emotions BIBREF1, plus the neutral. If none of the emotions gets more than three votes, the utterance will be marked as \u201cnon-neutral\u201d.", "id": 964, "question": "what datasets were used?", "title": "EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation"}, {"answers": ["BERT-base, BERT-large, BERT-uncased, BERT-cased"], "context": "For this challenge, we adapt BERT, which is proposed by BIBREF5, to help understand the context at the same time. Technically, BERT, designed on an end-to-end architecture, is a deep pre-trained transformer encoder that dynamically provides language representations, and BERT has already achieved multiple state-of-the-art results on the GLUE benchmark BIBREF7 and many tasks. A quick recap of BERT's architecture and its pre-training tasks will be illustrated in the following subsections.", "id": 965, "question": "What BERT models are used?", "title": "EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation"}, {"answers": [""], "context": "BERT, the Bidirectional Encoder Representations from Transformers, consists of several transformer encoder layers that enable the model to extract very deep language features on both token-level and sentence-level. Each transformer encoder contains multi-head self-attention layers that provide the ability to learn multiple attention features of each word from their bidirectional context. The transformer and its self-attention mechanism are proposed by BIBREF8. This self-attention mechanism can be interpreted as a key-value mapping given a query. Given the embedding vector for the token input, the query ($Q$), key ($K$) and value ($V$) are produced by the projection from each of three parameter matrices where $W^Q \\in \\mathbb {R}^{d_{{\\rm model}} \\times d_{k}}, W^K \\in \\mathbb {R}^{d_{\\rm model} \\times d_{k}}$ and $W^V \\in \\mathbb {R}^{d_{\\rm model} \\times d_{v}}$. The self-attention BIBREF8 is formally represented as:", "id": 966, "question": "What are the sources of the datasets?", "title": "EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation"}, {"answers": [""], "context": "In pre-training, instead of using unidirectional language models, BERT developed two pre-training tasks: (1) Masked LM (cloze test) and (2) Next Sentence Prediction. In the first pre-training task, bidirectional language modeling can be done through this cloze-like pre-training. In detail, 15% of the tokens of the input sequence will be masked at random and the model needs to predict those masked tokens. The encoder will try to learn contextual representations from every given token due to masking tokens at random. The model will not know which part of the input is going to be masked, so the information of each masked token should be inferred from the remaining tokens. In Next Sentence Prediction, two sentences concatenated together are considered as the model input. In order to give the model a good natural language understanding, knowing the relationship between sentences is one of the important abilities. 
When generating input sequences, 50% of the time sentence B actually follows sentence A, and the rest 50% of the time sentence B will be picked randomly from the dataset, and the model needs to predict whether sentence B is the next sentence of sentence A. That is, the attention information will be shared between sentences. Such sentence-level understanding may be difficult to learn from the first pre-training task (Masked LM); therefore, the pre-training task (NSP) is developed as a second training goal to capture the cross-sentence relationship.", "id": 967, "question": "What labels does the dataset have?", "title": "EmotionX-IDEA: Emotion BERT -- an Affectional Model for Conversation"}, {"answers": ["", ""], "context": "Word embeddings BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 are unsupervised learning methods for capturing latent semantic structure in language. Word embedding methods analyze text data to learn distributed representations of the vocabulary that capture its co-occurrence statistics. These representations are useful for reasoning about word usage and meaning BIBREF7 , BIBREF8 . Word embeddings have also been extended to data beyond text BIBREF9 , BIBREF10 , such as items in a grocery store or neurons in the brain. efe is a probabilistic perspective on embeddings that encompasses many existing methods and opens the door to bringing expressive probabilistic modeling BIBREF11 , BIBREF12 to the problem of learning distributed representations.", "id": 968, "question": "Do they evaluate on English only datasets?", "title": "Structured Embedding Models for Grouped Data"}, {"answers": ["", "Calculate test log-likelihood on the three considered datasets"], "context": "In this section, we develop sefe, a model that builds on efe BIBREF10 to capture semantic variations across groups of data. In embedding models, we represent each object (e.g., a word in text, or an item in shopping data) using two sets of vectors, an embedding vector and a context vector. In this paper, we are interested in how the embeddings vary across groups of data, and for each object we want to learn a separate embedding vector for each group. Having a separate embedding for each group allows us to study how the usage of a word like intelligence varies across categories of the ArXiv, or which words are used most differently by U.S. Senators depending on which state they are from and whether they are Democrats or Republicans.", "id": 969, "question": "What experiments are used to demonstrate the benefits of this approach?", "title": "Structured Embedding Models for Grouped Data"}, {"answers": [""], "context": "In exponential family embeddings, we have a collection of objects, and our goal is to learn a vector representation of these objects based on their co-occurrence patterns.", "id": 970, "question": "What hierarchical modelling approach is used?", "title": "Structured Embedding Models for Grouped Data"}, {"answers": [""], "context": "Here, we describe the sefe model for grouped data. In text, some examples of grouped data are Congressional speeches grouped into political parties or scientific documents grouped by discipline. Our goal is to learn group-specific embeddings from data partitioned into INLINEFORM0 groups, i.e., each instance INLINEFORM1 is associated with a group INLINEFORM2 . 
The sefe model extends efe to learn a separate set of embedding vectors for each group.", "id": 971, "question": "How do co-purchase patterns vary across seasons?", "title": "Structured Embedding Models for Grouped Data"}, {"answers": [""], "context": "In this section, we describe the experimental study. We fit the sefe model on three datasets and compare it against the efe BIBREF10 . Our quantitative results show that sharing the context vectors provides better results, and that amortization and hierarchical structure give further improvements.", "id": 972, "question": "Which words are used differently across ArXiv?", "title": "Structured Embedding Models for Grouped Data"}, {"answers": [""], "context": "Headline generation is the process of creating a headline-style sentence given an input article. The research community has been regarding the task of headline generation as a summarization task BIBREF1, ignoring the fundamental differences between headlines and summaries. While summaries aim to contain most of the important information from the articles, headlines do not necessarily need to. Instead, a good headline needs to capture people's attention and serve as an irresistible invitation for users to read through the article. For example, the headline \u201c$2 Billion Worth of Free Media for Trump\u201d, which gives only an intriguing hint, is considered better than the summarization style headline \u201cMeasuring Trump\u2019s Media Dominance\u201d , as the former gets almost three times the readers as the latter. Generating headlines with many clicks is especially important in this digital age, because many of the revenues of journalism come from online advertisements and getting more user clicks means being more competitive in the market. However, most existing websites naively generate sensational headlines using only keywords or templates. Instead, this paper aims to learn a model that generates sensational headlines based on an input article without labeled data.", "id": 973, "question": "What is future work planed?", "title": "Clickbait? Sensational Headline Generation with Auto-tuned Reinforcement Learning"}, {"answers": [""], "context": "To evaluate the sensationalism intensity score $\\alpha _{\\text{sen}}$ of a headline, we collect a sensationalism dataset and then train a sensationalism scorer. For the sensationalism dataset collection, we choose headlines with many comments from popular online websites as positive samples. For the negative samples, we propose to use the generated headlines from a sentence summarization model. Intuitively, the summarization model, which is trained to preserve the semantic meaning, will lose the sensationalization ability and thus the generated negative samples will be less sensational than the original one, similar to the obfuscation of style after back-translation BIBREF4. For example, an original headline like UTF8gbsn\u201c\u4e00\u8d9f\u632310\u4e07\uff1f\u94c1\u603b\u589e\u5f00\u7533\u901a\u3001\u987a\u4e30\u4e13\u5217\" (One trip to earn 100 thousand? China Railway opens new Shentong and Shunfeng special lines) will become UTF8gbsn\u201c\u4e2d\u94c1\u603b\u5c06\u589e\u5f00\u4eac\u5e7f\u4e24\u5217\u5feb\u9012\u4e13\u5217\" (China Railway opens two special lines for express) from the baseline model, which loses the sensational phrases of UTF8gbsn\u201c\u4e00\u8d9f\u632310\u4e07\uff1f\" (One trip to earn 100 thousand?) . 
We then train the sensationalism scorer by classifying sensational and non-sensational headlines using a one-layer CNN with a binary cross entropy loss $L_{\\text{sen}}$. Firstly, 1-D convolution is used to extract word features from the input embeddings of a headline. This is followed by a ReLU activation layer and a max-pooling layer along the time dimension. All features from different channels are concatenated together and projected to the sensationalism score by adding another fully connected layer with sigmoid activation. Binary cross entropy is used to compute the loss $L_{\\text{sen}}$.", "id": 974, "question": "What is this method improvement over the best performing state-of-the-art?", "title": "Clickbait? Sensational Headline Generation with Auto-tuned Reinforcement Learning"}, {"answers": [""], "context": "For the CNN model, we choose filter sizes of 1, 3, and 5 respectively. Adam is used to optimize $L_{sen}$ with a learning rate of 0.0001. We set the embedding size as 300 and initialize it from qiu2018revisiting trained on the Weibo corpus with word and character features. We fix the embeddings during training. For dataset collection, we utilize the headlines collected in qin2018automatic, lin2019learning from Tencent News, one of the most popular Chinese news websites, as the positive samples. We follow the same data split as the original paper. As some of the links are not available any more, we get 170,754 training samples and 4,511 validation samples. For the negative training samples collection, we randomly select generated headlines from a pointer generator BIBREF0 model trained on LCSTS dataset BIBREF5 and create a balanced training corpus which includes 351,508 training samples and 9,022 validation samples. To evaluate our trained classifier, we construct a test set by randomly sampling 100 headlines from the test split of LCSTS dataset and the labels are obtained by 11 human annotators. Annotations show that 52% headlines are labeled as positive and 48% headlines as negative by majority voting (The detail on the annotation can be found in Section SECREF26).", "id": 975, "question": "Which baselines are used for evaluation?", "title": "Clickbait? Sensational Headline Generation with Auto-tuned Reinforcement Learning"}, {"answers": ["", ""], "context": "Our classifier achieves 0.65 accuracy and 0.65 averaged F1 score on the test set while a random classifier would only achieve 0.50 accuracy and 0.50 averaged F1 score. This confirms that the predicted sensationalism score can partially capture the sensationalism of headlines. On the other hand, a more natural choice is to take headlines with few comments as negative examples. Thus, we train another baseline classifier on a crawled balanced sensationalism corpus of 84k headlines where the positive headlines have at least 28 comments and the negative headlines have less than 5 comments. However, the results on the test set show that the baseline classifier gets 60% accuracy, which is worse than the proposed classifier (which achieves 65%). The reason could be that the balanced sensationalism corpus are sampled from different distributions from the test set and it is hard for the trained model to generalize. Therefore, we choose the proposed one as our sensationalism scorer. Therefore, our next challenge is to show that how to leverage this noisy sensationalism reward to generate sensational headlines.", "id": 976, "question": "Did they used dataset from another domain for evaluation?", "title": "Clickbait? 
Sensational Headline Generation with Auto-tuned Reinforcement Learning"}, {"answers": ["", ""], "context": "Our sensational headline generation model takes an article as input and output a sensational headline. The model consists of a Pointer-Gen headline generator and is trained by ARL. The diagram of ARL can be found in Figure FIGREF6.", "id": 977, "question": "How is sensationalism scorer trained?", "title": "Clickbait? Sensational Headline Generation with Auto-tuned Reinforcement Learning"}, {"answers": ["Based on table results provided changing directed to undirected edges had least impact - max abs difference of 0.33 points on all three datasets."], "context": "The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks (GNNs) BIBREF2, BIBREF3. However, GNNs have only recently started to be closely investigated, following the advent of deep learning. Some notable examples include BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. These approaches are known as spectral. Their similarity with message passing (MP) was observed by BIBREF9 and formalized by BIBREF13 and BIBREF14.", "id": 978, "question": "Which component is the least impactful?", "title": "Message Passing Attention Networks for Document Understanding"}, {"answers": ["Increasing number of message passing iterations showed consistent improvement in performance - around 1 point improvement compared between 1 and 4 iterations", ""], "context": "BIBREF13 proposed a MP framework under which many of the recently introduced GNNs can be reformulated. MP consists in an aggregation phase followed by a combination phase BIBREF14. More precisely, let $G(V,E)$ be a graph, and let us consider $v \\in V$. At time $t+1$, a message vector $\\mathbf {m}_v^{t+1}$ is computed from the representations of the neighbors $\\mathcal {N}(v)$ of $v$:", "id": 979, "question": "Which component has the greatest impact on performance?", "title": "Message Passing Attention Networks for Document Understanding"}, {"answers": [""], "context": "We represent a document as a statistical word co-occurrence network BIBREF18, BIBREF19 with a sliding window of size 2 overspanning sentences. Let us denote that graph $G(V,E)$. Each unique word in the preprocessed document is represented by a node in $G$, and an edge is added between two nodes if they are found together in at least one instantiation of the window. $G$ is directed and weighted: edge directions and weights respectively capture text flow and co-occurrence counts.", "id": 980, "question": "What is the state-of-the-art system?", "title": "Message Passing Attention Networks for Document Understanding"}, {"answers": ["", ""], "context": "We formulate our AGGREGATE function as:", "id": 981, "question": "Which datasets are used?", "title": "Message Passing Attention Networks for Document Understanding"}, {"answers": ["It is a framework used to describe algorithms for neural networks represented as graphs. Main idea is that that representation of each vertex is updated based on messages from its neighbors."], "context": "After passing messages and performing updates for $T$ iterations, we obtain a matrix $\\mathbf {H}^T \\in \\mathbb {R}^{n \\times d}$ containing the final vertex representations. 
Let $\\hat{G}$ be graph $G$ without the special document node, and matrix $\\mathbf {\\hat{H}}^T \\in \\mathbb {R}^{(n-1) \\times d}$ be the corresponding representation matrix (i.e., $\\mathbf {H}^T$ without the row of the document node).", "id": 982, "question": "What is the message passing framework?", "title": "Message Passing Attention Networks for Document Understanding"}, {"answers": ["", ""], "context": "Sarcasm is an intensive, indirect and complex construct that is often intended to express contempt or ridicule . Sarcasm, in speech, is multi-modal, involving tone, body-language and gestures along with linguistic artifacts used in speech. Sarcasm in text, on the other hand, is more restrictive when it comes to such non-linguistic modalities. This makes recognizing textual sarcasm more challenging for both humans and machines.", "id": 983, "question": "What other evaluation metrics are looked at?", "title": "Harnessing Cognitive Features for Sarcasm Detection"}, {"answers": ["Gaze Sarcasm using Multi Instance Logistic Regression.", ""], "context": "Sarcasm, in general, has been the focus of research for quite some time. In one of the pioneering works jorgensen1984test explained how sarcasm arises when a figurative meaning is used opposite to the literal meaning of the utterance. In the word of clark1984pretense, sarcasm processing involves canceling the indirectly negated message and replacing it with the implicated one. giora1995irony, on the other hand, define sarcasm as a mode of indirect negation that requires processing of both negated and implicated messages. ivanko2003context define sarcasm as a six tuple entity consisting of a speaker, a listener, Context, Utterance, Literal Proposition and Intended Proposition and study the cognitive aspects of sarcasm processing.", "id": 984, "question": "What is the best reported system?", "title": "Harnessing Cognitive Features for Sarcasm Detection"}, {"answers": [""], "context": "Sarcasm often emanates from incongruity BIBREF9 , which enforces the brain to reanalyze it BIBREF10 . This, in turn, affects the way eyes move through the text. Hence, distinctive eye-movement patterns may be observed in the case of successful processing of sarcasm in text in contrast to literal texts. This hypothesis forms the crux of our method for sarcasm detection and we validate this using our previously released freely available sarcasm dataset BIBREF8 enriched with gaze information.", "id": 985, "question": "What kind of stylistic features are obtained?", "title": "Harnessing Cognitive Features for Sarcasm Detection"}, {"answers": [""], "context": "The database consists of 1,000 short texts, each having 10-40 words. Out of these, 350 are sarcastic and are collected as follows: (a) 103 sentences are from two popular sarcastic quote websites, (b) 76 sarcastic short movie reviews are manually extracted from the Amazon Movie Corpus BIBREF11 by two linguists. (c) 171 tweets are downloaded using the hashtag #sarcasm from Twitter. The 650 non-sarcastic texts are either downloaded from Twitter or extracted from the Amazon Movie Review corpus. The sentences do not contain words/phrases that are highly topic or culture specific. The tweets were normalized to make them linguistically well formed to avoid difficulty in interpreting social media lingo. Every sentence in our dataset carries positive or negative opinion about specific \u201caspects\u201d. 
For example, the sentence \u201cThe movie is extremely well cast\u201d has positive sentiment about the aspect \u201ccast\u201d.", "id": 986, "question": "What traditional linguistics features did they use?", "title": "Harnessing Cognitive Features for Sarcasm Detection"}, {"answers": ["Readability (RED), Number of Words (LEN), Avg. Fixation Duration (FDUR), Avg. Fixation Count (FC), Avg. Saccade Length (SL), Regression Count (REG), Skip count (SKIP), Count of regressions from second half\nto first half of the sentence (RSF), Largest Regression Position (LREG), Edge density of the saliency gaze\ngraph (ED), Fixation Duration at Left/Source\n(F1H, F1S), Fixation Duration at Right/Target\n(F2H, F2S), Forward Saccade Word Count of\nSource (PSH, PSS), Forward SaccadeWord Count of Destination\n(PSDH, PSDS), Regressive Saccade Word Count of\nSource (RSH, RSS), Regressive Saccade Word Count of\nDestination (RSDH, RSDS)"], "context": "The task assigned to annotators was to read sentences one at a time and label them with binary labels indicating the polarity (i.e., positive/negative). Note that the participants were not instructed to annotate whether a sentence is sarcastic or not, to rule out the Priming Effect (i.e., if sarcasm is expected beforehand, processing incongruity becomes relatively easier BIBREF12 ). The setup ensures its \u201cecological validity\u201d in two ways: (1) Readers are not given any clue that they have to treat sarcasm with special attention. This is done by setting the task to polarity annotation (instead of sarcasm detection). (2) Sarcastic sentences are mixed with non sarcastic text, which does not give prior knowledge about whether the forthcoming text will be sarcastic or not.", "id": 987, "question": "What cognitive features are used?", "title": "Harnessing Cognitive Features for Sarcasm Detection"}, {"answers": ["", "Modeling considerations: the variables (both predictors and outcomes) are rarely simply binary or categorical; using a particular classification scheme means deciding which variations are visible,; Supervised and unsupervised learning are the most common approaches to learning from data; the unit of text that we are labeling (or annotating, or coding), either automatic or manual, can sometimes be different than one's final unit of analysis."], "context": "In June 2015, the operators of the online discussion site Reddit banned several communities under new anti-harassment rules. BIBREF0 used this opportunity to combine rich online data with computational methods to study a current question: Does eliminating these \u201cecho chambers\u201d diminish the amount of hate speech overall? Exciting opportunities like these, at the intersection of \u201cthick\u201d cultural and societal questions on the one hand, and the computational analysis of rich textual data on larger-than-human scales on the other, are becoming increasingly common.", "id": 988, "question": "What approaches do they use towards text analysis?", "title": "How we do things with words: Analyzing text as social and cultural data"}, {"answers": [""], "context": "We typically start by identifying the questions we wish to explore. Can text analysis provide a new perspective on a \u201cbig question\u201d that has been attracting interest for years? Or can we raise new questions that have only recently emerged, for example about social media? For social scientists working in computational analysis, the questions are often grounded in theory, asking: How can we explain what we observe? 
These questions are also influenced by the availability and accessibility of data sources. For example, the choice to work with data from a particular social media platform may be partly determined by the fact that it is freely available, and this will in turn shape the kinds of questions that can be asked. A key output of this phase are the concepts to measure, for example: influence; copying and reproduction; the creation of patterns of language use; hate speech. Computational analysis of text motivated by these questions is insight driven: we aim to describe a phenomenon or explain how it came about. For example, what can we learn about how and why hate speech is used or how this changes over time? Is hate speech one thing, or does it comprise multiple forms of expression? Is there a clear boundary between hate speech and other types of speech, and what features make it more or less ambiguous? In these cases, it is critical to communicate high-level patterns in terms that are recognizable.", "id": 989, "question": "What dataset do they use for analysis?", "title": "How we do things with words: Analyzing text as social and cultural data"}, {"answers": ["", ""], "context": "The next step involves deciding on the data sources, collecting and compiling the dataset, and inspecting its metadata.", "id": 990, "question": "Do they demonstrate why interdisciplinary insights are important?", "title": "How we do things with words: Analyzing text as social and cultural data"}, {"answers": [""], "context": "Many scholars in the humanities and the social sciences work with sources that are not available in digital form, and indeed may never be digitized. Others work with both analogue and digitized materials, and the increasing digitization of archives has opened opportunities to study these archives in new ways. We can go to the canonical archive or open up something that nobody has studied before. For example, we might focus on major historical moments (French Revolution, post-Milosevic Serbia) or critical epochs (Britain entering the Victorian era, the transition from Latin to proto-Romance). Or, we could look for records of how people conducted science, wrote and consumed literature, and worked out their philosophies.", "id": 991, "question": "What background do they have?", "title": "How we do things with words: Analyzing text as social and cultural data"}, {"answers": [""], "context": "After identifying the data source(s), the next step is compiling the data. This step is fundamental: if the sources cannot support a convincing result, no result will be convincing. In many cases, this involves defining a \u201ccore\" set of documents and a \u201ccomparison\" set. We often have a specific set of documents in mind: an author's work, a particular journal, a time period. But if we want to say that this \u201ccore\" set has some distinctive property, we need a \u201ccomparison\" set. Expanding the collection beyond the documents that we would immediately think of has the beneficial effect of increasing our sample size. Having more sources increases the chance that we will notice something consistent across many individually varying contexts.", "id": 992, "question": "What kind of issues (that are not on the forefront of computational text analysis) do they tackle?", "title": "How we do things with words: Analyzing text as social and cultural data"}]} \ No newline at end of file